Supported models: o1, o1-mini, o1-preview

Note that these models do not support stream mode, nor do they support parameters such as temperature, top_p, n, presence_penalty, or frequency_penalty. We have made them compatible on the server side, but passing these parameters has no effect.

Token calculation method:
Input: prompt_tokens
Output: completion_tokens + reasoning_tokens

OpenAI Guide / OpenAI API Reference

Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it is just as useful for single-turn tasks without any conversation.

Price list: https://302.ai/pricing_api/
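Since the unsupported parameters are simply ignored server-side, a client can strip them before sending the request. The sketch below is a minimal illustration of that idea; the `build_o1_payload` helper and `max_completion_tokens` usage are hypothetical conveniences, not part of this API's documented surface, though the parameter names follow the OpenAI Chat Completions request format.

```python
# Parameters that the o1 models do not support; passing them has no effect,
# so a client may drop them before building the request body.
UNSUPPORTED_FOR_O1 = {
    "stream", "temperature", "top_p", "n",
    "presence_penalty", "frequency_penalty",
}

def build_o1_payload(model, messages, **params):
    """Build a chat completion request body, dropping ignored parameters.

    Hypothetical helper for illustration only.
    """
    clean = {k: v for k, v in params.items() if k not in UNSUPPORTED_FOR_O1}
    return {"model": model, "messages": messages, **clean}

payload = build_o1_payload(
    "o1-mini",
    [{"role": "user", "content": "Hello!"}],
    temperature=0.7,            # ignored by o1 models, stripped here
    max_completion_tokens=256,  # assumed supported; kept as-is
)
print(payload["model"])  # → o1-mini
```

The resulting dictionary can then be serialized as the JSON body of a chat completion request.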
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "\n\nHello there, how may I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
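The sample response above can be parsed to read the assistant's reply and the usage figures. The sketch below applies the token calculation described earlier (input billed as prompt_tokens, output as completion_tokens plus reasoning_tokens); this particular sample carries no reasoning_tokens field, so the code defaults it to zero.

```python
import json

# The sample response from the documentation, as a JSON string.
sample = (
    '{"id":"chatcmpl-123","object":"chat.completion","created":1677652288,'
    '"choices":[{"index":0,"message":{"role":"assistant",'
    '"content":"\\n\\nHello there, how may I assist you today?"},'
    '"finish_reason":"stop"}],'
    '"usage":{"prompt_tokens":9,"completion_tokens":12,"total_tokens":21}}'
)

resp = json.loads(sample)

# The model-generated message lives under choices[0].message.content.
reply = resp["choices"][0]["message"]["content"].strip()

# Billed tokens per the stated method: input = prompt_tokens,
# output = completion_tokens + reasoning_tokens (absent here, default 0).
usage = resp["usage"]
input_tokens = usage["prompt_tokens"]
output_tokens = usage["completion_tokens"] + usage.get("reasoning_tokens", 0)

print(reply)                         # → Hello there, how may I assist you today?
print(input_tokens, output_tokens)   # → 9 12
```

For o1-family responses that do include reasoning_tokens in the usage object, the same expression adds them to the billed output automatically.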