Supported models: o1, o1-mini, o1-preview

Note that these models do not support stream mode, nor do they support parameters such as temperature, top_p, n, presence_penalty, or frequency_penalty. We have made them compatible on the server side, so passing these parameters is accepted but has no effect.

Token calculation:
Input: prompt_tokens
Output: completion_tokens + reasoning_tokens

See also: OpenAI Guide, OpenAI API Reference

Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it is just as useful for single-turn tasks without any conversation.

Price list: https://302.ai/pricing_api/
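The token-accounting rule above can be sketched as follows. This is a minimal sketch, assuming the API reports `reasoning_tokens` alongside the other counts in the `usage` object; if it is absent, it counts as 0:

```python
# Billed tokens for the o1-family models, per the rule above:
#   input  = prompt_tokens
#   output = completion_tokens + reasoning_tokens
def billed_tokens(usage: dict) -> dict:
    """Compute billed input/output tokens from a usage object.

    `reasoning_tokens` is assumed to appear in the usage payload for
    reasoning models; it defaults to 0 when not reported.
    """
    return {
        "input": usage["prompt_tokens"],
        "output": usage["completion_tokens"] + usage.get("reasoning_tokens", 0),
    }

usage = {"prompt_tokens": 9, "completion_tokens": 12, "reasoning_tokens": 30}
print(billed_tokens(usage))  # {'input': 9, 'output': 42}
```

Note that output billing can exceed `completion_tokens` alone, since reasoning tokens are generated internally before the visible reply.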
Request
Header Params
Content-Type
string
required
Example:
application/json
Accept
string
required
Example:
application/json
Authorization
string
optional
Example:
Bearer {{YOUR_API_KEY}}
Body Params application/json
model
string
required
The ID of the model to be used. For detailed information on which models are applicable to the chat API, please view Model endpoint compatibility
messages
array [object {2}]
required
The messages to generate a chat completion for, in the chat format.
role
string
optional
content
string
optional
stop
string
optional
Up to 4 sequences where the API will stop generating further tokens.
max_tokens
integer
optional
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
user
string
optional
A unique identifier representing your end user, which helps OpenAI monitor and detect abuse.
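Putting the header and body parameters above together, a minimal request might be assembled as follows. This is a sketch, not a definitive client: the endpoint URL is an assumption for an OpenAI-compatible chat API, and `YOUR_API_KEY` is a placeholder:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; substitute your own key
# Endpoint URL is an assumption for an OpenAI-compatible chat API.
URL = "https://api.302.ai/v1/chat/completions"

# Header params from the table above.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

# Body params from the table above.
body = {
    "model": "o1-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 256,  # input tokens + generated tokens must fit the context length
    "stop": "\n\n",     # up to 4 sequences where generation stops
    "user": "user-1234",
}

request = urllib.request.Request(
    URL, data=json.dumps(body).encode("utf-8"), headers=headers, method="POST"
)
# response = urllib.request.urlopen(request)  # uncomment to actually send the request
```

Remember that stream mode and sampling parameters like temperature are silently ignored for these models, so they are omitted from the body.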
Response
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "\n\nHello there, how may I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
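A minimal sketch of reading the response above: the assistant's reply lives in the first choice's message, and the usage counts are consistent with the token rule stated earlier (no `reasoning_tokens` is reported in this example, so it counts as 0):

```python
import json

# The example response from this document, verbatim.
raw = '{"id":"chatcmpl-123","object":"chat.completion","created":1677652288,"choices":[{"index":0,"message":{"role":"assistant","content":"\\n\\nHello there, how may I assist you today?"},"finish_reason":"stop"}],"usage":{"prompt_tokens":9,"completion_tokens":12,"total_tokens":21}}'
response = json.loads(raw)

# The model's reply is in the first choice's message.
reply = response["choices"][0]["message"]["content"]
print(reply.strip())  # Hello there, how may I assist you today?

# Sanity-check the usage accounting.
usage = response["usage"]
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]
```

Check `finish_reason` in real code: "stop" means generation ended naturally or hit a stop sequence, while "length" would mean the `max_tokens` limit was reached.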