Chat (o1 Series Model)
POST
/chat/completions

Supported models:
o1-mini
o1-preview
Note that these models do not support stream mode, nor do they support setting parameters such as temperature, top_p, n, presence_penalty, or frequency_penalty. The server accepts these parameters for compatibility, but passing them has no effect.
Token calculation method:
Input: prompt_tokens
Output: completion_tokens + reasoning_tokens
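As a sketch of this billing formula (the token counts below are illustrative, not taken from a real response):

```python
# Illustrative usage figures; real values come from the "usage" object
# of an API response.
prompt_tokens = 9          # input tokens, billed as input
completion_tokens = 12     # visible output tokens
reasoning_tokens = 128     # hidden reasoning tokens (illustrative figure)

# Per the formula above: input and output are billed separately,
# and output includes the hidden reasoning tokens.
billed_input = prompt_tokens
billed_output = completion_tokens + reasoning_tokens

print(billed_input, billed_output)  # → 9 140
```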
OpenAI Guide
OpenAI API Reference
Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it's just as useful for single-turn tasks without any conversation.
Price List: https://302.ai/pricing_api/
Request Parameters
model: The ID of the model to use. For detailed information on which models are applicable to the chat API, please view Model endpoint compatibility
messages: A list of messages in chat format to generate chat completions for.
stop: Up to 4 sequences where the API will stop generating further tokens.
max_tokens: The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
user: A unique identifier representing your end user, which helps OpenAI monitor and detect abuse. View more
{
  "model": "o1-mini",
  "stream": false,
  "messages": [
    {
      "role": "user",
      "content": "hello"
    }
  ]
}
Example Code
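The request body above can be sent with the Python standard library. This is a minimal sketch: the base URL is illustrative and the API key placeholder must be replaced with credentials from your own account.

```python
import json
import os
import urllib.request

# Assumptions: the base URL below is illustrative; use the endpoint and
# API key from your own 302.AI account.
API_BASE = "https://api.302.ai/v1"                    # hypothetical base URL
API_KEY = os.environ.get("API_KEY", "sk-your-key")    # replace with your key


def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST /chat/completions request for an o1 series model."""
    payload = {
        "model": "o1-mini",
        "stream": False,  # o1 models do not support streaming
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )


def chat(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `chat("hello")` sends the request and returns the assistant's reply as a string.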
Responses
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "\n\nHello there, how may I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
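The response fields above can be read like this (a sketch that reuses the sample response as a Python dict):

```python
# The sample response from the documentation above, as a Python dict.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "\n\nHello there, how may I assist you today?",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}

# The reply text, the reason generation stopped, and the token usage.
reply = response["choices"][0]["message"]["content"]
finish_reason = response["choices"][0]["finish_reason"]
total = response["usage"]["total_tokens"]

print(finish_reason, total)  # → stop 21
```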