Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it is just as useful for single-turn tasks without any conversation.

Price list: https://302.ai/pricing_api/
Request
Header Params
Content-Type
string
required
Example:
application/json
Accept
string
required
Example:
application/json
Authorization
string
required
Place the API key generated under API KEYS in the management dashboard after 'Bearer', e.g. 'Bearer sk-xxxx'
Example:
Bearer {{YOUR_API_KEY}}
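The three headers above can be assembled in a few lines. Below is a minimal Python sketch using only the standard library; the base URL and default model name are assumptions for illustration, so substitute the values from your management dashboard:

```python
import json
import urllib.request

# Hypothetical placeholders -- replace with your real key and the service's base URL.
API_KEY = "sk-xxxx"
BASE_URL = "https://api.302.ai/v1/chat/completions"  # assumed endpoint path

def build_headers(api_key: str) -> dict:
    """Assemble the three required headers: Content-Type, Accept, Authorization."""
    return {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Bearer {api_key}",
    }

def chat(messages: list, model: str = "gpt-3.5-turbo") -> dict:
    """Send a single chat completion request and return the parsed JSON response."""
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    req = urllib.request.Request(BASE_URL, data=payload,
                                 headers=build_headers(API_KEY), method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `chat([{"role": "user", "content": "Hello"}])` performs one network round trip and returns the response object shown at the end of this page.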
Body Params application/json
model
string
required
The ID of the model to use. For more details on which models are compatible with the Chat API, refer to the Model Endpoint Compatibility Table.
temperature
number
optional
The sampling temperature to use, between 0 and 2. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. We generally recommend altering this or top_p, but not both.
top_p
number
optional
An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
n
integer
optional
How many chat completion choices to generate for each input message.
stop
string or array
optional
Up to 4 sequences where the API will stop generating further tokens.
max_tokens
integer
optional
The maximum number of tokens to be generated for the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
presence_penalty
number
optional
A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the likelihood of the model discussing new topics. See more about frequency and presence penalties.
frequency_penalty
number
optional
A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text, reducing the likelihood of the model repeating the same line verbatim. See more about frequency and presence penalties.
logit_bias
object
optional
Modifies the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token IDs in the tokenizer) to an associated bias value ranging from -100 to 100. Mathematically, the bias is added to the logits generated by the model before sampling. The exact effect varies by model, but values between -1 and 1 should decrease or increase the likelihood of selection, while values like -100 or 100 should result in the relevant tokens being either prohibited or exclusively selected.
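A bias map can be built as a plain JSON object whose keys are token IDs (as strings) and whose values are clamped to the documented -100 to 100 range. The token IDs below are hypothetical; obtain real IDs from the model's tokenizer (e.g. the tiktoken library for OpenAI models):

```python
def make_logit_bias(token_ids_to_bias: dict) -> dict:
    """Build a logit_bias object, clamping values to the documented [-100, 100] range.

    JSON object keys must be strings, so token IDs are stringified here.
    """
    return {str(tid): max(-100, min(100, bias))
            for tid, bias in token_ids_to_bias.items()}

# Hypothetical token IDs: ban token 50256 outright, gently encourage token 1234.
bias = make_logit_bias({50256: -250, 1234: 1})
# bias == {"50256": -100, "1234": 1}
```

The resulting dict is passed as the `logit_bias` field of the request body.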
user
string
optional
A unique identifier for your end user, which can help OpenAI monitor and detect abuse. Learn more.
Response

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "\n\nHello there, how may I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
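The assistant's reply and the token accounting can be pulled out of the response body with a few lines of Python; this sketch parses the sample response shown above:

```python
import json

# The sample response body shown above, as returned by the API.
raw = ('{"id":"chatcmpl-123","object":"chat.completion","created":1677652288,'
       '"choices":[{"index":0,"message":{"role":"assistant",'
       '"content":"\\n\\nHello there, how may I assist you today?"},'
       '"finish_reason":"stop"}],"usage":{"prompt_tokens":9,'
       '"completion_tokens":12,"total_tokens":21}}')

resp = json.loads(raw)

# The generated message lives under choices[i].message; with n > 1 there
# would be one choice per requested completion.
reply = resp["choices"][0]["message"]["content"].strip()
total = resp["usage"]["total_tokens"]
# reply == "Hello there, how may I assist you today?"
# total == 21  (prompt_tokens 9 + completion_tokens 12)
```

Note that `usage.total_tokens` (input plus generated tokens) is what counts against the model's context length, as described under max_tokens above.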