302.AI currently supports the following models:

claude-3-haiku-20240307
claude-3-opus-20240229
claude-3-5-haiku-20241022
claude-3-5-sonnet-20240620
claude-3-5-sonnet-20241022
claude-3-7-sonnet-20250219
claude-3-7-sonnet-latest

Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it is just as useful for single-turn tasks without any conversation.

Price list: https://302.ai/pricing_api/
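A minimal request sketch in Python, assuming the OpenAI-compatible endpoint https://api.302.ai/v1/chat/completions (confirm the exact URL in your 302.AI dashboard if it differs; the model name and API key below are placeholders):

    # Minimal non-streaming chat completion request (sketch).
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder

    response = requests.post(
        "https://api.302.ai/v1/chat/completions",  # assumed endpoint
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        json={
            "model": "claude-3-5-sonnet-20241022",
            "messages": [{"role": "user", "content": "Hello!"}],
            "max_tokens": 1024,
        },
    )
    print(response.json())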
Request
Header Params
Content-Type
string
required
Example:
application/json
Accept
string
required
Example:
application/json
Authorization
string
optional
Example:
Bearer {{YOUR_API_KEY}}
Body Params application/json
model
string
required
The ID of the model to be used. For detailed information on which models are applicable to the chat API, please view Model endpoint compatibility
messages
array [object {2}]
required
The messages to generate chat completions for, in the chat format (see the request sketch above).
role
string
optional
content
string
optional
max_tokens
integer
required
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
temperature
number
optional
What sampling temperature to use, ranging from 0 to 2. Higher values, such as 0.8, will make the output more random, while lower values, like 0.2, will make it more focused and deterministic. We generally recommend adjusting either this or top_p, but not both simultaneously.
top_p
number
optional
An alternative to temperature sampling is nucleus sampling, where the model considers tokens within the top_p probability mass. For instance, top_p = 0.1 means only tokens within the top 10% probability mass are considered. We recommend adjusting either this or temperature, but not both simultaneously.
top_k
integer
required
An alternative to temperature sampling: only sample from the top K most likely tokens at each step.
stream
boolean
optional
If set, partial message deltas will be sent, as in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. For example code, please view the OpenAI Cookbook, or see the streaming sketch after this parameter list.
stop_sequences
array [string]
optional
Up to 4 sequences where the API will stop generating further tokens.
user
string
optional
A unique identifier representing your end users, which helps monitor and detect abuse. View more
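A streaming sketch, assuming the same endpoint as above and the SSE format described under stream (each event arrives as a "data: " line and the stream ends with data: [DONE]; the delta shape below is the OpenAI-compatible one and is an assumption):

    # Minimal streaming request (sketch); requires Python 3.9+ for removeprefix.
    import json
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder

    with requests.post(
        "https://api.302.ai/v1/chat/completions",  # assumed endpoint
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        json={
            "model": "claude-3-5-sonnet-20241022",
            "messages": [{"role": "user", "content": "Hello!"}],
            "max_tokens": 1024,
            "stream": True,
        },
        stream=True,
    ) as response:
        for line in response.iter_lines():
            if not line:
                continue
            payload = line.decode("utf-8").removeprefix("data: ")
            if payload == "[DONE]":
                break  # stream terminator
            chunk = json.loads(payload)
            # Partial content arrives under choices[0].delta (assumed
            # OpenAI-compatible shape).
            delta = chunk["choices"][0].get("delta", {})
            print(delta.get("content", ""), end="", flush=True)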
{"id":"chatcmpl-123","object":"chat.completion","created":1677652288,"choices":[{"index":0,"message":{"role":"assistant","content":"\n\nHello there, how may I assist you today?"},"finish_reason":"stop"}],"usage":{"prompt_tokens":9,"completion_tokens":12,"total_tokens":21}}