# 302.AI API Document

## Docs

- Large Language Model [API Migration Guide](https://doc-en.302.ai/doc-5030894.md)
- Help Center [HTTP Status Codes](https://doc-en.302.ai/doc-5030895.md)
- Help Center [List of supported languages for image translation](https://doc-en.302.ai/doc-5213777.md)

## API Docs

- Large Language Model > Exclusive Feature > Search Online [Chat(Search online)](https://doc-en.302.ai/api-273308610.md): Add internet search capability to all models. There are two ways to enable this feature; choose either one.
- Large Language Model > Exclusive Feature > Depth-First Search [Chat(Depth-First Search)](https://doc-en.302.ai/api-270155422.md): Enhance deep search capabilities for all models.
- Large Language Model > Exclusive Feature > Image Analysis [Chat(Image analysis)](https://doc-en.302.ai/api-260156326.md): Add image recognition capabilities to all models. There are two ways to enable it; choose either one.
- Large Language Model > Exclusive Feature > Reasoning mode [Chat(Reasoning mode)](https://doc-en.302.ai/api-266354843.md): Add reasoning ability to all models by appending the suffix -r1-fusion to the model name (see the sketch below).
- Large Language Model > Exclusive Feature > Link Parsing [Chat(Link Parsing)](https://doc-en.302.ai/api-272503066.md): Add webpage/file parsing capabilities to all models, with two available methods; choose either one.
- Large Language Model > Exclusive Feature > Tool Invocation [Chat(tool invocation)](https://doc-en.302.ai/api-278668549.md): The 302 platform has added tool invocation capabilities (commonly known as Function Call) to all models.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > User Management [Create User](https://doc-en.302.ai/api-255988308.md): In the Memobase system, create a new user.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > User Management [Get User](https://doc-en.302.ai/api-255988309.md): In the Memobase system, obtain a user's information.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > User Management [Update User](https://doc-en.302.ai/api-255988310.md): In the Memobase system, update a user's information.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > User Management [Delete User](https://doc-en.302.ai/api-255988311.md): In the Memobase system, delete an existing user.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > Data Management [Insert Data](https://doc-en.302.ai/api-255988312.md): In the Memobase system, insert short-term memory data for a user.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > Data Management [Get Datas](https://doc-en.302.ai/api-255988313.md): In the Memobase system, obtain a user's short-term memory data list.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > Data Management [Get Data](https://doc-en.302.ai/api-255988314.md): In the Memobase system, obtain a single item of a user's short-term memory data.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > Data Management [Delete Data](https://doc-en.302.ai/api-255988315.md): In the Memobase system, delete a user's short-term memory data.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > Memory Management [Flush Buffer (Generate Memory)](https://doc-en.302.ai/api-255988316.md): In the Memobase system, consolidate a user's short-term memory cache into long-term memory.
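A minimal request sketch for the Reasoning mode entry above. Assumptions flagged here and in the comments: the OpenAI-compatible endpoint `https://api.302.ai/v1/chat/completions`, the example base model `gpt-4o`, and the key placeholder; only the `-r1-fusion` suffix itself comes from the entry.

```python
# Hedged sketch: reasoning mode via the -r1-fusion model-name suffix.
import requests

resp = requests.post(
    "https://api.302.ai/v1/chat/completions",  # assumed OpenAI-compatible endpoint
    headers={"Authorization": "Bearer YOUR_302_API_KEY"},  # placeholder key
    json={
        "model": "gpt-4o-r1-fusion",  # any model name + -r1-fusion enables reasoning
        "messages": [{"role": "user", "content": "How many primes are there below 100?"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```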
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > Memory Management [Get User Profile (Get Memory)](https://doc-en.302.ai/api-255988317.md): In the Memobase system, retrieve the corresponding user's memory.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) > Memobase > Memory Management [Delete User Profile (Delete Memory)](https://doc-en.302.ai/api-255988318.md): In the Memobase system, delete the corresponding user's memory.
- Large Language Model > Exclusive Feature > Long-term memory (Beta) [Chat (Long-term Memory)](https://doc-en.302.ai/api-255482146.md): To add long-term memory functionality to any large model, simply include a `userid` parameter (see the sketch below).
- Large Language Model > Exclusive Feature > Simplified API [Chat (Simplified API)](https://doc-en.302.ai/api-207705101.md): 302.AI's simplified API: you only need to pass the model and message to get the output.
- Large Language Model > Model Support [Models (List models)](https://doc-en.302.ai/api-261684084.md): List the currently available models and the price of each model.
- Large Language Model > Model Support [Status(Model Status)](https://doc-en.302.ai/api-284770652.md): We monitor the first-token response time for some models; you can check model service availability through this interface.
- Large Language Model > OpenAI [Chat(Talk)](https://doc-en.302.ai/api-207705102.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Large Language Model > OpenAI [Chat(Streamed return)](https://doc-en.302.ai/api-239842865.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Large Language Model > OpenAI [Chat (gpt-4o Image Analysis)](https://doc-en.302.ai/api-207705107.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Large Language Model > OpenAI [Chat (gpt-4o Structured Output)](https://doc-en.302.ai/api-207705108.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Large Language Model > OpenAI [Chat (gpt-4o function call)](https://doc-en.302.ai/api-216495993.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Large Language Model > OpenAI [Chat (gpt-4-plus image analysis)](https://doc-en.302.ai/api-207705105.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Large Language Model > OpenAI [Chat (gpt-4-plus image generation)](https://doc-en.302.ai/api-207705106.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Large Language Model > OpenAI [Chat(gpt-4o-image-generation modify image)](https://doc-en.302.ai/api-282181336.md): Model name: gpt-4o-image-generation.
- Large Language Model > OpenAI [Chat (gpts model)](https://doc-en.302.ai/api-207705103.md): GPTs are customizable gpt-4 models launched by OpenAI, allowing users to design their own AI assistants for various scenarios such as paper search, translation, code completion, and image generation.
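A minimal sketch for the Long-term Memory entry above: a normal chat call gains memory once it carries a `userid`. The endpoint, the example model, and the top-level placement of `userid` are assumptions; check the linked page for the exact field placement.

```python
# Hedged sketch: long-term memory by adding a userid parameter to a chat call.
import requests

resp = requests.post(
    "https://api.302.ai/v1/chat/completions",  # assumed OpenAI-compatible endpoint
    headers={"Authorization": "Bearer YOUR_302_API_KEY"},  # placeholder key
    json={
        "model": "gpt-4o",      # example model (assumption)
        "userid": "user-123",   # ties the conversation to a Memobase user
        "messages": [{"role": "user", "content": "Remember that I prefer concise answers."}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```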
- Large Language Model > OpenAI [Chat (chatgpt-4o-latest)](https://doc-en.302.ai/api-207705104.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Large Language Model > OpenAI [Chat (o1 Series Model)](https://doc-en.302.ai/api-216495992.md): Supports the models below.
- Large Language Model > OpenAI [Chat (o3 Series Model)](https://doc-en.302.ai/api-258254135.md): Supports o3-mini and o3-mini-2025-01-31.
- Large Language Model > OpenAI [Chat(gpt-4o audio model)](https://doc-en.302.ai/api-225241341.md): This example demonstrates how to use the gpt-4o-audio-preview model.
- Large Language Model > Anthropic [Chat(Talk)](https://doc-en.302.ai/api-207705109.md): 302.AI currently supports:
- Large Language Model > Anthropic [Chat(Analyze image)](https://doc-en.302.ai/api-207705110.md): 302.AI currently supports:
- Large Language Model > Anthropic [Chat(Function Call)](https://doc-en.302.ai/api-219124319.md): 302.AI currently supports:
- Large Language Model > Anthropic [Messages(Original format)](https://doc-en.302.ai/api-222603718.md): 302.AI currently supports:
- Large Language Model > Anthropic [Messages(Function Call)](https://doc-en.302.ai/api-226294287.md): 302.AI currently supports:
- Large Language Model > Anthropic [Messages(Thinking mode)](https://doc-en.302.ai/api-264179581.md): claude-3-7-sonnet-20250219 now supports thinking mode, which can be enabled via parameters.
- Large Language Model > Anthropic [Messages(128k output)](https://doc-en.302.ai/api-264179582.md): claude-3-7-sonnet-20250219 now supports outputs of up to 128k tokens, more than 15 times longer than other Claude models. This extended output is especially effective for extended-thinking use cases involving complex reasoning, rich code generation, and comprehensive content creation.
- Large Language Model > Gemini [Chat(Talk)](https://doc-en.302.ai/api-207705111.md): **Google's latest Gemini-1.5-Pro version**
- Large Language Model > Gemini [Chat(Analyze image)](https://doc-en.302.ai/api-224540599.md): **Google's latest gemini-1.5-pro version**
- Large Language Model > Gemini [Chat(Image Generation)](https://doc-en.302.ai/api-274416667.md): Google's latest Gemini image generation model supports text-to-image and image-to-image generation. Simply pass the image link through image_url.
- Large Language Model > China Model [Chat (Baidu ERNIE)](https://doc-en.302.ai/api-207705112.md): The latest AI model from Baidu.
- Large Language Model > China Model [Chat (Tongyi Qianwen)](https://doc-en.302.ai/api-207705113.md): Alibaba's latest AI model.
- Large Language Model > China Model [Chat (Tongyi Qianwen-VL)](https://doc-en.302.ai/api-207705114.md): Alibaba's latest AI model, multimodal.
- Large Language Model > China Model [Chat(Tongyi Qianwen-OCR)](https://doc-en.302.ai/api-242312995.md): The latest AI model from Alibaba: a large OCR model trained on Qwen-VL that unifies various text-image recognition, analysis, and processing tasks into a single model, providing powerful text-image recognition capabilities.
- Large Language Model > China Model [Chat (Zhipu GLM-4)](https://doc-en.302.ai/api-207705115.md): Zhipu AI's latest AI model, from Tsinghua University.
- Large Language Model > China Model [Chat (Zhipu GLM-4V)](https://doc-en.302.ai/api-207705116.md): Zhipu AI's latest image recognition AI model, from Tsinghua University.
- Large Language Model > China Model [Chat (Baichuan AI)](https://doc-en.302.ai/api-207705117.md): Baichuan AI's model, from Sogou founder Wang Xiaochuan.
- Large Language Model > China Model [Chat (Moonshot AI)](https://doc-en.302.ai/api-207705118.md): Moonshot's latest AI model, also used by the Kimi application.
- Large Language Model > China Model [Chat (Moonshot AI-Vision)](https://doc-en.302.ai/api-255026507.md): Moonshot's latest AI model, also used by the Kimi application.
- Large Language Model > China Model [Chat (01.AI)](https://doc-en.302.ai/api-207705119.md): Latest AI model from 01.AI, founded by former Google vice president Kai-Fu Lee.
- Large Language Model > China Model [Chat (01.AI-VL)](https://doc-en.302.ai/api-207705120.md): Latest AI model from 01.AI, founded by former Google vice president Kai-Fu Lee.
- Large Language Model > China Model [Chat (DeepSeek)](https://doc-en.302.ai/api-207705121.md): From High-Flyer, the well-known quantitative fund giant.
- Large Language Model > China Model [Chat (DeepSeek-VL2)](https://doc-en.302.ai/api-246110346.md): DeepSeek's latest AI model is currently the most affordable domestically produced large language model, with prices as low as 1 RMB per 1M input tokens and 2 RMB per 1M output tokens. It is highly suitable for translation tasks and comes from the renowned quantitative fund High-Flyer.
- Large Language Model > China Model [Chat (ByteDance Doubao)](https://doc-en.302.ai/api-207705122.md): **ByteDance Doubao's latest AI model**
- Large Language Model > China Model [Chat (ByteDance Doubao-Vision)](https://doc-en.302.ai/api-240583755.md): ByteDance Doubao's latest image recognition model.
- Large Language Model > China Model [Chat(ByteDance Doubao Image Generation)](https://doc-en.302.ai/api-275421475.md): We have adapted Doubao's general_v2.1_L and seededit APIs to OpenAI's format, enabling image generation and image editing.
- Large Language Model > China Model [Chat (Stepfun)](https://doc-en.302.ai/api-246448203.md): Stepfun's latest AI model.
- Large Language Model > China Model [Chat (Stepfun Multimodal)](https://doc-en.302.ai/api-207705123.md): **Stepfun's latest AI model**
- Large Language Model > China Model [Chat (iFLYTEK Spark)](https://doc-en.302.ai/api-207705124.md): **iFLYTEK Spark's latest AI model**
- Large Language Model > China Model [Chat (SenseTime)](https://doc-en.302.ai/api-207705125.md): **Latest AI model from SenseTime**
- Large Language Model > China Model [Chat(Minimax)](https://doc-en.302.ai/api-240583947.md): Minimax's latest AI model.
- Large Language Model > China Model [Chat (Tencent Hunyuan)](https://doc-en.302.ai/api-207705127.md): **Tencent Hunyuan large-scale model**
- Large Language Model > SiliconFlow [Chat(SiliconFlow)](https://doc-en.302.ai/api-252564719.md): SiliconFlow has officially partnered with 302.AI to provide open-source model capabilities to all 302 users.
- Large Language Model > Open Source Model [Chat(LLaMA3.3)](https://doc-en.302.ai/api-207705128.md): Meta's latest open-source model, reportedly surpassing the previous generation 405B.
- Large Language Model > Open Source Model [Chat(LLaMA3.2 multimodal)](https://doc-en.302.ai/api-219126691.md): **Meta's latest open source model**
- Large Language Model > Open Source Model [Chat(LLaMA3.1)](https://doc-en.302.ai/api-207705129.md): **Meta's latest open source model**
- Large Language Model > Open Source Model [Chat(Mixtral-8x7B)](https://doc-en.302.ai/api-207705130.md): Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it's just as useful for single-turn tasks without any conversation.
- Large Language Model > Open Source Model [Chat(Mistral-Large-2411)](https://doc-en.302.ai/api-235469484.md): Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it's just as useful for single-turn tasks without any conversation.
- Large Language Model > Open Source Model [Chat(Mistral-small-2503)](https://doc-en.302.ai/api-273448842.md): Mistral-small-2503, a multimodal small model from Mistral.
- Large Language Model > Open Source Model [Chat(Pixtral-Large-2411 multimodal)](https://doc-en.302.ai/api-235464754.md): This example demonstrates how to use the pixtral-large-2411 model to analyze images.
- Large Language Model > Open Source Model [Chat(Gemma-7B, Gemma-3-27b-it)](https://doc-en.302.ai/api-207705132.md): Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it's just as useful for single-turn tasks without any conversation.
- Large Language Model > Open Source Model [Chat(Gemma2-9B)](https://doc-en.302.ai/api-207705135.md): Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it's just as useful for single-turn tasks without any conversation.
- Large Language Model > Open Source Model [Chat(Command R+)](https://doc-en.302.ai/api-207705133.md): Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it's just as useful for single-turn tasks without any conversation.
- Large Language Model > Open Source Model [Chat(Qwen2)](https://doc-en.302.ai/api-207705134.md): Chat models take a series of messages as input and return a model-generated message as output. While the chat format is designed to make multi-turn conversations easy, it's just as useful for single-turn tasks without any conversation.
- Large Language Model > Open Source Model [Chat(Qwen2.5)](https://doc-en.302.ai/api-217022578.md): **Alibaba's latest open source model**
- Large Language Model > Open Source Model [Chat(Qwen2.5-VL)](https://doc-en.302.ai/api-263728248.md): **Alibaba's latest open source model**
- Large Language Model > Open Source Model [Chat(Llama-3.1-nemotron)](https://doc-en.302.ai/api-224540845.md): Nvidia's fine-tuned model, built on Llama-3.1, ranks just behind o1 in performance scores.
- Large Language Model > Open Source Model [Chat(QwQ-32B, QwQ-Plus, QwQ-32B-Preview)](https://doc-en.302.ai/api-239086885.md): Alibaba's latest open-source model.
- Large Language Model > Expert Model [Chat(WiseDiag Medical Model)](https://doc-en.302.ai/api-261594739.md): Model from Zhidia Technology: https://wisediag.com/
- Large Language Model > Expert Model [Chat (ChatLaw Legal Model)](https://doc-en.302.ai/api-207705136.md): Peking University's legal model.
- Large Language Model > Expert Model [Chat (Xuanyuan Financial Model)](https://doc-en.302.ai/api-207705137.md): Xuanyuan financial model.
- Large Language Model > Expert Model [Chat (Farui Legal Model)](https://doc-en.302.ai/api-207705138.md): Farui legal model.
- Large Language Model > Expert Model [Chat (Alibaba Math Model)](https://doc-en.302.ai/api-207705139.md): **Alibaba math model**
- Large Language Model > Expert Model [Chat(Perplexity search)](https://doc-en.302.ai/api-214715975.md): Supported models:
- Large Language Model > Other Models [Chat(grok-3)](https://doc-en.302.ai/api-263685475.md): The latest model trained by Elon Musk's xAI.
- Large Language Model > Other Models [Chat(grok-2)](https://doc-en.302.ai/api-224540600.md): The newest model developed by Elon Musk's xAI.
- Large Language Model > Other Models [Chat(grok-2-vision)](https://doc-en.302.ai/api-246122225.md): The newest model developed by Elon Musk's xAI.
- Large Language Model > Other Models [Chat(Nova)](https://doc-en.302.ai/api-242303358.md): The newest model developed by Amazon.
- Image Generation > DALL.E [Generations(DALL·E 3 and DALL·E 2)](https://doc-en.302.ai/api-207705140.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Image Generation > DALL.E [Edits(DALL·E 2)](https://doc-en.302.ai/api-207705142.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Image Generation > DALL.E [Variations(DALL·E 2)](https://doc-en.302.ai/api-207705141.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
- Image Generation > Stability.ai [Text-to-image (Image Generation-V1)](https://doc-en.302.ai/api-207705147.md): **Image creation using AI**
- Image Generation > Stability.ai [Generate (Image Generation-SD2)](https://doc-en.302.ai/api-207705143.md): **Image creation using AI**
- Image Generation > Stability.ai [Generate (Image Generation-SD3-Ultra)](https://doc-en.302.ai/api-207705146.md): **Image creation using AI V3.1, using model SD3**
- Image Generation > Stability.ai [Generate (Image Generation-SD3)](https://doc-en.302.ai/api-225945514.md): Image generation through AI V3, using model SD3.
- Image Generation > Stability.ai [Generate(Image Generation-SD3.5-Large)](https://doc-en.302.ai/api-207705144.md): Image generation through AI V3.5, using model SD3.5.
- Image Generation > Stability.ai [Generate(Image Generation-SD3.5-Medium)](https://doc-en.302.ai/api-230033800.md): Image generation through AI V3.5, using model SD3.5.
- Image Generation > Stability.ai [Generate(Image to Image-SD3)](https://doc-en.302.ai/api-225948593.md): Image generation through AI V3, using model SD3.
- Image Generation > Stability.ai [Generate(Image to Image-SD3.5-Large)](https://doc-en.302.ai/api-207705145.md): Image generation through AI V3.5, using model SD3.5.
- Image Generation > Stability.ai [Generate(Image to Image-SD3.5-Medium)](https://doc-en.302.ai/api-230033801.md): Image generation through AI V3.5, using model SD3.5.
- Image Generation > Midjourney [Imagine](https://doc-en.302.ai/api-207705151.md): **Price: 0.05 PTC / call**
- Image Generation > Midjourney [Action](https://doc-en.302.ai/api-207705148.md): **Price: 0.05 PTC / call**
- Image Generation > Midjourney [Blend](https://doc-en.302.ai/api-207705149.md): **Price: 0.05 PTC / call**
- Image Generation > Midjourney [Describe](https://doc-en.302.ai/api-207705150.md): **Price: 0.025 PTC / call**
- Image Generation > Midjourney [Modal](https://doc-en.302.ai/api-207705152.md): **Price: 0.05 PTC / call**
- Image Generation > Midjourney [Fetch](https://doc-en.302.ai/api-207705153.md): **Price: 0 PTC / call**
- Image Generation > Midjourney [Cancel](https://doc-en.302.ai/api-207705154.md): **Price: 0 PTC / call**
- Image Generation > Midjourney-Relax [Imagine](https://doc-en.302.ai/api-235831759.md): **Price: 0.02 PTC / call**
- Image Generation > Midjourney-Relax [Action](https://doc-en.302.ai/api-235831760.md): **Price: 0.02 PTC / call**
- Image Generation > Midjourney-Relax [Blend](https://doc-en.302.ai/api-235831761.md): **Price: 0.02 PTC / call**
- Image Generation > Midjourney-Relax [Describe](https://doc-en.302.ai/api-235831762.md): **Price: 0.01 PTC / call**
- Image Generation > Midjourney-Relax [Modal](https://doc-en.302.ai/api-235831763.md): **Price: 0.02 PTC / call**
- Image Generation > Midjourney-Relax [Fetch](https://doc-en.302.ai/api-235831764.md): **Price: 0 PTC / call**
- Image Generation > Midjourney-Relax [Cancel](https://doc-en.302.ai/api-235831765.md): **Price: 0 PTC / call**
- Image Generation > 302.AI [SDXL](https://doc-en.302.ai/api-207705155.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SDXL-Lora](https://doc-en.302.ai/api-207705158.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SDXL-Lightning](https://doc-en.302.ai/api-207705161.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SDXL-Lightning-V2](https://doc-en.302.ai/api-207705162.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SDXL-Lightning-V3](https://doc-en.302.ai/api-207705163.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SD3](https://doc-en.302.ai/api-207705159.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SD3-V2](https://doc-en.302.ai/api-207705160.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [Aura-Flow](https://doc-en.302.ai/api-207705156.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [Kolors](https://doc-en.302.ai/api-207705157.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
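The Midjourney and Midjourney-Relax endpoints above are asynchronous: Imagine/Blend/Describe submit a task, and Fetch (free, 0 PTC) polls for the result. The sketch below shows only that submit-then-poll shape; the routes `/mj/submit/imagine` and `/mj/task/{id}/fetch`, the response fields, and the base URL are hypothetical placeholders, not confirmed paths.

```python
# Illustrative submit-then-poll loop; all routes and fields are placeholders.
import time
import requests

BASE = "https://api.302.ai"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_302_API_KEY"}  # placeholder key

task = requests.post(f"{BASE}/mj/submit/imagine",  # hypothetical route
                     headers=HEADERS,
                     json={"prompt": "a lighthouse at dawn"}).json()
task_id = task["result"]  # hypothetical response field

while True:
    status = requests.get(f"{BASE}/mj/task/{task_id}/fetch",  # hypothetical route
                          headers=HEADERS).json()
    if status.get("status") in ("SUCCESS", "FAILURE"):
        break
    time.sleep(5)  # Fetch costs 0 PTC, so polling is free
print(status.get("imageUrl"))  # hypothetical response field
```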
- Image Generation > 302.AI [Kolors(Reference Image Generation-KLING)](https://doc-en.302.ai/api-247559341.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [QRCode Generation](https://doc-en.302.ai/api-212293001.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [Lora](https://doc-en.302.ai/api-224673634.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SD-3.5-Large](https://doc-en.302.ai/api-230035306.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SD-3.5-Large-Turbo](https://doc-en.302.ai/api-230035307.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [SD-3.5-Medium](https://doc-en.302.ai/api-230035308.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [Lumina-Image-V2(Image generated)](https://doc-en.302.ai/api-259285282.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > 302.AI [Playground-v25(Image generated)](https://doc-en.302.ai/api-267233314.md): Open-source image generation model Playground.
- Image Generation > 302.AI [Omnigen-V1(Image generated)](https://doc-en.302.ai/api-267354725.md): Open-source image generation model Omnigen.
- Image Generation > Glif [Glif(Claude+SD3)](https://doc-en.302.ai/api-207705168.md): Automatically optimize prompts through Claude, then use SD3 for drawing.
- Image Generation > Glif [Glif (Text-to-Sticker)](https://doc-en.302.ai/api-207705164.md): **Input a description, generate a sticker image**
- Image Generation > Glif [Glif (Text-to-Graffiti)](https://doc-en.302.ai/api-207705165.md): **Input a description, generate a doodle image**
- Image Generation > Glif [Glif (Text-to-Wojak Comic)](https://doc-en.302.ai/api-207705166.md): **Input a description, generate a Wojak comic**
- Image Generation > Glif [Glif (Text-to-Lego)](https://doc-en.302.ai/api-207705167.md): **Input a description, generate a Lego image**
- Image Generation > Flux > Official API [Generate](https://doc-en.302.ai/api-256980713.md): Official documentation: https://api.bfl.ml/scalar#tag/tasks/POST/v1/flux-pro
- Image Generation > Flux > Official API [Finetune](https://doc-en.302.ai/api-256980714.md): Official documentation: https://api.bfl.ml/scalar#tag/tasks/POST/v1/flux-pro
- Image Generation > Flux > Official API [Result](https://doc-en.302.ai/api-256980715.md): **Price: Free**
- Image Generation > Flux [Flux-Ultra(v1.1)](https://doc-en.302.ai/api-231457732.md): Created by Black Forest Labs, founded by former Stability.ai members, for image generation. From: https://blackforestlabs.ai/
- Image Generation > Flux [Flux-Pro](https://doc-en.302.ai/api-207705169.md): Created by Black Forest Labs, founded by former Stability.ai members, for image generation. From: https://blackforestlabs.ai/
- Image Generation > Flux [Flux-Pro(v1.1)](https://doc-en.302.ai/api-224673635.md): Created by Black Forest Labs, founded by former Stability.ai members, for image generation. From: https://blackforestlabs.ai/
- Image Generation > Flux [Flux-Dev](https://doc-en.302.ai/api-207705170.md): Created by Black Forest Labs, founded by former Stability.ai members, for image generation. From: https://blackforestlabs.ai/
- Image Generation > Flux [Flux-Schnell](https://doc-en.302.ai/api-207705172.md): Created by Black Forest Labs, founded by former Stability.ai members, for image generation. From: https://blackforestlabs.ai/
- Image Generation > Flux [Flux-Realism](https://doc-en.302.ai/api-207705171.md): Created by Black Forest Labs, founded by former Stability.ai members, for image generation. From: https://blackforestlabs.ai/
- Image Generation > Flux [Flux-Lora](https://doc-en.302.ai/api-207705173.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > Flux [Flux-General](https://doc-en.302.ai/api-232136668.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > Flux [Flux-General-Inpainting(Advanced Customization)](https://doc-en.302.ai/api-251739170.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Generation > Flux [Flux-Lora-Training(Training Lora)](https://doc-en.302.ai/api-252473800.md): Train your own LoRA for image generation.
- Image Generation > Flux [Flux-Lora-Training(Fetch Results Asynchronously)](https://doc-en.302.ai/api-252474289.md): Fetch LoRA training results asynchronously.
- Image Generation > Ideogram [Generate(Text to Image)](https://doc-en.302.ai/api-212283108.md): Text-to-image generation from Ideogram, with the advantage of generating accurate text and posters.
- Image Generation > Recraft [Recraft-V3(Text to Image)](https://doc-en.302.ai/api-229153719.md): The latest enigmatic model raising eyebrows on the LLM leaderboard is "red_panda". Project from: https://www.recraft.ai/
- Image Generation > Recraft [Create-Style(Customized Styles)](https://doc-en.302.ai/api-229153720.md): The latest enigmatic model raising eyebrows on the LLM leaderboard is "red_panda". Project from: https://www.recraft.ai/
- Image Generation > Recraft [Recraft-20B(Image Generation)](https://doc-en.302.ai/api-245633638.md): The new version of Recraft is 40% cheaper than the original version, but not as effective as the original.
- Image Generation > Luma [Luma-Photon(Image generation)](https://doc-en.302.ai/api-240589098.md): Image generation model from Luma.
- Image Generation > Luma [Luma-Photon-Flash(Fast image generation)](https://doc-en.302.ai/api-240589099.md): Image generation model from Luma.
- Image Generation > Doubao [Drawing(Doubao image generation)](https://doc-en.302.ai/api-246609554.md): Image generation model from Doubao.
- Image Generation > Google [Imagen-3 (Image generated)](https://doc-en.302.ai/api-259641438.md): Imagen-3 model from Google.
- Image Generation > Google [Imagen-3-Fast (Image generated)](https://doc-en.302.ai/api-259641439.md): Imagen-3-Fast model from Google.
- Image Generation > Minimax [image(Text-to-Image Generation)](https://doc-en.302.ai/api-270134792.md): A text-to-image model from Minimax; supported models:
- Image Generation > ZHIPU [image(Text-to-Image Generation)](https://doc-en.302.ai/api-270884894.md): Image generation with the CogView-4 model; supported models:
- Image Generation > Baidu [iRAG(Text-to-Image Generation)](https://doc-en.302.ai/api-279708373.md): This feature uses Baidu's image generation model iRAG, which retrieves real images before generating new ones. This helps reduce hallucinations and improves realism.
- Image Generation > Hidream [Hidream-i1-full(Advanced Version)](https://doc-en.302.ai/api-283664126.md): Image generation model from Zhixiang Future.
- Image Generation > Hidream [Hidream-i1-dev(Intermediate Version)](https://doc-en.302.ai/api-283664440.md): Image generation model from Zhixiang Future.
- Image Generation > Hidream [Hidream-i1-fast(Entry-Level Version)](https://doc-en.302.ai/api-283664480.md): Image generation model from Zhixiang Future.
- Image Processing > 302.AI [Upscale](https://doc-en.302.ai/api-207705177.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Upscale-V2](https://doc-en.302.ai/api-207705178.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Upscale-V3](https://doc-en.302.ai/api-207705179.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Upscale-V4](https://doc-en.302.ai/api-207705180.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Super-Upscale](https://doc-en.302.ai/api-207705181.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Super-Upscale-V2](https://doc-en.302.ai/api-207705182.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Face-upscale](https://doc-en.302.ai/api-207705185.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Colorize](https://doc-en.302.ai/api-207705183.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Colorize-V2](https://doc-en.302.ai/api-207705184.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Removebg](https://doc-en.302.ai/api-207705186.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Removebg-V2](https://doc-en.302.ai/api-207705187.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Inpaint](https://doc-en.302.ai/api-207705189.md): Intelligently fill or replace specified areas of an image based on the content of the mask image.
- Image Processing > 302.AI [Erase](https://doc-en.302.ai/api-207705190.md): Intelligently erase specified areas of an image based on the content of the mask image.
- Image Processing > 302.AI [Face-to-many](https://doc-en.302.ai/api-207705193.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Llava](https://doc-en.302.ai/api-207705194.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Relight](https://doc-en.302.ai/api-207705188.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Relight-background](https://doc-en.302.ai/api-207705192.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Relight-V2](https://doc-en.302.ai/api-253670008.md): Secondary relighting. IC-Light, which stands for "Imposing Consistent Light," is a project dedicated to manipulating image illumination. Project from: https://github.com/lllyasviel/IC-Light
- Image Processing > 302.AI [Face-swap-V2](https://doc-en.302.ai/api-207705191.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Fetch](https://doc-en.302.ai/api-207705176.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [HtmltoPng](https://doc-en.302.ai/api-216464346.md): Convert HTML code into PNG images.
- Image Processing > 302.AI [SvgToPng](https://doc-en.302.ai/api-216464347.md): Convert SVG code into PNG images.
- Image Processing > 302.AI [image-translate](https://doc-en.302.ai/api-219451487.md): Translate the text in an image into the target language and generate a new image with the translated text.
- Image Processing > 302.AI [image-translate-query](https://doc-en.302.ai/api-219451488.md): Query the result of an image translation task.
- Image Processing > 302.AI [image-translate-redo](https://doc-en.302.ai/api-219451489.md): Redo an image translation task.
- Image Processing > 302.AI [Flux-selfie](https://doc-en.302.ai/api-222605043.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Image Processing > 302.AI [Trellis(Image to 3D model)](https://doc-en.302.ai/api-247568145.md): Image-to-3D model generation from an open-source project.
- Image Processing > 302.AI [Pose-Transfer(Human Pose Transformation)](https://doc-en.302.ai/api-252488247.md): Human pose transformation from an open-source project.
- Image Processing > 302.AI [Pose-Transfer(Human Pose Transformation Result)](https://doc-en.302.ai/api-252488216.md): **Price: 0 PTC / call**
- Image Processing > 302.AI [Virtual-Tryon](https://doc-en.302.ai/api-252488177.md): Virtual try-on from an open-source project.
- Image Processing > 302.AI [Virtual-Tryon(Fetch Result)](https://doc-en.302.ai/api-252488093.md): **Price: 0 PTC / call**
- Image Processing > 302.AI [Denoise(AI Denoising)](https://doc-en.302.ai/api-268505501.md): AI denoising: remove color noise from photos.
- Image Processing > 302.AI [Deblur(AI Deblurring)](https://doc-en.302.ai/api-268510191.md): AI deblurring: remove motion blur from photos.
- Image Processing > 302.AI-ComfyUI [Create Outfit Change Task](https://doc-en.302.ai/api-270667213.md): Outfit-changing effect achieved through a complex ComfyUI workflow. Commercial-grade quality, suitable for model outfit changes. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Create Outfit Change Task (Upload Mask)](https://doc-en.302.ai/api-282245359.md): Outfit-changing effect achieved through a complex ComfyUI workflow, providing commercial-grade results suitable for model outfit changes. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Query Outfit Change Task Status](https://doc-en.302.ai/api-270354432.md): Outfit-changing effect achieved through a complex ComfyUI workflow. Commercial-grade quality, suitable for model outfit changes. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Create Face Swap Task](https://doc-en.302.ai/api-270356436.md): Face swap effect achieved through a complex ComfyUI workflow. Commercial-grade quality, suitable for model face swapping. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Query Face Swap Task Status](https://doc-en.302.ai/api-270356569.md): Face swap effect achieved through a complex ComfyUI workflow, with commercial-grade quality, suitable for model face swapping. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Create a Task to Replace Any Item](https://doc-en.302.ai/api-270356644.md): Object replacement effect achieved through a complex ComfyUI workflow, delivering commercial-grade results suitable for still-life advertisements. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Create Object Replacement Task (Upload Mask)](https://doc-en.302.ai/api-282245422.md): Object replacement effect achieved through a complex ComfyUI workflow, providing commercial-grade results suitable for still-life advertisements and similar scenarios. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Check the Status of Any Object Replacement Task](https://doc-en.302.ai/api-270356665.md): Object replacement effect achieved through a complex ComfyUI workflow, providing commercial-grade results suitable for still-life advertisements and similar scenarios. Runtime: approximately 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Create a Task to Transform Cartoon Characters into Real People](https://doc-en.302.ai/api-272720420.md): A commercial-grade effect achieved through a complex ComfyUI workflow that transforms cartoons into realistic human images; it can convert clothing design drawings into images with real models. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Query the status of the task to turn a manga character into a real person](https://doc-en.302.ai/api-272722166.md): Manga-to-real-person effect achieved through a complex ComfyUI workflow, delivering commercial-grade results; it can convert clothing design sketches into real model images. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Create Style Transfer Task](https://doc-en.302.ai/api-272722220.md): Image style transfer achieved through a complex ComfyUI workflow, delivering commercial-grade results. Runtime: 3-5 minutes.
- Image Processing > 302.AI-ComfyUI [Query the status of the style transfer task](https://doc-en.302.ai/api-272722249.md): Image style transfer achieved through a complex ComfyUI workflow, delivering commercial-grade results. Runtime: 3-5 minutes.
- Image Processing > Vectorizer [Vectorize](https://doc-en.302.ai/api-207705195.md): Convert ordinary images into infinitely scalable vector graphics using AI. The 302.AI API here serves as a demonstration.
- Image Processing > Stability.ai [Fast Upscale](https://doc-en.302.ai/api-222624673.md): Our fast upscale service leverages predictive and generative AI to enhance image resolution by up to 4x. This lightweight service, with a processing time of approximately 1 second, is ideal for improving the quality of compressed images, making them suitable for social media posts and other applications.
- Image Processing > Stability.ai [Creative Upscale](https://doc-en.302.ai/api-207705196.md): This API can **enlarge images from 64x64 up to 1 million pixels to 4K resolution** with one click. It enlarges images by approximately 20-40 times while maintaining the original image quality, and can sometimes even enhance it. It is best suited for images with severe quality loss and is not recommended for photos above 1 million pixels, as a lot of reimagining is involved (controlled by the creativity ratio).
- Image Processing > Stability.ai [Conservative Upscale](https://doc-en.302.ai/api-207705197.md): **Enlarge images from 64x64 pixels up to 1 million pixels, to as high as 4K resolution**. More broadly, it can scale images by approximately 20-40 times while preserving details in all aspects. Conservative upscaling minimizes changes to the image and should not be used to reimagine it.
- Image Processing > Stability.ai [Fetch](https://doc-en.302.ai/api-207705198.md): **Fetch task**
- Image Processing > Stability.ai [Erase](https://doc-en.302.ai/api-207705205.md): Using image masking technology, unnecessary objects can be removed, such as blemishes in portraits or clutter on a desk.
- Image Processing > Stability.ai [Inpaint](https://doc-en.302.ai/api-207705200.md): Intelligently fill or replace the specified area of the image based on the content of the mask image.
- Image Processing > Stability.ai [Outpaint](https://doc-en.302.ai/api-207705201.md): Fill in additional content in various directions within the image. Compared to other automatic or manual methods, the Outpaint service minimizes flaws, making the signs of editing on the original image less noticeable.
- Image Processing > Stability.ai [Search-and-replace](https://doc-en.302.ai/api-207705202.md): Search-and-replace is a special method of image editing that does not require masking. Users can identify the target to be replaced using simple language through search prompts. The service will automatically recognize and replace objects in the image with the target specified in the search prompt.
- Image Processing > Stability.ai [Search-and-recolor](https://doc-en.302.ai/api-207705203.md): Search-and-recolor provides the ability to change the color of specific objects in an image using prompts. This service is a specific version of image retouching that does not require masking. The search-and-recolor service will automatically segment the objects and recolor them with the color specified in the prompt.
- Image Processing > Stability.ai [Remove-background](https://doc-en.302.ai/api-207705204.md): The Remove Background service accurately separates the foreground in an image and removes the background.
- Image Processing > Stability.ai [Sketch](https://doc-en.302.ai/api-207705206.md): The Sketch service offers a perfect solution for design projects that require frequent brainstorming and iteration. It can transform rough hand-drawn sketches into refined outputs, allowing for precise control. For non-sketch images, the service can also utilize the outlines and edges within the image to perform detailed visual adjustments.
- Image Processing > Stability.ai [Structure](https://doc-en.302.ai/api-207705207.md): The Structure service is capable of generating new images while maintaining the original image structure, making it particularly important in advanced content creation areas such as recreating specific scenes or rendering characters based on models.
- Image Processing > Stability.ai [Style](https://doc-en.302.ai/api-207705208.md): Style elements are extracted from the input image (control image) and used to guide the creation of the output image based on prompts. The result is a new image that shares the same style as the control image.
- Image Processing > Stability.ai [Replace-Background](https://doc-en.302.ai/api-238677548.md): Replace the image background and readjust the lighting.
- Image Processing > Stability.ai [Stable-Fast-3D](https://doc-en.302.ai/api-216465572.md): Convert images to 3D models quickly.
- Image Processing > Stability.ai [Stable-Point-3D(Image to 3D Model Conversion - New Version)](https://doc-en.302.ai/api-253057122.md): Convert an image to a 3D model using the point-cloud method. Introduction: https://stability.ai/news/stable-point-aware-3d
- Image Processing > Glif [Glif(Portrait Photo Stylization)](https://doc-en.302.ai/api-207705209.md): Upload a portrait photo and select a style filter for generation.
- Image Processing > Glif [Glif(Photo-to-Sculpture)](https://doc-en.302.ai/api-207705211.md): Upload a photo and convert it into a sculpture.
- Image Processing > Glif [Glif(Photo Pixelation)](https://doc-en.302.ai/api-207705212.md): Upload a photo and convert it into pixel art.
- Image Processing > Glif [Glif(Logo Materialization)](https://doc-en.302.ai/api-207705210.md): Upload a logo image, select the desired material, and transform the logo. Example material: diamond.
- Image Processing > Glif [Glif(Image-to-GIF)](https://doc-en.302.ai/api-207705213.md): Upload a photo, animate the image using AI, and generate a GIF.
- Image Processing > Clipdrop [Cleanup](https://doc-en.302.ai/api-207705214.md): Clipdrop is a company that provides AI image editing services, allowing for quick and easy modifications to images. We are fully aligned with their official API; you only need to **replace the API Base URL to use it** (see the sketch below).
- Image Processing > Clipdrop [Upscale](https://doc-en.302.ai/api-207705215.md): Clipdrop is a company that provides AI image editing services, allowing for quick and easy modifications to images. We are fully aligned with their official API; you only need to replace the API Base URL to use it.
- Image Processing > Clipdrop [Remove-background](https://doc-en.302.ai/api-207705216.md): Clipdrop is a company that provides AI image editing services, allowing for quick and easy modifications to images. We are fully aligned with their official API; you only need to replace the API Base URL to use it.
- Image Processing > Clipdrop [Uncrop](https://doc-en.302.ai/api-207705217.md): Clipdrop is a company that provides AI image editing services, allowing for quick and easy modifications to images. We are fully aligned with their official API; you only need to replace the API Base URL to use it.
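The Clipdrop entries above describe a drop-in proxy: keep Clipdrop's request shape and swap only the base URL. A minimal sketch, assuming a proxy base of `https://api.302.ai/clipdrop` (the proxy base is an assumption; the path and multipart field follow Clipdrop's official remove-background API):

```python
# Hedged sketch: same Clipdrop request, different base URL.
import requests

PROXY_BASE = "https://api.302.ai/clipdrop"  # assumed 302.AI proxy base URL

with open("photo.jpg", "rb") as f:
    resp = requests.post(
        f"{PROXY_BASE}/remove-background/v1",  # same path as the official API
        headers={"x-api-key": "YOUR_302_API_KEY"},  # placeholder key
        files={"image_file": ("photo.jpg", f, "image/jpeg")},
    )
resp.raise_for_status()
with open("no-background.png", "wb") as out:
    out.write(resp.content)  # the API returns the processed image bytes
```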
- Image Processing > Recraft [Vectorize Image](https://doc-en.302.ai/api-231458786.md): The latest mysterious model raising eyebrows on the LLM leaderboard is "red_panda". From: https://www.recraft.ai/
- Image Processing > Recraft [Remove Background](https://doc-en.302.ai/api-231458787.md): The latest mysterious model raising eyebrows on the LLM leaderboard is "red_panda". From: https://www.recraft.ai/
- Image Processing > Recraft [Clarity Upscale](https://doc-en.302.ai/api-231458788.md): The latest mysterious model raising eyebrows on the LLM leaderboard is "red_panda". From: https://www.recraft.ai/
- Image Processing > Recraft [Generative Upscale](https://doc-en.302.ai/api-231458789.md): The latest mysterious model raising eyebrows on the LLM leaderboard is "red_panda". From: https://www.recraft.ai/
- Image Processing > BRIA [Remove Background](https://doc-en.302.ai/api-235057906.md): Remove the background of an image. From: https://bria.ai/
- Image Processing > BRIA [Blur Background](https://doc-en.302.ai/api-235057907.md): Blur the image background. From: https://bria.ai/
- Image Processing > BRIA [Generate Background](https://doc-en.302.ai/api-235057904.md): Select the image subject and regenerate the background. From: https://bria.ai/
- Image Processing > BRIA [Erase Foreground](https://doc-en.302.ai/api-235057894.md): Erase the image foreground, leaving only the background. From: https://bria.ai/
- Image Processing > BRIA [Eraser](https://doc-en.302.ai/api-235057905.md): Erase the selected part of the image. From: https://bria.ai/
- Image Processing > BRIA [Expand Image](https://doc-en.302.ai/api-235057903.md): Extend the image boundaries, using AI to imagine the rest. From: https://bria.ai/
- Image Processing > BRIA [Increase Resolution](https://doc-en.302.ai/api-235057902.md): Increase the image resolution. From: https://bria.ai/
- Image Processing > BRIA [Crop](https://doc-en.302.ai/api-235057895.md): Automatically crop the subject of the image. From: https://bria.ai/
- Image Processing > BRIA [Cutout](https://doc-en.302.ai/api-235057897.md): Remove the background of the product image and crop it out. From: https://bria.ai/
- Image Processing > BRIA [Packshot](https://doc-en.302.ai/api-235057898.md): Convert the product image into a close-up. From: https://bria.ai/
- Image Processing > BRIA [Shadow](https://doc-en.302.ai/api-235057899.md): Generate a shadow for the product image. From: https://bria.ai/
- Image Processing > BRIA [Scene](https://doc-en.302.ai/api-235057900.md): Generate a scene for the product image. From: https://bria.ai/
- Image Processing > BRIA [Caption](https://doc-en.302.ai/api-235057901.md): Obtain an image description. From: https://bria.ai/
- Image Processing > BRIA [Register](https://doc-en.302.ai/api-235057896.md): Upload an image for further editing. From: https://bria.ai/
- Image Processing > BRIA [Mask](https://doc-en.302.ai/api-236117381.md): Split the image into different sections and generate a compressed package. From: https://bria.ai/
- Image Processing > BRIA [Presenter info](https://doc-en.302.ai/api-236117275.md): Analyze facial information. From: https://bria.ai/
- Image Processing > BRIA [Modify Presenter](https://doc-en.302.ai/api-236117179.md): Edit facial details. From: https://bria.ai/
- Image Processing > BRIA [Delayer Image](https://doc-en.302.ai/api-236117090.md): Convert an image to a multi-layer PSD. From: https://bria.ai/
- Image Processing > Flux [Flux-V1.1-Ultra-Redux(Image-to-image generation-Ultra)](https://doc-en.302.ai/api-236383171.md): Given an input image, FLUX.1 Redux can reproduce images with slight variations, allowing for the refinement of the given image.
- Image Processing > Flux [Flux-V1.1-Pro-Redux(Image-to-image generation-Pro)](https://doc-en.302.ai/api-236383172.md): Given an input image, FLUX.1 Redux can reproduce images with slight variations, allowing for the refinement of the given image.
- Image Processing > Flux [Flux-Dev-Redux(Image-to-image generation-Dev)](https://doc-en.302.ai/api-236383176.md): Given an input image, FLUX.1 Redux can reproduce images with slight variations, allowing for the refinement of the given image.
- Image Processing > Flux [Flux-Schnell-Redux(Image-to-image generation-Schnell)](https://doc-en.302.ai/api-236383177.md): Given an input image, FLUX.1 Redux can reproduce images with slight variations, allowing for the refinement of the given image.
- Image Processing > Flux [Flux-V1-Pro-Canny(Object consistency)](https://doc-en.302.ai/api-236383173.md): Structural adjustment uses Canny edge detection to maintain precise control during image transformation. By preserving the structure of the original image through edge maps, users can perform text-guided edits while keeping the core composition intact. This is particularly effective for retexturing images.
- Image Processing > Flux [Flux-V1-Pro-Depth(Depth consistency)](https://doc-en.302.ai/api-236383174.md): Structural adjustment uses depth detection to maintain precise control during image transformation. By preserving the structure of the original image through depth maps, users can perform text-guided edits while keeping the core composition intact. This is particularly effective for retexturing images.
- Image Processing > Flux [Flux-V1-Pro-Fill(Partial repainting)](https://doc-en.302.ai/api-236383175.md): Partial repainting: fill or redraw masked areas of an image with text-guided edits while keeping the core composition intact.
- Image Processing > Hyper3D [Hyper3d-Rodin(Generate 3D models)](https://doc-en.302.ai/api-240592861.md): Image-to-3D model generation from Hyper3D, capable of creating ultra-detailed 3D models.
- Image Processing > Hyper3D [Hyper3d-Rodin(Obtain task results)](https://doc-en.302.ai/api-242657673.md): Obtain the results of a Hyper3D Rodin generation task.
- Image Processing > Tripo3D [Task(Task Submission)](https://doc-en.302.ai/api-243969346.md): For specific usage, please refer to the official documentation:
- Image Processing > Tripo3D [Upload(Image Upload)](https://doc-en.302.ai/api-243969345.md): **Price: 0 PTC / call**
- Image Processing > Tripo3D [Fetch](https://doc-en.302.ai/api-243969347.md): **Price: 0 PTC / call**
- Image Processing > FASHN [Fashn-Tryon(Virtual Try-On)](https://doc-en.302.ai/api-244116847.md): Virtual try-on from FASHN.
- Image Processing > Ideogram [Remix(Image to Image)](https://doc-en.302.ai/api-245824806.md): Image-to-image generation from Ideogram, with its key advantage being the ability to generate accurate text and posters.
- Image Processing > Ideogram [Upscale(Image Upscaling)](https://doc-en.302.ai/api-245824807.md): Double the size of an Ideogram image.
- Image Processing > Ideogram [Describe(Image Description)](https://doc-en.302.ai/api-245824808.md): Image description from Ideogram.
- Image Processing > Ideogram [Edit(Image Edition)](https://doc-en.302.ai/api-245824809.md): Image-to-image generation from Ideogram, with its key advantage being the ability to generate accurate text and posters.
- Image Processing > Doubao [SeedEdit(Image Command Editing)](https://doc-en.302.ai/api-246593603.md): Image editing model from Doubao.
- Image Processing > Doubao [Character(Character Feature Preservation)](https://doc-en.302.ai/api-246593604.md): Image generation model from SeedEdit.
- Image Processing > Kling [Virtual-Try-On](https://doc-en.302.ai/api-251317721.md): Virtual try-on by Kling.
- Image Processing > Kling [Fetch(Get Task Result)](https://doc-en.302.ai/api-251317722.md): Fetch task results for Kling's virtual try-on.
- Video Generation > Unified Interface [Create Video Generation Task](https://doc-en.302.ai/api-275315909.md): This interface unifies the video generation interfaces provided by different vendors, extracting commonly used parameter and response fields and standardizing field naming and data formats to improve efficiency (see the sketch below).
- Video Generation > Unified Interface [Retrieve Video Task Information](https://doc-en.302.ai/api-275315946.md): This interface retrieves the generated video, stores it in the 302 file system, and returns a public link as the response. For additional information, query the data field.
- Video Generation > 302.AI [Image-to-video](https://doc-en.302.ai/api-207705233.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Video Generation > 302.AI [Live-portrait](https://doc-en.302.ai/api-207705234.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Video Generation > 302.AI [Video-To-Video](https://doc-en.302.ai/api-230622642.md): Transform a video into another style using prompts.
- Video Generation > 302.AI [Fetch](https://doc-en.302.ai/api-207705232.md): 302.AI's API is derived from models we've deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
- Video Generation > 302.AI [Latentsync (Open source digital person)](https://doc-en.302.ai/api-258261826.md): ByteDance's open-source digital human, which lip-syncs video with voice.
- Video Generation > 302.AI [Latentsync (get task results)](https://doc-en.302.ai/api-258261827.md): ByteDance's open-source digital human, which lip-syncs video with voice.
- Video Generation > 302.AI [Video-Face-Swap(Video face swap)](https://doc-en.302.ai/api-269307143.md): Our self-deployed video face-swapping API.
- Video Generation > 302.AI [Video-Face-Swap(Get task result)](https://doc-en.302.ai/api-269307513.md): Our self-deployed video face-swapping API.
- Video Generation > Stable Diffusion [Image-to-video](https://doc-en.302.ai/api-207705235.md): Generate a short video based on an initial image using Stable Video Diffusion.
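The Unified Interface entries above standardize video generation into a create-task / retrieve-result pair. The sketch below shows only that two-step shape; the routes, field names, and status values are hypothetical placeholders, not confirmed parameters.

```python
# Illustrative create-then-retrieve flow; all routes and fields are placeholders.
import time
import requests

BASE = "https://api.302.ai"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_302_API_KEY"}  # placeholder key

create = requests.post(f"{BASE}/302/video/create",  # hypothetical route
                       headers=HEADERS,
                       json={"model": "kling",  # hypothetical unified model field
                             "prompt": "a paper boat drifting down a rainy street"}).json()

while True:
    info = requests.get(f"{BASE}/302/video/{create['id']}",  # hypothetical route
                        headers=HEADERS).json()
    if info.get("status") in ("completed", "failed"):  # hypothetical status values
        break
    time.sleep(10)
print(info.get("video_url"))  # public link stored in the 302 file system
```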
- Video Generation > Stable Diffusion [Fetch Image-to-video](https://doc-en.302.ai/api-207705236.md): Fetch the result of an image-to-video task generated with Stable Video Diffusion. - Video Generation > Luma AI [Submit(Text / Image to Video)](https://doc-en.302.ai/api-207705237.md): Text/Image-to-Video generation from Luma AI. Generate a 5-second video based on text or images. - Video Generation > Luma AI [Extend(Video)](https://doc-en.302.ai/api-207705238.md): Extend(Video) from Luma AI. This API is used to extend previously generated videos, adding 5s of playback time. - Video Generation > Luma AI [Fetch](https://doc-en.302.ai/api-207705239.md): Luma AI: https://lumalabs.ai/dream-machine - Video Generation > Runway [Submit(Text to Video)](https://doc-en.302.ai/api-207705240.md): **Text-to-video from Runway Gen-3, creating a 10-second video based on the text**. - Video Generation > Runway [Submit(Image to Video)](https://doc-en.302.ai/api-207705241.md): **Image-to-video from Runway Gen-3, creating a 10-second video based on the image**. - Video Generation > Runway [Submit(Image to Video Rapid)](https://doc-en.302.ai/api-207705242.md): **Runway Gen-3 Turbo**, updated on August 15th, allows for **rapid video generation from images**, but currently does not support text-to-video generation. - Video Generation > Runway [Submit(Image-to-Video Generation with Gen4)](https://doc-en.302.ai/api-279710322.md): The updated Runway Gen4 (as of April 1) allows fast video generation from images. Text-to-video is not supported at the moment. - Video Generation > Runway [Submit(Image to Video Generation Gen4-Turbo)](https://doc-en.302.ai/api-282193420.md): Runway Gen4 Turbo, updated on April 1st, allows quick image-to-video generation; it currently does not support text-to-video generation. - Video Generation > Runway [Submit(Video to Video)](https://doc-en.302.ai/api-223500418.md): Video-to-video from Runway Gen-3: transforms the original footage into various styles based on the video content and prompts. - Video Generation > Runway [Submit(Video to Video Rapid)](https://doc-en.302.ai/api-224541525.md): **Runway Gen-3 Turbo**, updated on August 15th, allows for **rapid video-to-video generation**, but currently does not support text-to-video generation. - Video Generation > Runway [Submit(Act-one motion capture)](https://doc-en.302.ai/api-230623447.md): Runway introduces Act-One, an advanced generative motion capture tool that rapidly transforms videos into stylized images while preserving character movements and key visual elements. This innovative technology allows users to seamlessly switch between multiple artistic styles, adapting the input video to match the aesthetic of a chosen reference image. The processing time for this transformation is approximately 10-20 minutes. - Video Generation > Runway [Submit(Video extension) ](https://doc-en.302.ai/api-242641700.md): Expand a landscape video to portrait or expand a portrait video to landscape - Video Generation > Runway [Fetch](https://doc-en.302.ai/api-207705243.md): **Fetch task results for Runway Gen-3 video generation (10-second videos from text/image)**. - Video Generation > Kling [Txt2Video(Text to Video 1.0 Rapid-5s)](https://doc-en.302.ai/api-207705249.md): Text-to-video generated by Kling, a quick 5-second version. - Video Generation > Kling [Txt2Video_HQ(Text to Video 1.5 HQ-5s)](https://doc-en.302.ai/api-207705248.md): Text-to-video generated by Kling, **a High-quality 5-second version**.
- Video Generation > Kling [Txt2Video_HQ(Text to Video 1.5 HQ-10s)](https://doc-en.302.ai/api-207705245.md): Text-to-video generated by Kling, **a High-quality 10-second version**. - Video Generation > Kling [Image2Video(Image to Video 1.0 Rapid-5s)](https://doc-en.302.ai/api-207705247.md): **Image-to-video** generated by Kling, **a quick 5-second version**. - Video Generation > Kling [Image2Video(Image to Video 1.0 Rapid-10s)](https://doc-en.302.ai/api-222607629.md): **Image-to-video** generated by Kling, **a quick 10-second version**. - Video Generation > Kling [Image2Video(Image to Video 1.5 Rapid-5s) ](https://doc-en.302.ai/api-238109409.md): **Image-to-video** generated by Kling, **a quick 5-second version**. - Video Generation > Kling [Image2Video(Image to Video 1.5 Rapid-10s) ](https://doc-en.302.ai/api-238109410.md): **Image-to-video** generated by Kling, **a quick 10-second version**. - Video Generation > Kling [Image2Video_HQ(Image to Video 1.5 HQ-5s)](https://doc-en.302.ai/api-207705246.md): **Image-to-video from Kling, in the 5-second HD format, is automatically upgraded to Kling-1.5.** - Video Generation > Kling [Image2Video_HQ(Image to Video 1.5 HQ-10s)](https://doc-en.302.ai/api-222607628.md): Image-to-video from Kling, in the 10-second HD format, is automatically upgraded to Kling-1.5. - Video Generation > Kling [Txt2Video(Text to Video 1.6 Standard-5s) ](https://doc-en.302.ai/api-246827254.md): Text-to-video generated by Kling, a standard 5-second version. - Video Generation > Kling [Txt2Video(Text to Video 1.6 Standard-10s) ](https://doc-en.302.ai/api-246824267.md): Text-to-video generated by Kling, a standard 10-second version. - Video Generation > Kling [Txt2Video(Text to Video 1.6 HQ-5s) ](https://doc-en.302.ai/api-246824453.md): Text-to-video generated by Kling, a High-quality 5-second version. - Video Generation > Kling [Txt2Video(Text to Video 1.6 HQ-10s) ](https://doc-en.302.ai/api-246824516.md): Text-to-video generated by Kling, a High-quality 10-second version. - Video Generation > Kling [Image2Video(Image to Video 1.6 Standard-5s)](https://doc-en.302.ai/api-246824589.md): Image-to-video generated by Kling, a standard 5-second version. - Video Generation > Kling [Image2Video(Image to Video 1.6 Standard-10s)](https://doc-en.302.ai/api-246824622.md): Image-to-video generated by Kling, a standard 10-second version. - Video Generation > Kling [Image2Video(Image to Video 1.6 HQ-5s)](https://doc-en.302.ai/api-246824653.md): Image-to-video generated by Kling, a high-quality 5-second version. - Video Generation > Kling [Image2Video(Image to Video 1.6 HQ-10s)](https://doc-en.302.ai/api-246824698.md): Image-to-video generated by Kling, a high-quality 10-second version. - Video Generation > Kling [Txt2Video(Text-to-Video 2.0 – HD – 5s) ](https://doc-en.302.ai/api-284770440.md): This is the 5-second HD version of Kling’s Text-to-Video 2.0. - Video Generation > Kling [Image2Video(Image-to-Video 2.0 – HD – 5s)](https://doc-en.302.ai/api-284770464.md): This is the 5-second HD version of Kling’s Image-to-Video 2.0. - Video Generation > Kling [Image2Video(Image-to-Video 2.0 – HD – 10s)](https://doc-en.302.ai/api-284770552.md): This is the 10-second HD version of Kling’s Image-to-Video 2.0.
- Video Generation > Kling [Image2Video (Multiple pictures for reference)](https://doc-en.302.ai/api-256988405.md): Multi-picture reference from Kling; up to 4 pictures can be uploaded - Video Generation > Kling [Extend_Video](https://doc-en.302.ai/api-223198280.md): The extended video from Kling allows for 5-second extensions, but the HD version does not support this feature. - Video Generation > Kling [Fetch](https://doc-en.302.ai/api-207705244.md): **Image-to-video and Text-to-video generated by Kling.** - Video Generation > CogVideoX [Generations (text-generated video)](https://doc-en.302.ai/api-261643423.md): Text-generated video model from Zhipu - Video Generation > CogVideoX [Generations(Image-generated video)](https://doc-en.302.ai/api-261643424.md): Image-generated video model from Zhipu - Video Generation > CogVideoX [Results (get task results)](https://doc-en.302.ai/api-261643425.md): Get task results for Zhipu's video generation models - Video Generation > Minimax [Video(Text-to-Video)](https://doc-en.302.ai/api-267624009.md): Video generation model from Minimax, supported models: - Video Generation > Minimax [Video(Image-to-video)](https://doc-en.302.ai/api-222607630.md): Image-to-video from Minimax - Video Generation > Minimax [Video(Based on Subject Reference)](https://doc-en.302.ai/api-256735121.md): **Specification** - Video Generation > Minimax [Video(Camera movement control)](https://doc-en.302.ai/api-212308281.md): When the parameter "model" is set to "T2V-01-Director" or "I2V-01-Director," there is a more accurate response to camera movement control in the prompt. - Video Generation > Minimax [Query(Result)](https://doc-en.302.ai/api-212308282.md): Text-to-video from Minimax - Video Generation > Minimax [Files(Video Download)](https://doc-en.302.ai/api-212308283.md): Text-to-video from Minimax - Video Generation > Pika [1.5 pikaffects(Image-to-Video Generation)](https://doc-en.302.ai/api-285339580.md): Powered by Pika's video generation model - Video Generation > Pika [Turbo Generate(Text-to-Video Generation)](https://doc-en.302.ai/api-285339686.md): Powered by Pika's video generation model. - Video Generation > Pika [Turbo Generate(Text-to-Video Generation)](https://doc-en.302.ai/api-285339795.md): Powered by Pika's video generation model - Video Generation > Pika [2.1 Generate(Text-to-Video Generation)](https://doc-en.302.ai/api-285339947.md): Powered by Pika's video generation model - Video Generation > Pika [2.1 Generate(Image-to-Video Generation) ](https://doc-en.302.ai/api-285340084.md): Powered by Pika's video generation model - Video Generation > Pika [2.2 Generate(Text-to-Video Generation)](https://doc-en.302.ai/api-285340208.md): Powered by Pika's video generation model - Video Generation > Pika [2.2 Generate(Image-to-Video Generation) ](https://doc-en.302.ai/api-285340561.md): Powered by Pika's video generation model. - Video Generation > Pika [2.2 Pikascenes(Generate scene videos) ](https://doc-en.302.ai/api-285341167.md): Powered by Pika's video generation model - Video Generation > Pika [Fetch(Result)](https://doc-en.302.ai/api-225990814.md): Get the video generation task results - Video Generation > PixVerse [Generate](https://doc-en.302.ai/api-236444989.md): A video generation model from PixVerse; creates videos from images, v4 supported - Video Generation > PixVerse [Fetch](https://doc-en.302.ai/api-236444990.md): Retrieve the results of the video generation task.
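For the Minimax camera-movement entry above, the documented trigger is setting `model` to `T2V-01-Director` or `I2V-01-Director` and writing camera directions into the prompt. A sketch follows; the path is an assumption, and the bracketed camera-direction syntax is one common convention, not confirmed by this page.

```python
import requests

API_KEY = "sk-..."
# Hypothetical proxied path; the Director model names come from the entry above.
url = "https://api.302.ai/minimax/v1/video_generation"

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "T2V-01-Director",  # Director models respond to camera directions in the prompt
        "prompt": "A lighthouse at dusk. [Pan right], then [Zoom in] on the lantern room.",
    },
    timeout=60,
)
print(resp.json())  # returns a task to poll via Query(Result) and download via Files
```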
- Video Generation > Genmo [Mochi-v1 (Get task results)](https://doc-en.302.ai/api-256081480.md): Get task results for Genmo's Mochi-1 video generation - Video Generation > Genmo [Mochi-v1(Text to Video)](https://doc-en.302.ai/api-226300102.md): Genmo has set a new benchmark in video generation by open-sourcing Mochi1, their latest model. Featuring an innovative Asymmetric Diffusion Transformer (AsymmDiT) architecture and up to 10 billion parameters, Mochi1 stands as the largest publicly released video generation model to date. - Video Generation > Hedra [Audio(Upload)](https://doc-en.302.ai/api-226300103.md): Audio upload interface from Hedra. - Video Generation > Hedra [Portrait(Upload) ](https://doc-en.302.ai/api-226300104.md): Image upload interface from Hedra. - Video Generation > Hedra [Characters(lip-synthesis)](https://doc-en.302.ai/api-226300106.md): A lip-synthesis interface from Hedra. - Video Generation > Hedra [Fetch(Result)](https://doc-en.302.ai/api-226300105.md): Get video generation task results - Video Generation > Haiper [Haiper(Text to Video)](https://doc-en.302.ai/api-232137062.md): A rapidly growing video generation company based in London - Video Generation > Haiper [Haiper(Image to Video)](https://doc-en.302.ai/api-232137063.md): A rapidly growing video generation company based in London - Video Generation > Haiper [Haiper(Text to Video V2.5)](https://doc-en.302.ai/api-263666434.md): An emerging video generation company from London - Video Generation > Haiper [Haiper(Image to Video V2.5)](https://doc-en.302.ai/api-263666435.md): An emerging video generation company from London - Video Generation > Haiper [Haiper(Fetch Task Result)](https://doc-en.302.ai/api-252700108.md): Fetch task results for Haiper video generation - Video Generation > Sync. [Generate](https://doc-en.302.ai/api-234772368.md): Input a video and an audio track to match the lip shape. - Video Generation > Sync. [Fetch](https://doc-en.302.ai/api-234772369.md): Obtain the results of the video generation task - Video Generation > Lightricks [Ltx-Video](https://doc-en.302.ai/api-237132298.md): Open-source video model, characterized by extremely fast generation speed. - Video Generation > Lightricks [Ltx-Video-I2V](https://doc-en.302.ai/api-237132299.md): Open-source video model, characterized by extremely fast generation speed. - Video Generation > Lightricks [Ltx-Video-v095(Text-to-video generation)](https://doc-en.302.ai/api-268482790.md): Open-source video model characterized by fast generation speed. - Video Generation > Lightricks [Ltx-Video-v095-I2V(Image-to-Video Generation)](https://doc-en.302.ai/api-268489487.md): Open-source video model characterized by fast generation speed. - Video Generation > Hunyuan [Hunyuan(Text-to-Video)](https://doc-en.302.ai/api-241433337.md): Video generation model from Tencent Open Source - Video Generation > Hunyuan [Hunyuan(Obtain Task Results) ](https://doc-en.302.ai/api-241433338.md): Video generation model from Tencent Open Source - Video Generation > Vidu [Vidu(Text-to-Video) ](https://doc-en.302.ai/api-244703992.md): A rising video generation company from China, whose latest 1.5 model features an exclusive subject generation capability.
- Video Generation > Vidu [Vidu(Image to Video) ](https://doc-en.302.ai/api-244703993.md): A new emerging video generation company in China has introduced its latest 1.5 model, featuring an exclusive subject generation capability - Video Generation > Vidu [Vidu(Generate video from the first and last frames) ](https://doc-en.302.ai/api-244703994.md): A new emerging video generation company in China has introduced its latest 1.5 model, featuring an exclusive subject generation capability - Video Generation > Vidu [Vidu(Reference-based video generation) ](https://doc-en.302.ai/api-244703995.md): A rising video generation company from China, whose latest 1.5 model features an exclusive subject generation capability. - Video Generation > Vidu [Vidu(Generate scene video) ](https://doc-en.302.ai/api-244703996.md): A rising video generation company from China. - Video Generation > Vidu [Vidu(Smart Ultra HD) ](https://doc-en.302.ai/api-244703997.md): A rising video generation company from China, whose latest 1.5 model features an exclusive subject generation capability. - Video Generation > Vidu [Fetch(Retrieve Task Results)](https://doc-en.302.ai/api-244703998.md): Retrieve Video Generation Task Results - Video Generation > Tongyi Wanxiang [T2V(Text-to-Video)](https://doc-en.302.ai/api-254095110.md): Support model: - Video Generation > Tongyi Wanxiang [Tasks(Fetch Task Result) ](https://doc-en.302.ai/api-254095111.md): **Price:0 PTC/call** - Video Generation > Tongyi Wanxiang [wan-t2v(Text-to-video open source version)](https://doc-en.302.ai/api-265218437.md): The latest open-source video generation model from Alibaba - Video Generation > Tongyi Wanxiang [wan-t2v(Fetch Task Result) ](https://doc-en.302.ai/api-265218438.md): The latest open-source video generation model from Alibaba - Video Generation > Tongyi Wanxiang [wan-i2v(Image-to-video open source version)](https://doc-en.302.ai/api-265218439.md): The latest open-source video generation model from Alibaba - Video Generation > Tongyi Wanxiang [wan-i2v(Fetch Task Result) ](https://doc-en.302.ai/api-265218440.md): The latest open-source video generation model from Alibaba - Video Generation > Jimeng [Seaweed (Text/picture generated video)](https://doc-en.302.ai/api-256341887.md): Video generation model from Jimeng, supports text-to-video and image-to-video (image-to-video only supports up to 5s) - Video Generation > Jimeng [Seaweed (Fetch Task Results)](https://doc-en.302.ai/api-256341888.md): **Price: 0 PTC/call** - Video Generation > SiliconFlow [LTX-Video(Video Generation) ](https://doc-en.302.ai/api-255033894.md): LTX-Video is the first real-time video generation model based on the DiT architecture, capable of producing high-quality videos at a speed even faster than video playback. The model supports video generation at 24 frames per second with a resolution of 768x512. It can generate videos from text as well as convert images combined with text into videos. Trained on a large-scale, diverse video dataset, the model can generate high-resolution, realistic, and content-rich videos. - Video Generation > SiliconFlow [HunyuanVideo(Video Generation) ](https://doc-en.302.ai/api-255033895.md): HunyuanVideo is an open-source video generation foundational model launched by Tencent, boasting over 13 billion parameters, making it the largest open-source video generation model to date.
The model employs a unified architecture for image and video generation, integrating key technologies such as data curation, joint image-video model training, and efficient infrastructure. It uses a multimodal large language model as the text encoder, performs spatial-temporal compression through 3D VAE, and offers a prompt rewriting feature. According to professional human evaluations, HunyuanVideo outperforms existing state-of-the-art models in terms of text alignment, motion quality, and visual quality. - Video Generation > SiliconFlow [Mochi-1-Preview(Video Generation) ](https://doc-en.302.ai/api-255033896.md): Mochi 1 is an open-source video generation model built on the novel AsymmDiT architecture. The model features 10 billion parameters and utilizes an asymmetric encoder-decoder structure, capable of compressing videos to 1/128th of their original size, with 8x8 spatial compression and 6x temporal compression. In preliminary evaluations, the model has demonstrated high-fidelity motion effects and strong prompt adherence capabilities. - Video Generation > SiliconFlow [Tasks(Fetch Task Result) ](https://doc-en.302.ai/api-255033897.md): Fetch Task Result - Video Generation > Google [Veo2(Text-to-video)](https://doc-en.302.ai/api-263672348.md): The latest video generation model from Google - Video Generation > Google [Veo2(Get task results) ](https://doc-en.302.ai/api-263672349.md): The latest video generation model from Google - Video Generation > Kunlun Tech [Skyreels(Image to Video)](https://doc-en.302.ai/api-263672450.md): The latest video generation model from Kunlun Tech - Video Generation > Kunlun Tech [Skyreels(Get task results) ](https://doc-en.302.ai/api-263672451.md): The latest video generation model from Kunlun Tech - Audio/Video Processing > 302.AI [Stable-Audio(instrumental generation)](https://doc-en.302.ai/api-219194446.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI. - Audio/Video Processing > 302.AI [Transcript (Audio/Video to Text)](https://doc-en.302.ai/api-207705230.md): Automatically extract speech from video or audio and convert it into text subtitles. - Audio/Video Processing > 302.AI [Transcriptions(Speech to Text)](https://doc-en.302.ai/api-207705229.md): Transcribe audio into the input language. - Audio/Video Processing > 302.AI [Alignments(Subtitle Timing) ](https://doc-en.302.ai/api-243529129.md): - Audio/Video Processing > 302.AI [WhisperX](https://doc-en.302.ai/api-238598485.md): Open-source version of WhisperX - Audio/Video Processing > 302.AI [F5-TTS(Text to Speech)](https://doc-en.302.ai/api-225254060.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI. - Audio/Video Processing > 302.AI [F5-TTS (Asynchronous Text-to-Speech)](https://doc-en.302.ai/api-244081300.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI. - Audio/Video Processing > 302.AI [F5-TTS (Asynchronously Retrieve Results)](https://doc-en.302.ai/api-244081628.md): 302.AI's API comes from models we deployed on cloud GPUs. Some models are open-source, while others are fine-tuned or developed by 302.AI.
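The 302.AI transcription entries above accept an audio or video file and return text. A minimal multipart-upload sketch, assuming a hypothetical path and a hypothetical `language` form field:

```python
import requests

API_KEY = "sk-..."
# Hypothetical path for the 302.AI speech-to-text endpoint; check its page for the real one.
url = "https://api.302.ai/302/transcriptions"

with open("meeting.mp3", "rb") as f:
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("meeting.mp3", f, "audio/mpeg")},  # audio to transcribe
        data={"language": "en"},                            # assumed parameter
        timeout=300,
    )
print(resp.json())  # transcript text / subtitle segments
```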
- Audio/Video Processing > 302.AI [mmaudio(Text-to-Speech)](https://doc-en.302.ai/api-248597785.md): AI video dubbing API: text input, voice generation, and synchronization with the video - Audio/Video Processing > 302.AI [mmaudio(AI Video Voiceover)](https://doc-en.302.ai/api-245622362.md): AI Video Voiceover - Audio/Video Processing > 302.AI [mmaudio (Asynchronous Result Retrieval)](https://doc-en.302.ai/api-245622558.md): AI Video Voiceover - Audio/Video Processing > 302.AI [Diffrhythm(Song Generation)](https://doc-en.302.ai/api-268457593.md): Open-source song generation: input reference music and lyrics to generate music. The maximum length generated is 1 minute and 35 seconds. Currently, the quality of English songs is better than that of Chinese songs. - Audio/Video Processing > OpenAI [Speech(Text to Speech tts-1)](https://doc-en.302.ai/api-207705220.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) - Audio/Video Processing > OpenAI [Transcriptions(Speech to Text whisper-1)](https://doc-en.302.ai/api-207705218.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) - Audio/Video Processing > OpenAI [Translations(Speech to English Text whisper-1)](https://doc-en.302.ai/api-207705219.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) - Audio/Video Processing > OpenAI [Realtime](https://doc-en.302.ai/api-222610017.md): [OpenAI Guide](https://platform.openai.com/docs/guides/realtime/quickstart) - Audio/Video Processing > Azure [AzureTTS(Text to Speech)](https://doc-en.302.ai/api-207705221.md): **Text-to-Speech service provided by Microsoft Azure** - Audio/Video Processing > Azure [Voice-List](https://doc-en.302.ai/api-207705222.md): **Text-to-Speech service provided by Microsoft Azure** - Audio/Video Processing > Suno [Music(Automatic Mode)](https://doc-en.302.ai/api-207705226.md): **Enter a keyword, automatically generate a song** - Audio/Video Processing > Suno [Music(Custom Mode)](https://doc-en.302.ai/api-207705223.md): **Generate a song by customizing settings such as lyrics and style.** - Audio/Video Processing > Suno [Music(Generate Lyrics)](https://doc-en.302.ai/api-207705227.md): **Enter a keyword, automatically generate lyrics.** - Audio/Video Processing > Suno [Music(Song Continuation)](https://doc-en.302.ai/api-207705224.md): Based on the previously generated song, continue writing a new song; each continuation is fixed at 2 minutes. - Audio/Video Processing > Suno [Fetch](https://doc-en.302.ai/api-207705225.md): Query the song generation status. - Audio/Video Processing > Doubao [tts_hd(Text to Speech)](https://doc-en.302.ai/api-207705231.md): **Text-to-Speech API from Doubao** - Audio/Video Processing > Doubao [vc-ata(Automatic subtitle timing)](https://doc-en.302.ai/api-241439484.md): Automatic subtitle timing from Doubao.
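The OpenAI audio entries above (`tts-1`, `whisper-1`) use the standard OpenAI audio API shape, so the official SDK works once the base URL points at 302.AI; the exact base URL below is an assumption, so verify it in the docs.

```python
from openai import OpenAI

# Assumes 302.AI exposes an OpenAI-compatible /v1 base; verify the base URL in the docs.
client = OpenAI(api_key="sk-...", base_url="https://api.302.ai/v1")

# Text to speech with tts-1.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input="Hello, world!")
speech.write_to_file("hello.mp3")

# Speech to text with whisper-1.
with open("hello.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
print(transcript.text)
```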
- Audio/Video Processing > Doubao [fetch(Query Generation Status)](https://doc-en.302.ai/api-241438924.md): Automatic subtitle timing from Doubao, query generation status - Audio/Video Processing > Doubao [vc(Audio and video caption generation)](https://doc-en.302.ai/api-242584438.md): Audio and video caption generation from Doubao - Audio/Video Processing > Doubao [fetch(Query caption result)](https://doc-en.302.ai/api-242593526.md): Automatic caption timing from Doubao, check generation status - Audio/Video Processing > Fish Audio [TTS(Text to Speech)](https://doc-en.302.ai/api-216956018.md): Text-to-speech from Fish Audio - Audio/Video Processing > Fish Audio [Model(Create Voice)](https://doc-en.302.ai/api-216956017.md): Sound cloning from Fish Audio: submit audio files for cloning. - Audio/Video Processing > Fish Audio [Model(Obtain Voice)](https://doc-en.302.ai/api-216956019.md): Sound cloning from Fish Audio - Audio/Video Processing > Fish Audio [Model(Delete Voice)](https://doc-en.302.ai/api-216956021.md): Sound cloning from Fish Audio - Audio/Video Processing > Fish Audio [Model(Update Voice)](https://doc-en.302.ai/api-216956022.md): Sound cloning from Fish Audio - Audio/Video Processing > Fish Audio [Model(Get Voice List)](https://doc-en.302.ai/api-216956020.md): Public Sound List from Fish Audio - Audio/Video Processing > Minimax [T2A(Async extra content generation)](https://doc-en.302.ai/api-224542216.md): Text-to-audio from Minimax - Audio/Video Processing > Minimax [T2A(Status Inquiry)](https://doc-en.302.ai/api-225432825.md): Text-to-audio from Minimax, status inquiry - Audio/Video Processing > Minimax [T2V(Create Voice)](https://doc-en.302.ai/api-224542215.md): Text-to-audio from Minimax - Audio/Video Processing > Minimax [Files(Audio File Download)](https://doc-en.302.ai/api-224542217.md): Audio file download for Minimax text-to-audio - Audio/Video Processing > Minimax [Music_Upload(Upload original music)](https://doc-en.302.ai/api-241838739.md): API description - Audio/Video Processing > Minimax [Music Generation API](https://doc-en.302.ai/api-241838654.md): API description - Audio/Video Processing > Minimax [T2A (voice generation-synchronization)](https://doc-en.302.ai/api-265968916.md): Text-to-audio from Minimax (synchronous) - Audio/Video Processing > Dubbingx [TTS(Text to Speech)](https://doc-en.302.ai/api-225432829.md): Text-to-speech asynchronous interface - Audio/Video Processing > Dubbingx [GetTTSList(Get Voice List)](https://doc-en.302.ai/api-225432826.md): Get the voice list - Audio/Video Processing > Dubbingx [GetTTSTask(Get Task Status)](https://doc-en.302.ai/api-225432827.md): Get task status - Audio/Video Processing > Dubbingx [Analyze(emotions)](https://doc-en.302.ai/api-225432828.md): Optionally, analyze sentiment based on the text and return the results. - Audio/Video Processing > Udio [Generate(Music Generation)](https://doc-en.302.ai/api-231604784.md): Enter prompt and generate a song - Audio/Video Processing > Udio [Generate(Music Continuation)](https://doc-en.302.ai/api-232137740.md): A continuation of the song - Audio/Video Processing > Udio [Query](https://doc-en.302.ai/api-231604783.md): Check song generation status - Audio/Video Processing > Elevenlabs [Speech-to-text(Speech-to-Text)](https://doc-en.302.ai/api-270141046.md): Speech-to-text from ElevenLabs, featuring the ability to mark applause, laughter, etc.
- Audio/Video Processing > Elevenlabs [Speech-to-text(Asynchronously fetch results)](https://doc-en.302.ai/api-270143269.md): Elevenlabs' speech-to-text feature can mark applause, laughter, and more. - Audio/Video Processing > Elevenlabs [TTS-Multilingual-v2(Text-to-Speech)](https://doc-en.302.ai/api-279192639.md): Text-to-speech from ElevenLabs - Audio/Video Processing > Elevenlabs [TTS-Multilingual-v2(Asynchronous result retrieval)](https://doc-en.302.ai/api-279192970.md): Text-to-speech from ElevenLabs - Audio/Video Processing > Elevenlabs [TTS-Flash-v2.5(Text-to-Speech)](https://doc-en.302.ai/api-279193033.md): Text-to-speech from ElevenLabs - Audio/Video Processing > Elevenlabs [TTS-Flash-v2.5(Asynchronous result retrieval)](https://doc-en.302.ai/api-279193050.md): Text-to-speech from ElevenLabs - Audio/Video Processing > Mureka [Upload Music](https://doc-en.302.ai/api-281661555.md): Upload a file that can be used across multiple different endpoints. - Audio/Video Processing > Mureka [Generate Lyrics from a Prompt](https://doc-en.302.ai/api-281661796.md): Create lyrics based on a given prompt. - Audio/Video Processing > Mureka [Continue writing lyrics from existing lyrics](https://doc-en.302.ai/api-281661968.md): Keep extending the lyrics from the current ones. - Audio/Video Processing > Mureka [Generate a Song from Lyrics](https://doc-en.302.ai/api-281661992.md): Create a song based on provided lyrics. - Audio/Video Processing > Mureka [Retrieve the Generated Song](https://doc-en.302.ai/api-281662029.md): Fetch the generated song. - Audio/Video Processing > Mureka [Separate Music Stems](https://doc-en.302.ai/api-281662109.md): Separate the input music into individual audio elements such as vocals, instruments, etc. - Audio/Video Processing > Mureka [Generate Instrumental Music Track](https://doc-en.302.ai/api-281662147.md): Generate instrumental music based on user input. - Audio/Video Processing > Mureka [Retrieve Instrumental Music Track](https://doc-en.302.ai/api-281662169.md): Retrieve the generated instrumental music track. - Audio/Video Processing > Mureka [Text-to-Speech](https://doc-en.302.ai/api-281662174.md): Generate audio from input text. - Audio/Video Processing > Mureka [Create Podcast Audio](https://doc-en.302.ai/api-281662201.md): Convert a two-person dialogue script into a natural-sounding, podcast-style audio conversation ready for publishing.
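The Mureka entries above chain naturally: generate lyrics from a prompt, generate a song from those lyrics, then retrieve the result. A sketch of the chain, with hypothetical paths and response fields (`lyrics`, `id`); check each endpoint's page for the real schema.

```python
import requests

API_KEY = "sk-..."
BASE = "https://api.302.ai"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# All paths and response fields below are assumptions; see each endpoint's page.
lyrics = requests.post(f"{BASE}/mureka/v1/lyrics/generate", headers=HEADERS,
                       json={"prompt": "a folk song about winter mornings"}, timeout=60).json()

song = requests.post(f"{BASE}/mureka/v1/song/generate", headers=HEADERS,
                     json={"lyrics": lyrics["lyrics"]}, timeout=60).json()

# Generation is asynchronous; re-fetch via the retrieval endpoint until the song is ready.
result = requests.get(f"{BASE}/mureka/v1/song/query/{song['id']}", headers=HEADERS, timeout=60).json()
print(result)
```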
- Information Processing > 302.AI > Admin Dashboard [Balance(Account balance)](https://doc-en.302.ai/api-263735171.md): Get the balance of the corresponding 302.AI account - Information Processing > 302.AI > Information search [Xiaohongshu_Search](https://doc-en.302.ai/api-214738984.md): **Price:0.02PTC/call** - Information Processing > 302.AI > Information search [Xiaohongshu_Note](https://doc-en.302.ai/api-214738985.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Get_Home_Recommend](https://doc-en.302.ai/api-238638370.md): **Price:0.01PTC/call** - Information Processing > 302.AI > Information search [Tiktok_Search](https://doc-en.302.ai/api-214738989.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Douyin_Search](https://doc-en.302.ai/api-214738990.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Twitter_Search](https://doc-en.302.ai/api-236434650.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Twitter_Post(X_Post)](https://doc-en.302.ai/api-214738986.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Twitter_User(X_User)](https://doc-en.302.ai/api-214738988.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Weibo_Post](https://doc-en.302.ai/api-214738987.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Search_Video](https://doc-en.302.ai/api-236434477.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Youtube_Info](https://doc-en.302.ai/api-222610812.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Youtube_Subtitles(Youtube Obtain Subtitles)](https://doc-en.302.ai/api-252702346.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [Bilibili_Info(Bilibili Obtain Video Information)](https://doc-en.302.ai/api-252702200.md): **Price:0.001PTC/call** - Information Processing > 302.AI > Information search [MP_Article_List(Get the list of WeChat official account articles)](https://doc-en.302.ai/api-269224841.md): **Price: 0.01 PTC/call** - Information Processing > 302.AI > Information search [MP_Article(Retrieve WeChat Official Account articles)](https://doc-en.302.ai/api-269233916.md): **Price: 0.001 PTC/call** - Information Processing > 302.AI > File processing [Parsing](https://doc-en.302.ai/api-226751725.md): Convert files to text format for streamlined processing by large language models - Information Processing > 302.AI > File processing [Upload-File](https://doc-en.302.ai/api-232502112.md): Upload the file for the LLM to process. - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [One-click Code Execution](https://doc-en.302.ai/api-276825891.md): Automatically create a sandbox, and destroy it immediately after execution. Optional feature to export sandbox files (if there are multiple files in the directory, they will be compressed into a zip file for export; a single file will be exported directly). This interface is recommended if continuous sandbox operations are not required. - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [Create Sandbox](https://doc-en.302.ai/api-276825984.md): After successful creation, the sandbox will automatically pause.
When you call other sandbox operation interfaces, the sandbox will automatically reconnect, and after execution, it will pause again to avoid unnecessary costs. (Note: Pausing and reconnecting will take some time, approximately 5 seconds in total.) - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [Query Your Sandbox List](https://doc-en.302.ai/api-276826458.md): Sandbox information is bound to the API key, so you can only query the sandbox information associated with the current API key. - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [Destroy Sandbox](https://doc-en.302.ai/api-276826507.md): **Price:0 PTC/call** - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [Run-Code](https://doc-en.302.ai/api-276828474.md): This interface only returns text-type outputs. If the code involves file generation or similar operations, please use the "View File" interface to check file information and the "Export File" interface to export files. - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [Run Command Line](https://doc-en.302.ai/api-276829298.md): This interface only returns text-type outputs. If the command involves file generation or similar operations, please use the "View File" interface to check file information and the "File Download" interface to export files to the 302 file system. - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [Query File Information at Specified Path](https://doc-en.302.ai/api-276829634.md): Supports batch queries; you can pass a list of paths. - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [Import File Data into Sandbox](https://doc-en.302.ai/api-276829674.md): Supports batch import. If a file exists at the save path, it will be overwritten. If the folder at the save path does not exist, it will be automatically created. - Information Processing > 302.AI > Code execution > Virtual Machine Sandbox [Export Sandbox Files](https://doc-en.302.ai/api-276830648.md): Supports Batch Export - Information Processing > 302.AI > Code execution > Static Sandbox [Run-Code](https://doc-en.302.ai/api-239070004.md): Run JS or Python code in a sandbox - Information Processing > 302.AI > Remote Browser [Create Browser Automation Task](https://doc-en.302.ai/api-282235063.md): Create a remote browser automation task based on Browser Use. - Information Processing > 302.AI > Remote Browser [Query Browser Task Status](https://doc-en.302.ai/api-282235713.md): Query the status of a remote browser automation task based on Browser Use. - Information Processing > Tavily [Search](https://doc-en.302.ai/api-207705253.md): Tavily is a company focused on AI search. Their search is optimized for LLMs (Large Language Models) to facilitate data retrieval for them. - Information Processing > Tavily [Extract](https://doc-en.302.ai/api-235295291.md): Tavily is a company focused on AI search. Their search is optimized for LLMs (Large Language Models) to facilitate data retrieval for them. - Information Processing > SearchAPI [Search](https://doc-en.302.ai/api-207705254.md): SearchAPI is a company that provides search APIs, allowing for quick and easy access to content from the Google search engine. We are fully aligned with their official interface, so you only need to replace the API Base URL.
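Because the SearchAPI endpoints are stated to be fully aligned with the official interface, existing SearchAPI query code should work after swapping the base URL. In the sketch below the proxied path is an assumption, while the `engine`/`q` parameters and the `organic_results` field follow SearchAPI's own schema.

```python
import requests

# Drop-in SearchAPI call with the base URL swapped to the 302.AI proxy (path assumed).
resp = requests.get(
    "https://api.302.ai/searchapi/search",
    params={"engine": "google", "q": "retrieval-augmented generation"},
    headers={"Authorization": "Bearer sk-..."},
    timeout=30,
)
print(resp.json()["organic_results"][0])  # field name follows SearchAPI's schema
```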
- Information Processing > SearchAPI [Search(News)](https://doc-en.302.ai/api-207705257.md): SearchAPI is a company that provides search APIs, allowing for quick and easy access to content from the Google search engine. We are fully aligned with their official interface, so you only need to replace the API Base URL. - Information Processing > SearchAPI [Search(Images)](https://doc-en.302.ai/api-207705258.md): SearchAPI is a company that provides search APIs, allowing for quick and easy access to content from the Google search engine. We are fully aligned with their official interface, so you only need to replace the API Base URL. - Information Processing > SearchAPI [Search(Lens)](https://doc-en.302.ai/api-207705260.md): SearchAPI is a company that provides search APIs, allowing for quick and easy access to content from the Google search engine. We are fully aligned with their official interface, so you only need to replace the API Base URL. - Information Processing > SearchAPI [Search(Videos)](https://doc-en.302.ai/api-207705259.md): SearchAPI is a company that provides search APIs, allowing for quick and easy access to content from the Google search engine. We are fully aligned with their official interface, so you only need to replace the API Base URL. - Information Processing > SearchAPI [Search(Scholar)](https://doc-en.302.ai/api-207705255.md): SearchAPI is a company that provides search APIs, allowing for quick and easy access to content from the Google search engine. We are fully aligned with their official interface, so you only need to replace the API Base URL. - Information Processing > SearchAPI [Search(Patents)](https://doc-en.302.ai/api-207705256.md): SearchAPI is a company that provides search APIs, allowing for quick and easy access to content from the Google search engine. We are fully aligned with their official interface, so you only need to replace the API Base URL. - Information Processing > Search1API [Search](https://doc-en.302.ai/api-235326268.md): Search1API is a company focused on search, and their distinctive feature is affordable pricing. - Information Processing > Search1API [News](https://doc-en.302.ai/api-235326270.md): Search1API is a company focused on search, and their distinctive feature is affordable pricing. - Information Processing > Search1API [Crawl](https://doc-en.302.ai/api-235326269.md): Search1API is a company focused on search, and their distinctive feature is affordable pricing. - Information Processing > Search1API [Sitemap(Site Map)](https://doc-en.302.ai/api-269263445.md): Search1API is a company focused on search services, and their main feature is affordability. - Information Processing > Search1API [Trending (Popular Trends)](https://doc-en.302.ai/api-276249952.md): Search1API is a company specializing in search, with their standout feature being affordable pricing. - Information Processing > Exa [Search](https://doc-en.302.ai/api-207705275.md): **Exa AI**, an emerging AI search engine company, recently announced that it has raised $17 million in Series A funding, led by Lightspeed with participation from Nvidia's NVentures and Y Combinator. Unlike other search engines, Exa aims to become a dedicated search tool for AI. - Information Processing > Exa [Contents(Get content)](https://doc-en.302.ai/api-207705276.md): **Exa AI**, an emerging AI search engine company, recently announced that it has raised $17 million in Series A funding, led by Lightspeed with participation from Nvidia's NVentures and Y Combinator. 
Unlike other search engines, Exa aims to become a dedicated search tool for AI. - Information Processing > Exa [Answer](https://doc-en.302.ai/api-276258454.md): Exa AI is an emerging AI search engine company that recently announced a $17 million Series A funding round, led by Lightspeed, with participation from Nvidia's NVentures and Y Combinator. Unlike other search engines, Exa aims to be a dedicated search tool for AI. - Information Processing > Bocha AI [Web-search](https://doc-en.302.ai/api-207705277.md): Invoke AI search to answer user questions, returning multimodal reference sources (web pages, TikTok videos, images), summarized answers, and follow-up questions. - Information Processing > Bocha AI [Ai-search](https://doc-en.302.ai/api-207705278.md): Utilize AI search to answer user questions, providing multimodal reference sources (web pages, TikTok videos, images), summarized answers, and follow-up questions. - Information Processing > Doc2x > Version 2 [PDF(Upload - Asynchronous)](https://doc-en.302.ai/api-232502061.md): Upload the PDF to start parsing. - Information Processing > Doc2x > Version 2 [Status(View Status)](https://doc-en.302.ai/api-232502063.md): Check the processing status after uploading the PDF. - Information Processing > Doc2x > Version 2 [Parse(Request Export File - Asynchronous)](https://doc-en.302.ai/api-232502062.md): Export the uploaded PDFs to other formats. - Information Processing > Doc2x > Version 2 [Result(exported results) ](https://doc-en.302.ai/api-232502064.md): Get the exported results - Information Processing > Doc2x > Version 1 (Deprecated) [PDF(PDF-to-MD)](https://doc-en.302.ai/api-207705261.md): **Convert PDFs to MD format**, from our partner **Doc2x**: https://doc2x.com/ - Information Processing > Doc2x > Version 1 (Deprecated) [PDF-Async](https://doc-en.302.ai/api-207705264.md): **Convert PDFs to any format**, from our partner **Doc2x**: https://doc2x.com/ - Information Processing > Doc2x > Version 1 (Deprecated) [IMG-to-MD](https://doc-en.302.ai/api-207705262.md): **Convert IMG to MD format**, from our partner **Doc2x**: https://doc2x.com/ - Information Processing > Doc2x > Version 1 (Deprecated) [IMG-Async](https://doc-en.302.ai/api-207705263.md): **Convert IMG to any format**, from our partner **Doc2x**: https://doc2x.com/ - Information Processing > Doc2x > Version 1 (Deprecated) [Status](https://doc-en.302.ai/api-207705265.md): Get conversion status. - Information Processing > Doc2x > Version 1 (Deprecated) [Export](https://doc-en.302.ai/api-207705266.md): Export file, optional formats: - Information Processing > Glif [Glif(Bot)](https://doc-en.302.ai/api-207705267.md): **Glif is a bot-building platform similar to Coze**: https://glif.app/ - Information Processing > Jina [Reader(Web Page to Markdown)](https://doc-en.302.ai/api-207705268.md): Feeding web information into LLMs is an important step for grounding, but it can be challenging. The simplest approach is to scrape web pages and provide the raw HTML. However, scraping can be complex and is often blocked, and the raw HTML is filled with extraneous elements like tags and scripts. Reader API addresses these issues by extracting the core content from the URL and converting it into clean, LLM-friendly text, ensuring that your agents and RAG systems receive high-quality input.
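The Reader entry above converts a URL into clean, LLM-friendly text, and the Search entry that follows works by prepending the query with https://s.jina.ai/. A sketch using Jina's public prefix convention; the 302.AI-proxied equivalents should accept the same shape, but verify on the endpoint pages.

```python
import requests
from urllib.parse import quote

# Reader: prefix the target URL to get clean, LLM-friendly text back.
# Calling Jina's public endpoint here; the 302.AI-proxied form should match in shape.
page = requests.get("https://r.jina.ai/https://example.com/article", timeout=60)
print(page.text[:500])

# Search (see the entry that follows): prepend the query with s.jina.ai
# to get the top five results with URLs and content as clean text.
results = requests.get("https://s.jina.ai/" + quote("what is grounding for LLMs"), timeout=60)
print(results.text[:500])
```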
- Information Processing > Jina [Search](https://doc-en.302.ai/api-207705269.md): LLMs have a knowledge cutoff point, which means they cannot access the most up-to-date world knowledge. This can lead to issues such as misinformation, outdated responses, hallucinations, and other factual errors. For GenAI applications, grounding is absolutely essential. Reader allows you to ground your LLM with the latest information from the web. Simply prepend your query with https://s.jina.ai/, and Reader will search the web and return the top five results along with their URLs and content, each formatted as clean, LLM-friendly text. This way, you can keep your LLM up-to-date, improve its accuracy, and reduce hallucinations. - Information Processing > Jina [Grounding(Verification of Facts)](https://doc-en.302.ai/api-230624831.md): Fact-checking by searching, from Jina - Information Processing > Jina [Classify](https://doc-en.302.ai/api-230629283.md): Categorizing and tagging content using embedding models, from https://jina.ai/classifier - Information Processing > DeepL [Chat(Translate into English)](https://doc-en.302.ai/api-207705272.md): Translation service from **DeepL** - Information Processing > DeepL [Chat(Translate into Chinese)](https://doc-en.302.ai/api-207705273.md): Translation service from **DeepL** - Information Processing > DeepL [Chat(Translate into Japanese)](https://doc-en.302.ai/api-207705274.md): Translation service from **DeepL** - Information Processing > DeepL [Translate(Translate into various languages)](https://doc-en.302.ai/api-214730964.md): Translation service from DeepL - Information Processing > RSSHub [RSSHub](https://doc-en.302.ai/api-235940255.md): RSSHub is an open-source project that converts a wide variety of websites into RSS format data, making it convenient for users to receive updates in a timely manner, including but not limited to public accounts and Xiaohongshu. - Information Processing > Firefly card [saveImg(Card Generation)](https://doc-en.302.ai/api-245142413.md): Edit the card on the webpage https://fireflycard.shushiai.com/, click to copy the JSON, and paste it as a body parameter. - Information Processing > Youdao [Youdao(Youdao Translate)](https://doc-en.302.ai/api-250800740.md): Translation API from Youdao - Information Processing > Mistral [OCR(PDF Parsing)](https://doc-en.302.ai/api-270148035.md): PDF parsing by Mistral allows you to quickly convert PDFs into MD. - RAG-related > OpenAI [Embeddings](https://doc-en.302.ai/api-207705279.md): [OpenAI Guide](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) - RAG-related > Jina [Embeddings](https://doc-en.302.ai/api-207705286.md): Embedding model from Jina: https://jina.ai/embeddings - RAG-related > Jina [Rerank](https://doc-en.302.ai/api-207705288.md): Rerank model from Jina: https://jina.ai/rerank - RAG-related > Jina [Rerank(Multimodal Reordering)](https://doc-en.302.ai/api-282201344.md): Jina's multimodal rerank model can simultaneously rank images and text. - RAG-related > Jina [Tokenizer](https://doc-en.302.ai/api-207705287.md): Tokenize model from Jina: https://jina.ai/tokenizer/ - RAG-related > China Model [Embeddings(Zhipu) ](https://doc-en.302.ai/api-207705280.md): Text embeddings are used to measure the relevance between text strings.
Embeddings are typically used for: - RAG-related > China Model [Embeddings(BAAI)](https://doc-en.302.ai/api-207705285.md): Support two models: - RAG-related > China Model [Embeddings(Baichuan AI) ](https://doc-en.302.ai/api-207705281.md): Text embeddings are used to measure the relevance between text strings. Embeddings are typically used for: - RAG-related > China Model [Embeddings(Youdao) ](https://doc-en.302.ai/api-207705282.md): Text embeddings are used to measure the relevance between text strings. Embeddings are typically used for: - RAG-related > China Model [Rerank(Youdao) ](https://doc-en.302.ai/api-207705283.md): **Price:0.02 PTC / 1M tokens** - RAG-related > China Model [Rerank(BAAI)](https://doc-en.302.ai/api-207705284.md): **Price:0.02 PTC / 1M tokens** - RAG-related > 302.AI [Chat(with KB)](https://doc-en.302.ai/api-222611742.md): Knowledge base dialogues - RAG-related > 302.AI [Chat(with KB-OpenAI compatible)](https://doc-en.302.ai/api-238962472.md): Knowledge base dialogue: This API is OpenAI-compatible. The API Key for the knowledge base bot will be automatically linked to the knowledge base selected in the backend and cannot use a generic key. - RAG-related > 302.AI [Create(Knowledge Base)](https://doc-en.302.ai/api-222611737.md): Create a Knowledge Base - RAG-related > 302.AI [Delete(Knowledge Base)](https://doc-en.302.ai/api-222611741.md): Delete the specified knowledge base - RAG-related > 302.AI [Upload](https://doc-en.302.ai/api-222611738.md): Upload files to a specified knowledge base - RAG-related > 302.AI [List(KB)](https://doc-en.302.ai/api-222611739.md): Get a list of knowledge bases - RAG-related > 302.AI [Info](https://doc-en.302.ai/api-222611740.md): Access to Knowledge Base Details - RAG-related > 302.AI [Meta-Chunking(Text LLM slices)](https://doc-en.302.ai/api-245142513.md): Use LLM's text comprehension to segment content, ensuring the slices maintain coherent context - RAG-related > 302.AI [Meta-Chunking(File LLM slices)](https://doc-en.302.ai/api-245142548.md): Use LLM's text comprehension to segment content, ensuring the slices maintain coherent context - Tools API > AI Video Creation Hub [Scripts(Generate Video Content Copy)](https://doc-en.302.ai/api-248602281.md): The API used by the AI video content idea platform: Generate corresponding video scripts based on input keywords. - Tools API > AI Video Creation Hub [Terms(Generate Video Material Search Keywords)](https://doc-en.302.ai/api-248602282.md): The API used by the AI video content idea platform: Generate search keywords for video platforms based on the video theme and script. - Tools API > AI Video Creation Hub [Videos(Create Video Material Generation Task)](https://doc-en.302.ai/api-248602283.md): The API used by the AI video content idea platform: Generate stitched video materials based on the script and search keywords. - Tools API > AI Video Creation Hub [Tasks(Get Video Task Progress)](https://doc-en.302.ai/api-248602284.md): The API used by the AI video content idea platform: Retrieve the progress of a video generation task - Tools API > AI Paper Writing > CO-STORM [Create generate article task](https://doc-en.302.ai/api-258284198.md): Enter topics and concerns to create an article-generation task. - Tools API > AI Paper Writing > CO-STORM [Continue to generate dialogue interfaces](https://doc-en.302.ai/api-258284199.md): Simulated dialogue enables information acquisition.
Users can enter specified content or let AI continue to generate simulated dialogue based on the above, thereby obtaining more information to generate articles. - Tools API > AI Paper Writing > CO-STORM [Update article content interface](https://doc-en.302.ai/api-258284200.md): After calling the continue-dialogue interface, the knowledge base corresponding to task_id will be updated; you can then update the article through this interface. - Tools API > AI Paper Writing > CO-STORM [Get article information](https://doc-en.302.ai/api-258284201.md): **Price:0PTC/call** - Tools API > AI Paper Writing [Asynchronous Paper Generation](https://doc-en.302.ai/api-245500983.md): Asynchronous API for Generating Papers in an AI Writing Tool - Tools API > AI Paper Writing [Fetch](https://doc-en.302.ai/api-245500984.md): Fetch Paper Results - Tools API > AI Podcast Production [Asynchronously Generate Podcast Transcripts](https://doc-en.302.ai/api-245500985.md): Interfaces used in AI podcast generation tools. - Tools API > AI Podcast Production [Check the status of the podcast text generation task](https://doc-en.302.ai/api-245500986.md): Check task status - Tools API > AI Podcast Production [Asynchronously Generate Podcast Audio](https://doc-en.302.ai/api-245500987.md): Interfaces used in AI podcast generation tools. - Tools API > AI Podcast Production [Check the status of the podcast audio generation task](https://doc-en.302.ai/api-245500988.md): Check task status - Tools API > AI Writing Assistant [Get Tools' List](https://doc-en.302.ai/api-246101919.md): Get a list of copywriting tools, primarily used for retrieving tool information and the parameter attributes for generating copywriting through APIs. - Tools API > AI Writing Assistant [Generate Copywriting](https://doc-en.302.ai/api-246101920.md): Retrieve the corresponding `tool_name` and `params` from the interface for obtaining the copywriting tool list, and pass them to this interface to generate copywriting. - Tools API > AI Video Real-Time Translation [Query Video Information](https://doc-en.302.ai/api-247577932.md): Retrieve video title, thumbnail, resolution format list, and playback link. - Tools API > AI Video Real-Time Translation [Video Download](https://doc-en.302.ai/api-247577933.md): Download YouTube and Bilibili videos, upload them to a 302 file server, and extract the audio track. - Tools API > AI Video Real-Time Translation [Extract Audio from Video](https://doc-en.302.ai/api-248490188.md): Extract Audio from Video - Tools API > AI Video Real-Time Translation [Audio vocal separation and transcription](https://doc-en.302.ai/api-255892113.md): Use our own optimized whisper model to transcribe audio into word-level text data - Tools API > AI Video Real-Time Translation [Subtitle Translation](https://doc-en.302.ai/api-247577935.md): Input word-level audio transcription results and return translated SRT subtitles. - Tools API > AI Video Real-Time Translation [Video Burning](https://doc-en.302.ai/api-247577936.md): Burn SRT subtitles into the video.
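The AI Video Real-Time Translation entries above form a pipeline: download the video and extract its audio, transcribe to word-level text, translate into SRT subtitles, then burn them in. A sketch of the chaining only, with every path and field a hypothetical placeholder:

```python
import requests

API_KEY = "sk-..."
BASE = "https://api.302.ai"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Paths and fields below are placeholders showing the order of the pipeline;
# the real steps are asynchronous, so poll the task-status endpoint between calls.
video = requests.post(f"{BASE}/302/video-trans/download", headers=HEADERS,
                      json={"url": "https://www.youtube.com/watch?v=..."}, timeout=60).json()

words = requests.post(f"{BASE}/302/video-trans/transcribe", headers=HEADERS,
                      json={"audio_url": video["audio_url"]}, timeout=60).json()

srt = requests.post(f"{BASE}/302/video-trans/translate", headers=HEADERS,
                    json={"words": words["words"], "target_lang": "en"}, timeout=60).json()

burned = requests.post(f"{BASE}/302/video-trans/burn", headers=HEADERS,
                       json={"video_url": video["video_url"], "srt": srt["srt"]}, timeout=60).json()
print(burned)  # final video with subtitles burned in
```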
- Tools API > AI Video Real-Time Translation [Original sound clone](https://doc-en.302.ai/api-255891913.md): Generate a new voice from the original voice and new audio content, suitable for dubbing audio into different languages - Tools API > AI Video Real-Time Translation [Query task status](https://doc-en.302.ai/api-247577937.md): Query task status - Tools API > AI Document Editor [Generate a long text outline](https://doc-en.302.ai/api-258267631.md): **Price: Charge based on the calling model** - Tools API > AI Document Editor [Generate article content](https://doc-en.302.ai/api-258267632.md): Streaming response - Tools API > Web Data Extraction Tool [Generate Schema](https://doc-en.302.ai/api-261542258.md): Generate corresponding Schema through web page and description - Tools API > Web Data Extraction Tool [Create an extraction task](https://doc-en.302.ai/api-261542259.md): Create a web crawling task - Tools API > Web Data Extraction Tool [Query extraction progress](https://doc-en.302.ai/api-261542260.md): Get the progress of web crawling tasks - Tools API > AI Prompt Expert [Prompt Optimization](https://doc-en.302.ai/api-280482594.md): Interface responses are consistent with OpenAI chat model responses, supporting streaming returns. - Tools API > AI Prompt Expert [Image prompt generation](https://doc-en.302.ai/api-261664608.md): Convert a picture into an AI drawing prompt that can be used to generate a similar picture - Tools API > AI Prompt Expert [Create SPO Prompt Optimization Task](https://doc-en.302.ai/api-279192337.md): Example Usage: - Tools API > AI Prompt Expert [Query SPO Prompt Optimization Results](https://doc-en.302.ai/api-279192443.md): **Price: 0 PTC/call** - Tools API > AI 3D Modeling [3D model file type conversion](https://doc-en.302.ai/api-261680304.md): From open source project: https://github.com/mikedh/trimesh - Tools API > AI Search Master 3.0 [AI Search](https://doc-en.302.ai/api-265291523.md): The search API in Search Master only supports streaming mode; you need to adapt to it yourself. - Tools API > AI Vector Graphics Generation [SVG to video](https://doc-en.302.ai/api-265298816.md): Convert an SVG into a video of its drawing process. - Tools API > AI Answer Machine [Answer](https://doc-en.302.ai/api-261663767.md): Supports JSON parameters (content: an image link, a Base64 image starting with data:image, or title text) and FormData parameters (content: a binary image) - Tools API > AI PPT Generator [Generate PPT interface with one click](https://doc-en.302.ai/api-265304598.md): **Price: 0.07 PTC/call** - Tools API > AI PPT Generator [File parsing](https://doc-en.302.ai/api-265304599.md): The generated file link is only valid on the same day - Tools API > AI PPT Generator [Generate an outline](https://doc-en.302.ai/api-265304600.md): **Price: Free** - Tools API > AI PPT Generator [Generate outline content](https://doc-en.302.ai/api-265304601.md): When synchronous PPT generation is selected, this interface does not deduct credits; the deduction is only triggered by the synchronous PPT generation interface.
- Tools API > AI PPT Generator [Get template options](https://doc-en.302.ai/api-265304602.md): **Price: Free** - Tools API > AI PPT Generator [Generate PPT interface (synchronous interface)](https://doc-en.302.ai/api-265304603.md): **Price: 0.07 PTC/call** - Tools API > AI PPT Generator [Load PPT data](https://doc-en.302.ai/api-265304604.md): **Price: Free** - Tools API > AI PPT Generator [Generate PPT interface (asynchronous interface)](https://doc-en.302.ai/api-265304605.md): When outline content is requested with asyncGenPptx=true, the PPT is generated asynchronously; there is no need to call the PPT generation interface again. - Tools API > AI PPT Generator [Asynchronously query PPT generation status](https://doc-en.302.ai/api-265304606.md): Note: This interface can only query data (temporary cache data) during streaming generation. The data expires 30 seconds after the response. - Tools API > AI PPT Generator [Download PPT](https://doc-en.302.ai/api-265304607.md): **Price: Free** - Tools API > AI PPT Generator [Add/update custom PPT templates](https://doc-en.302.ai/api-265304608.md): Uploaded templates are isolated by API key; when querying custom templates, only the template data uploaded with the corresponding API key is returned. - Tools API > AI PPT Generator [Paginated query of PPT templates](https://doc-en.302.ai/api-265304609.md): **Price: Free** - Tools API > AI Academic Paper Search [arxiv Paper Search](https://doc-en.302.ai/api-265250453.md): Search for arXiv papers and translate titles - Tools API > AI Academic Paper Search [Google Paper Search](https://doc-en.302.ai/api-265250454.md): **Price: 0.005PTC/call, no charge for hitting cache** - Tools API > One-Click Website Deployment [Create Hosted Webpage (Form Parameter API)](https://doc-en.302.ai/api-285347675.md): **Price:0.01 PTC/call** - Tools API > One-Click Website Deployment [Create Hosted Webpage (JSON Parameter API) ](https://doc-en.302.ai/api-285348343.md): **Price:0.01 PTC/call** - Tools API > One-Click Website Deployment [Create Hosted Webpage (Binary Parameter API) ](https://doc-en.302.ai/api-285348409.md): The HTML file is placed directly in the request body as binary; other parameters are placed in the query string - Tools API > One-Click Website Deployment [Query the List of Hosted Projects under an API Key](https://doc-en.302.ai/api-285348505.md): Associated via API key.
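For the binary-parameter deployment endpoint above, the entry specifies the split: raw HTML in the request body, everything else in the query string. A sketch under that split; the path and the query parameter name are assumptions.

```python
import requests

API_KEY = "sk-..."
# Hypothetical path; the body/query-string split comes from the entry above.
url = "https://api.302.ai/302/website/deploy"

with open("index.html", "rb") as f:
    resp = requests.post(
        url,
        params={"name": "my-demo-site"},  # other parameters go in the query string (name assumed)
        data=f.read(),                    # the HTML file goes in the body as raw binary
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "text/html"},
        timeout=60,
    )
print(resp.json())  # typically the hosted page URL
```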