- Large Language Model
- API Migration Guide
- Exclusive Feature
- Model Support
- OpenAI
- Chat (Talk)
- Responses (Talk)
- Chat (Streaming Response)
- Chat (gpt-4o Image Analysis)
- Chat (gpt-4o Structured Output)
- Chat (gpt-4o function call)
- Chat (gpt-4-plus image analysis)
- Chat (gpt-4-plus image generation)
- Chat (gpt-4o-image-generation Modify Image)
- Chat (gpts model)
- Chat (chatgpt-4o-latest)
- Chat (o1 Series Model)
- Chat (o3 Series Model)
- Chat (o4 Series Model)
- Chat (gpt-4o Audio Model)
- Anthropic
- Gemini
- China Model
- Chat (Baidu ERNIE)
- Chat (Tongyi Qianwen)
- Chat (Tongyi Qianwen-VL)
- Chat (Tongyi Qianwen-OCR)
- Chat (Zhipu GLM-4)
- Chat (Zhipu GLM-4V)
- Chat (Baichuan AI)
- Chat (Moonshot AI)
- Chat (Moonshot AI-Vision)
- Chat (01.AI)
- Chat (01.AI-VL)
- Chat (DeepSeek)
- Chat (DeepSeek-VL2)
- Chat (ByteDance Doubao)
- Chat (ByteDance Doubao-Vision)
- Chat (ByteDance Doubao Image Generation)
- Chat (Stepfun)
- Chat (Stepfun Multimodal)
- Chat (iFLYTEK Spark)
- Chat (SenseTime)
- Chat (Minimax)
- Chat (Tencent Hunyuan)
- SiliconFlow
- PPIO
- Open Source Model
- Expert Model
- Other Models
- Image Generation
- Unified Interface
- GPT-Image-1
- DALL·E
- Stability.ai
- Text-to-image (Image Generation-V1)
- Generate (Image Generation-SD2)
- Generate (Image Generation-SD3-Ultra)
- Generate (Image Generation-SD3)
- Generate (Image Generation-SD3.5-Large)
- Generate (Image Generation-SD3.5-Medium)
- Generate (Image to Image-SD3)
- Generate (Image to Image-SD3.5-Large)
- Generate (Image to Image-SD3.5-Medium)
- Midjourney
- Midjourney-Relax
- 302.AI
- Glif
- Flux
- Ideogram
- Recraft
- Luma
- Doubao
- Minimax
- ZHIPU
- Baidu
- Hidream
- Bagel
- SiliconFlow
- Image Processing
- 302.AI
- Upscale
- Upscale-V2
- Upscale-V3
- Upscale-V4
- Super-Upscale
- Super-Upscale-V2
- Face-upscale
- Colorize
- Colorize-V2
- Removebg
- Removebg-V2
- Removebg-V3
- Inpaint
- Erase
- Face-to-many
- Llava
- Relight
- Relight-background
- Relight-V2
- Face-swap-V2
- Fetch
- HtmltoPng
- SvgToPng
- image-translate
- image-translate-query
- image-translate-redo
- Flux-selfie
- Trellis (Image to 3D Model)
- Pose-Transfer (Human Pose Transformation)
- Pose-Transfer (Human Pose Transformation Result)
- Virtual-Tryon
- Virtual-Tryon (Fetch Result)
- Denoise (AI Denoising)
- Deblur (AI Deblurring)
- 302.AI-ComfyUI
- Create Outfit Change Task
- Create Outfit Change Task (Upload Mask)
- Query Outfit Change Task Status
- Create Face Swap Task
- Query Face Swap Task Status
- Create Object Replacement Task
- Create Object Replacement Task (Upload Mask)
- Query Object Replacement Task Status
- Create Cartoon-to-Real-Person Task
- Query Cartoon-to-Real-Person Task Status
- Create Style Transfer Task
- Query Style Transfer Task Status
- Create Image Removal Task
- Query Image Removal Task Status
- Create Video Face Swap Task
- Query Video Face Swap Task Status
- Vectorizer
- Stability.ai
- Glif
- Clipdrop
- Recraft
- BRIA
- Flux
- Official API
- Flux-V1.1-Ultra-Redux (Image-to-Image Generation-Ultra)
- Flux-V1.1-Pro-Redux (Image-to-Image Generation-Pro)
- Flux-Dev-Redux (Image-to-Image Generation-Dev)
- Flux-Schnell-Redux (Image-to-Image Generation-Schnell)
- Flux-V1-Pro-Canny (Object Consistency)
- Flux-V1-Pro-Depth (Depth Consistency)
- Flux-V1-Pro-Fill (Partial Repainting)
- Flux-Kontext-Pro (Image Edit)
- Flux-Kontext-Max (Image Edit)
- Hyper3D
- Tripo3D
- FASHN
- Ideogram
- Doubao
- Kling
- StepFun
- Bagel
- 302.AI
- Video Generation
- Unified Interface
- 302.AI
- Stable Diffusion
- Luma AI
- Runway
- Kling
- 302 Format
- Txt2Video (Text to Video 1.0 Rapid-5s)
- Txt2Video_HQ (Text to Video 1.5 HQ-5s)
- Txt2Video_HQ (Text to Video 1.5 HQ-10s)
- Image2Video (Image to Video 1.0 Rapid-5s)
- Image2Video (Image to Video 1.0 Rapid-10s)
- Image2Video (Image to Video 1.5 Rapid-5s)
- Image2Video (Image to Video 1.5 Rapid-10s)
- Image2Video_HQ (Image to Video 1.5 HQ-5s)
- Image2Video_HQ (Image to Video 1.5 HQ-10s)
- Txt2Video (Text to Video 1.6 Standard-5s)
- Txt2Video (Text to Video 1.6 Standard-10s)
- Txt2Video (Text to Video 1.6 HQ-5s)
- Txt2Video (Text to Video 1.6 HQ-10s)
- Image2Video (Image to Video 1.6 Standard-5s)
- Image2Video (Image to Video 1.6 Standard-10s)
- Image2Video (Image to Video 1.6 HQ-5s)
- Image2Video (Image to Video 1.6 HQ-10s)
- Txt2Video (Text to Video 2.0 HD-5s)
- Image2Video (Image to Video 2.0 HD-5s)
- Image2Video (Image to Video 2.0 HD-10s)
- Image2Video (Multiple Pictures for Reference)
- Extend_Video
- Image2Video (Image to Video 2.1-5s)
- Image2Video (Image to Video 2.1-10s)
- Image2Video (Image to Video 2.1-HD-10s)
- Image2Video (Image to Video 2.1-HD-5s)
- Fetch
- Official Format
- 302 Format
- CogVideoX
- Minimax
- Pika
- 1.5 Pikaffects (Image-to-Video Generation)
- Turbo Generate (Text-to-Video Generation)
- Turbo Generate (Image-to-Video Generation)
- 2.1 Generate (Text-to-Video Generation)
- 2.1 Generate (Image-to-Video Generation)
- 2.2 Generate (Text-to-Video Generation)
- 2.2 Generate (Image-to-Video Generation)
- 2.2 Pikascenes (Generate Scene Videos)
- Fetch (Result)
- PixVerse
- Genmo
- Hedra
- Haiper
- Sync.
- Lightricks
- Hunyuan
- Vidu
- Vidu (Text-to-Video)
- Vidu (Image-to-Video)
- Vidu (Start-and-End Frame Video Generation)
- Vidu (Reference-Based Video Generation)
- Vidu (Scene Video Generation)
- Vidu (Smart Ultra HD)
- Fetch (Retrieve Task Results)
- Vidu V2 (Text-to-Video Generation)
- Vidu V2 (Image-to-Video)
- Vidu V2 (Start-and-End Frame Video Generation)
- Vidu V2 (Subject-Driven Video Generation)
- Vidu V2 (Scene Video Generation)
- Vidu V2 (AI Ultra HD Premium)
- Fetch V2 (Retrieve Task Result)
- Tongyi Wanxiang
- Jimeng
- SiliconFlow
- Kunlun Tech
- Higgsfield
- Audio/Video Processing
- Unified Interface
- 302.AI
- Stable-Audio (Instrumental Generation)
- Transcript (Audio/Video to Text)
- Transcriptions (Speech to Text)
- Alignments (Subtitle Timing)
- WhisperX
- F5-TTS (Text to Speech)
- F5-TTS (Asynchronous Text-to-Speech)
- F5-TTS (Asynchronously Retrieve Results)
- mmaudio (Text-to-Speech)
- mmaudio (AI Video Voiceover)
- mmaudio (Asynchronous Result Retrieval)
- Diffrhythm (Song Generation)
- OpenAI
- Azure
- Suno
- Doubao
- Fish Audio
- Minimax
- Dubbingx
- Udio
- Elevenlabs
- Mureka
- SiliconFlow
- Information Processing
- Unified Search API
- 302.AI
- Admin Dashboard
- Information Search
- Xiaohongshu_Search
- Xiaohongshu_Note
- Get_Home_Recommend
- Tiktok_Search
- Douyin_Search
- Twitter_Search
- Twitter_Post (X_Post)
- Twitter_User (X_User)
- Weibo_Post
- Search_Video
- Youtube_Info
- Youtube_Subtitles (Get YouTube Subtitles)
- Bilibili_Info (Get Bilibili Video Information)
- MP_Article_List (Get WeChat Official Account Article List)
- MP_Article (Retrieve a WeChat Official Account Article)
- File Processing
- Code Execution
- Remote Browser
- Tavily
- SearchAPI
- Search1API
- Exa
- Bocha AI
- Doc2x
- Glif
- Jina
- DeepL
- RSSHub
- Firefly card
- Youdao
- Mistral
- Firecrawl
- RAG-related
- Tools API
- AI Video Creation Hub
- AI Paper Writing
- AI Podcast Production
- AI Writing Assistant
- AI Video Real-Time Translation
- AI Document Editor
- Web Data Extraction Tool
- AI Prompt Expert
- AI 3D Modeling
- AI Search Master 3.0
- AI Vector Graphics Generation
- AI Answer Machine
- AI PPT Generator
- One-Click PPT Generation Interface
- File Parsing
- Generate Outline
- Generate Outline Content
- Get Template Options
- Generate PPT (Synchronous Interface)
- Load PPT Data
- Generate PPT (Asynchronous Interface)
- Query PPT Generation Status (Asynchronous)
- Download PPT
- Add/Update Custom PPT Templates
- Query PPT Templates (Paginated)
- AI Academic Paper Search
- One-Click Website Deployment
- AI Avatar Maker
- AI Card Generation
- AI Image Creative Station API
- Help Center
Map
POST /firecrawl/v1/map
Maps a website and returns the URLs discovered on it.
Official Documentation: https://docs.firecrawl.dev/api-reference/endpoint/map
Request
Header Params
- Authorization (string, optional). Example: Bearer {{YOUR_API_KEY}}

Body Params (application/json)
- url (string, required)
- search (string, optional)
- ignoreSitemap (boolean, optional)
- sitemapOnly (boolean, optional)
- includeSubdomains (boolean, optional)
- limit (integer, optional)
- timeout (integer, optional)
- formats (array[string], optional)
- onlyMainContent (boolean, optional)
- includeTags (array[string], optional)
- excludeTags (array[string], optional)
- headers (object, optional)
- waitFor (integer, optional)
- mobile (boolean, optional)
- skipTlsVerification (boolean, optional)
- jsonOptions (object, optional)
  - schema (object, required)
  - systemPrompt (string, required)
  - prompt (string, required)
- actions (array[object{3}], optional)
  - type (string, optional)
  - milliseconds (integer, optional)
  - selector (string, optional)
- location (object, optional)
  - country (string, required)
  - languages (array[string], required)
- removeBase64Images (boolean, optional)
- blockAds (boolean, optional)
- proxy (string, optional)
- changeTrackingOptions (object, optional)
  - mode (string, required)
  - schema (object, required)
  - prompt (string, required)
Example
{
"url": "<string>",
"search": "<string>",
"ignoreSitemap": true,
"sitemapOnly": false,
"includeSubdomains": false,
"limit": 5000,
"timeout": 123
}
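The example above exercises only the top-level scalar fields. Below is a hedged sketch of a request body that also fills in the nested optional objects from the parameter list (jsonOptions, actions, location, changeTrackingOptions). Every value is an illustrative placeholder rather than a documented default, and the "wait" action type is an assumption, not something this page specifies:
{
  "url": "https://example.com",
  "limit": 5000,
  "formats": ["markdown"],
  "onlyMainContent": true,
  "includeTags": ["article"],
  "excludeTags": ["nav", "footer"],
  "waitFor": 1000,
  "mobile": false,
  "jsonOptions": {
    "schema": {},
    "systemPrompt": "<string>",
    "prompt": "<string>"
  },
  "actions": [
    { "type": "wait", "milliseconds": 500, "selector": "<string>" }
  ],
  "location": { "country": "US", "languages": ["en"] },
  "removeBase64Images": true,
  "blockAds": true,
  "changeTrackingOptions": { "mode": "<string>", "schema": {}, "prompt": "<string>" }
}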
Request Example (Shell)
curl --location --request POST 'https://api.302.ai/firecrawl/v1/map' \
--header 'Authorization: Bearer {{YOUR_API_KEY}}' \
--header 'Content-Type: application/json' \
--data-raw '{
"url": "<string>",
"search": "<string>",
"ignoreSitemap": true,
"sitemapOnly": false,
"includeSubdomains": false,
"limit": 5000,
"timeout": 123
}'
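For reference, here is a minimal Python sketch equivalent to the curl call above, using the requests library. The API key and target URL are placeholders, and the selected body fields are illustrative:

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your 302.AI key

response = requests.post(
    "https://api.302.ai/firecrawl/v1/map",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "url": "https://example.com",  # required: the site to map
        "search": "docs",              # optional: keyword filter for links
        "ignoreSitemap": True,
        "sitemapOnly": False,
        "includeSubdomains": False,
        "limit": 5000,                 # optional: cap on returned results
    },
    timeout=60,  # client-side timeout in seconds
)
response.raise_for_status()  # raise on 4xx/5xx responses
print(response.json())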
Responses
🟢 200 Success
application/json
Body
- pages (array[object{4}], required)
  - index (integer, required)
  - markdown (string, required)
  - images (array[object{6}], required)
  - dimensions (object, required)
- model (string, required)
- usage_info (object, required)
  - pages_processed (integer, required)
  - doc_size_bytes (integer, required)
Example
{ "pages": [ { "index": 0, "markdown": "# LEVERAGING UNLABELED DATA TO PREDICT OUT-OF-DISTRIBUTION PERFORMANCE \n\nSaurabh Garg*<br>Carnegie Mellon University<br>sgarg2@andrew.cmu.edu<br>Sivaraman Balakrishnan<br>Carnegie Mellon University<br>sbalakri@andrew.cmu.edu<br>Zachary C. Lipton<br>Carnegie Mellon University<br>zlipton@andrew.cmu.edu\n\n## Behnam Neyshabur\n\nGoogle Research, Blueshift team\nneyshabur@google.com\n\nHanie Sedghi<br>Google Research, Brain team<br>hsedghi@google.com\n\n\n#### Abstract\n\nReal-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions that may cause performance drops. In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data. We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting accuracy as the fraction of unlabeled examples for which model confidence exceeds that threshold. ATC outperforms previous methods across several model architectures, types of distribution shifts (e.g., due to synthetic corruptions, dataset reproduction, or novel subpopulations), and datasets (WILDS, ImageNet, BREEDS, CIFAR, and MNIST). In our experiments, ATC estimates target performance $2-4 \\times$ more accurately than prior methods. We also explore the theoretical foundations of the problem, proving that, in general, identifying the accuracy is just as hard as identifying the optimal predictor and thus, the efficacy of any method rests upon (perhaps unstated) assumptions on the nature of the shift. Finally, analyzing our method on some toy distributions, we provide insights concerning when it works ${ }^{1}$.\n\n\n## 1 INTRODUCTION\n\nMachine learning models deployed in the real world typically encounter examples from previously unseen distributions. While the IID assumption enables us to evaluate models using held-out data from the source distribution (from which training data is sampled), this estimate is no longer valid in presence of a distribution shift. Moreover, under such shifts, model accuracy tends to degrade (Szegedy et al., 2014; Recht et al., 2019; Koh et al., 2021). Commonly, the only data available to the practitioner are a labeled training set (source) and unlabeled deployment-time data which makes the problem more difficult. In this setting, detecting shifts in the distribution of covariates is known to be possible (but difficult) in theory (Ramdas et al., 2015), and in practice (Rabanser et al., 2018). However, producing an optimal predictor using only labeled source and unlabeled target data is well-known to be impossible absent further assumptions (Ben-David et al., 2010; Lipton et al., 2018).\n\nTwo vital questions that remain are: (i) the precise conditions under which we can estimate a classifier's target-domain accuracy; and (ii) which methods are most practically useful. To begin, the straightforward way to assess the performance of a model under distribution shift would be to collect labeled (target domain) examples and then to evaluate the model on that data. However, collecting fresh labeled data from the target distribution is prohibitively expensive and time-consuming, especially if the target distribution is non-stationary. Hence, instead of using labeled data, we aim to use unlabeled data from the target distribution, that is comparatively abundant, to predict model performance. 
Note that in this work, our focus is not to improve performance on the target but, rather, to estimate the accuracy on the target for a given classifier.\n\n[^0]\n[^0]: * Work done in part while Saurabh Garg was interning at Google\n ${ }^{1}$ Code is available at https://github.com/saurabhgarg1996/ATC_code.", "images": [], "dimensions": { "dpi": 200, "height": 2200, "width": 1700 } }, { "index": 1, "markdown": "\n\nFigure 1: Illustration of our proposed method ATC. Left: using source domain validation data, we identify a threshold on a score (e.g. negative entropy) computed on model confidence such that fraction of examples above the threshold matches the validation set accuracy. ATC estimates accuracy on unlabeled target data as the fraction of examples with the score above the threshold. Interestingly, this threshold yields accurate estimates on a wide set of target distributions resulting from natural and synthetic shifts. Right: Efficacy of ATC over previously proposed approaches on our testbed with a post-hoc calibrated model. To obtain errors on the same scale, we rescale all errors with Average Confidence (AC) error. Lower estimation error is better. See Table 1 for exact numbers and comparison on various types of distribution shift. See Sec. 5 for details
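To make the response schema concrete, here is a small Python sketch that walks the documented shape. collect_markdown is a hypothetical helper, and the demo payload is an illustrative stand-in matching the schema, not real API output:

def collect_markdown(payload: dict) -> str:
    """Join per-page markdown in page order (sorting by the documented index field)."""
    pages = sorted(payload.get("pages", []), key=lambda page: page["index"])
    return "\n\n".join(page["markdown"] for page in pages)

# Illustrative payload shaped like the documented schema (not real output).
demo = {
    "pages": [
        {"index": 1, "markdown": "Second page.", "images": [], "dimensions": {}},
        {"index": 0, "markdown": "# First page", "images": [], "dimensions": {}},
    ],
    "model": "<string>",
    "usage_info": {"pages_processed": 2, "doc_size_bytes": 123},
}

print(demo["usage_info"]["pages_processed"], "pages processed")
print(collect_markdown(demo))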