/agent by Firecrawl - Gather structured data wherever it lives on the web
Firecrawl /agent is a magic API that searches, navigates, and gathers data from even the most complex websites. Describe the data you want and /agent handles the rest: it finds information in hard-to-reach places and returns single datapoints or entire datasets at scale.
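For a rough sense of what "describe the data and let the agent handle the rest" could look like in code, here is a minimal sketch over plain HTTP. The endpoint path, payload fields, and response shape are assumptions for illustration, not the documented /agent API.

```python
import requests

# Hypothetical request to an /agent-style endpoint; the path and the
# "prompt"/"schema" fields are assumptions, not the documented API.
API_KEY = "fc-YOUR_API_KEY"

resp = requests.post(
    "https://api.firecrawl.dev/v1/agent",  # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        # Natural-language description of the data you want gathered.
        "prompt": "Find the pricing tiers and monthly prices listed on example.com",
        # Optional JSON schema so results come back structured (assumed field name).
        "schema": {
            "type": "object",
            "properties": {
                "tiers": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "price_usd": {"type": "number"},
                        },
                    },
                },
            },
        },
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```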



Replies
Firecrawl
@ericciarla congrats on the launch! when will it be available via n8n?
Shadow
@ericciarla I still remember spending days writing and testing web scraping code. The younger generation is lucky they'll never know that struggle! haha. Can't wait to try this out. Congrats!
@ericciarla This is compelling! How do you balance autonomy vs. control for developers? Specifically, how can users constrain scope, cost, and sources when the agent is discovering paths and URLs on its own at scale?
Andi
@ericciarla Congrats on the launch. What a cap to the year! In awe of the useful features you've all shipped this year.
Awesome stuff Eric! We've been using structured data from single-URL scrapes for a while with great results, so excited to check out how much deeper /agent can go and what info we can unlock.
Just tried the playground on a gnarly site — got usable Markdown fast and it saved me time. Excited for the n8n/Zapier hookups you mentioned. Keep shipping! 🔥
Firecrawl
@_ivan1 amazing! n8n is live and Zapier is coming soon
Swytchcode
Really interesting. I would love to use it.
Where is the scraped data stored and how do you chunk it?
Firecrawl
@chilarai It's stored wherever you want it to be, e.g. in Supabase, and you can chunk it yourself
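As a rough illustration of "store it wherever you want and chunk it yourself", here is a minimal sketch that splits scraped Markdown into fixed-size chunks and inserts them into a Supabase table. The table name, column names, chunk size, and overlap are assumptions; adapt them to your own schema.

```python
from supabase import create_client

SUPABASE_URL = "https://your-project.supabase.co"  # your project URL
SUPABASE_KEY = "your-service-role-key"             # your service key
supabase = create_client(SUPABASE_URL, SUPABASE_KEY)

def chunk_text(text: str, size: int = 1000, overlap: int = 100) -> list[str]:
    """Naive fixed-size chunker with a small overlap between chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def store_scrape(source_url: str, markdown: str) -> None:
    # The "documents" table and its columns are hypothetical.
    rows = [
        {"source_url": source_url, "chunk_index": i, "content": chunk}
        for i, chunk in enumerate(chunk_text(markdown))
    ]
    supabase.table("documents").insert(rows).execute()

# Usage: pass the Markdown returned by a Firecrawl scrape.
# store_scrape("https://example.com/pricing", scraped_markdown)
```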
Swytchcode
@ericciarla I have a real use case. I was just about to start looking for a tool like this, glad I found a good one.
CodeBanana
Huge fan!
Firecrawl
@zethleezd Thanks Zeth! Hope you enjoy it
Lamatic.ai
Congrats team
Firecrawl
@vrijraj Thank you!
Really huge fan of your product. Perfect!
This is certainly an interesting idea. We have previously built some data mining capabilities using agents as well.
In fact, if you use Claude Code and provide it with sufficient website access, a terminal, a browser, or even its built-in web search tools, it seems capable of retrieving the same unstructured information you want. The only extra step is converting that into structured data. How does your solution differ from this approach?
In my opinion, the most critical factor is how to turn the entire extraction process into a standardized script. This is the true purpose of a web crawler:
1. To take over repetitive tasks that would otherwise be costly and labor-intensive.
2. To use script-based automation to lower the cost of repeated crawls.
3. To maintain the ability to update incremental data efficiently.
How are you thinking about this particular aspect?
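For concreteness, one way to picture the "standardized script" idea in points 1–3 above is a small repeatable job that re-fetches a fixed URL list and only re-processes pages whose content hash changed since the last run. Everything here (file path, URL list, hashing strategy) is a generic sketch of incremental crawling, not how Firecrawl's agent works.

```python
import hashlib
import json
import pathlib
import requests

# Generic sketch of a repeatable, incremental crawl job: re-fetch known URLs,
# hash the content, and skip pages whose hash is unchanged since the last run.
STATE_FILE = pathlib.Path("crawl_state.json")   # placeholder state location
URLS = ["https://example.com/pricing", "https://example.com/changelog"]

def load_state() -> dict:
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

def run_crawl() -> None:
    state = load_state()
    for url in URLS:
        html = requests.get(url, timeout=30).text
        digest = hashlib.sha256(html.encode()).hexdigest()
        if state.get(url) == digest:
            print(f"unchanged, skipping: {url}")
            continue
        # Changed or new page: extract and store it here (parsing omitted).
        print(f"processing updated page: {url}")
        state[url] = digest
    STATE_FILE.write_text(json.dumps(state, indent=2))

if __name__ == "__main__":
    run_crawl()
```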
Triforce Todos
I really like the autonomous search and navigation idea.
One thing that could be powerful for teams is transparency: will users be able to see or replay how the agent found the data, especially for debugging or trust?