Intuned Agent - Production browser automation, built and maintained by AI
Intuned Agent is an AI agent that builds and maintains production browser automations. Describe the scraper, crawler, or RPA workflow you need, and it writes Playwright code, validates it on the live site, and deploys it to Intuned to run at scale. It also helps debug, update, and maintain automations as websites change.



Replies
Intuned
Hey Product Hunt 👋! I’m Faisal, one of Intuned’s co-founders.
Today we’re launching Intuned Agent: an AI agent that builds and maintains production browser automations.
What makes Intuned Agent different is that it is built into Intuned’s browser automation infrastructure. It can use platform capabilities like auth, stealth, CAPTCHAs, proxies, schedules, retries, and observability, then inspect failed runs, traces, logs, and screenshots to debug issues and write fixes when websites change.
When we first launched Intuned, it was a code-first platform for building and running browser automations.
But almost immediately, customers started asking us for the same thing:
“Can you just build and maintain these automations for us?”
So we did. Over the past year, we ran a services motion alongside the product and processed 40M automation runs (20M mins) for a limited set of customers.
We spent more than a year trying to turn that “solutions engineer” workflow into a product. The unlock was embedding Claude Code, through the Claude Agent SDK, directly into Intuned.
The most exciting feature that Intuned Agent unlocked is self-healing. When a project fails and self-healing is enabled, Intuned detects the issue; the agent then inspects the failed run, diagnoses what changed, writes a fix, and redeploys, with you controlling how much autonomy it gets.
That’s the core idea behind Intuned Agent: you still get real code you can inspect, own, and run in production, but the agent helps with the painful parts of building, debugging, and maintaining it.
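To make the self-healing idea above concrete, here is a toy sketch of such a loop with an autonomy gate. This is not Intuned's actual code; every type and helper here is hypothetical.

```typescript
// Illustrative sketch of a self-healing loop with an autonomy gate.
// All names, types, and helpers are hypothetical, not Intuned's API.

type Autonomy = "suggest" | "auto";

interface FailedRun {
  projectId: string;
  error: string;
}

interface Fix {
  projectId: string;
  patch: string;
}

interface HealResult {
  status: "deployed" | "needs-approval";
  fix: Fix;
}

// Diagnose the failure and propose a fix (stubbed for illustration).
function proposeFix(run: FailedRun): Fix {
  return { projectId: run.projectId, patch: `// fix for: ${run.error}` };
}

// Inspect the failure, write a fix, then either redeploy automatically
// or hand the fix to a human, depending on the autonomy level.
function selfHeal(run: FailedRun, autonomy: Autonomy): HealResult {
  const fix = proposeFix(run);
  if (autonomy === "auto") {
    return { status: "deployed", fix }; // agent redeploys on its own
  }
  return { status: "needs-approval", fix }; // human reviews first
}
```

The autonomy switch is the interesting design choice: the same diagnose-and-fix path runs either way, and only the last step (redeploy vs. review) changes.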
To make this more concrete, we made a 5-minute video walking through Intuned, Intuned Agent, and how the platform works: watch it here.
Would love feedback from anyone building browser automations in production. Head over to Intuned and start building for free!
Intuned
Hey Product Hunt!
Nasser here, part of the team behind Intuned Agent.
Building Intuned Agent took us way longer than we expected to get right.
Our first attempt was a setup where the chatbot would collect requirements, then hand everything off to a rigid pipeline. One step to discover the site, another to structure the data, and a final step trying to group everything and fix whatever broke along the way. It looked clean on paper, but fell apart on real websites.
The biggest signal was that our own solution engineering team (who help customers build and maintain automations) didn't want to use it. It was too rigid for the kind of messy problems they deal with.
We tried to improve that setup for a while and learned a lot, but it became clear the approach itself was the problem.
When we started using Claude Code for generic coding tasks, we liked how it worked. Its fluid, end-to-end approach felt like the right direction for us to adopt. That's what pushed us toward rebuilding with the Claude Agent SDK.
We took all the browser automation knowledge we'd built over time and turned it into reusable skills, then let a single agent drive the process instead of forcing it through a fixed path.
Intuned
Hey Product Hunt! I’m Rauf, one of the engineers behind Intuned’s agent runtime and UX.
I’ve spent the last 3 months focused on building the user experience for Intuned Agent. One thing became obvious quickly: the hard part is not getting the model to take actions, it is making the entire workflow understandable, controllable, and reliable.
An agent like this is never just “a chat.” There is conversation state, code state, browser state, session state, billing state, and human approval state, all moving at once. The UI has to make that legible: dense enough to be useful for builders, calm enough that it does not become stressful.
Most of the real work is the product layer around the model: session runtime, queueing and turn control, reconnect and resume flows, interruption handling, human-in-the-loop approval, sandboxing, and recovery paths for when the agent stops, stalls, or crashes mid-task.
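One of those pieces, turn control, can be sketched in a few lines. This is a hypothetical simplification, not our runtime: only one agent turn runs at a time, and messages that arrive mid-turn are queued and drained in order.

```typescript
// Hypothetical sketch of turn control for an agent session (not Intuned's
// implementation): one turn at a time, later messages queue up.
class TurnQueue {
  private queue: string[] = [];
  private active = false;
  public processed: string[] = [];

  // Accept a user message; start draining if no turn is in flight.
  submit(message: string): void {
    this.queue.push(message);
    if (!this.active) this.drain();
  }

  // Run queued messages strictly in arrival order.
  private drain(): void {
    this.active = true;
    while (this.queue.length > 0) {
      const msg = this.queue.shift()!;
      this.processed.push(msg); // stand-in for running an agent turn
    }
    this.active = false;
  }
}
```

In a real system the drain step would be asynchronous and interruptible, which is where most of the reconnect and recovery complexity lives.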
The standard I kept coming back to was simple: can people actually use this with confidence, and can we sleep while they do?
The visible surface is “an AI agent that helps you build browser automation.” The real work was making it dependable enough that builders would use it on a real project, not just admire it in a demo.
Intuned
Hey Product Hunt! I’m Omar Bishtawi, and I lead the platform team here at Intuned.
One thing that makes Intuned Agent different from a normal chat-based coding agent is the kind of work it has to do.
Agent sessions can be long-running. They drive real browsers, inspect live websites, write code, run tests, debug failures, and sometimes keep working after the user has walked away. That creates a very different infrastructure problem: sessions need to be resilient, interactive, fast to start, and secure.
A few things we built to support that:
Resilient sessions
Agent sessions are dynamic and unpredictable in their resource needs. We continuously checkpoint session state so the agent can resume mid-task after any interruptions like machine failures, OOMs, network hiccups, or client disconnects.
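The checkpoint idea can be sketched roughly like this. All shapes and names here are illustrative assumptions, not our actual storage layer: state is serialized after each completed step, and a resumed session skips work it already finished.

```typescript
// Hypothetical checkpoint/resume sketch (not Intuned's real code):
// serialize session state after each step so a fresh machine can
// pick up mid-task after a crash or disconnect.
interface SessionState {
  sessionId: string;
  completedSteps: string[];
}

// Stand-in for durable storage (a real system would persist this).
const checkpoints = new Map<string, string>();

function checkpoint(state: SessionState): void {
  checkpoints.set(state.sessionId, JSON.stringify(state));
}

function resume(sessionId: string): SessionState | undefined {
  const raw = checkpoints.get(sessionId);
  return raw ? (JSON.parse(raw) as SessionState) : undefined;
}

// Run steps idempotently: resume from the last checkpoint and skip
// anything already completed.
function runSteps(sessionId: string, steps: string[]): void {
  const state: SessionState =
    resume(sessionId) ?? { sessionId, completedSteps: [] };
  for (const step of steps) {
    if (state.completedSteps.includes(step)) continue; // already done
    state.completedSteps.push(step);
    checkpoint(state);
  }
}
```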
Live feedback that survives reconnects
We stream the live browser and agent activity in real time, while making the session reconnect-safe. Close your laptop, open it again, and you can pick up where you left off.
Fast startup without wasting compute
Idle machines cost money, but cold starts hurt the experience. We use microVMs and adaptive warm pools to get from “click run” to “agent working” in a couple of seconds, without keeping too much idle capacity around.
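The sizing trade-off could be sketched as a small function. The headroom factor and bounds below are made-up numbers for illustration, not our production tuning: keep enough warm microVMs to absorb recent peak demand, clamped between a floor and a ceiling.

```typescript
// Hypothetical warm-pool sizing sketch (numbers are illustrative):
// provision for recent peak demand plus headroom, within hard bounds,
// so starts stay fast without paying for a large idle fleet.
function targetWarmPool(
  recentStartsPerMinute: number[], // sliding window of observed demand
  headroom = 1.5,                  // over-provision factor (assumed)
  min = 1,                         // never go fully cold
  max = 20,                        // cap on idle spend
): number {
  const peak = Math.max(0, ...recentStartsPerMinute);
  const target = Math.ceil(peak * headroom);
  return Math.min(max, Math.max(min, target));
}
```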
Security end to end
We think a lot about what it means to let an agent act on your behalf. That includes how we store shared information, limiting the agent’s access, sandboxing actions, and running each agent session in its own microVM with a dedicated kernel.
Most of this infrastructure is invisible when it works, and very visible when it does not. Excited to see what people build with Intuned Agent.
Intuned
Hello everyone, I’m Omar, a software engineer at Intuned.
When developers build browser automations in Intuned, they do more than write code. They test against live sites, debug failures, deploy updates, monitor runs and jobs, and inspect browser traces and logs. We wanted Intuned Agent to be able to do that same workflow inside Intuned.
We already had pieces of this across the dashboard and APIs, but they were built for humans and apps, not for an agent trying to complete an end-to-end workflow. The agent needed one interface that could manage the platform, discover what actions were available, and operate reliably across build, debug, deploy, and maintenance tasks.
That led us to build the Intuned CLI.
With the CLI, the agent can operate Intuned through the same tool engineers use. It can explore commands with --help, discover the right workflow, run jobs, inspect results, and interact with the platform in a predictable way.
We also added a hook system so we can tailor behavior for the agent, provide guidance, and handle specific scenarios without making the core interface messy.
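A hook system like that could look roughly like the sketch below. This is a hypothetical simplification, not the Intuned CLI's actual hook API: hooks registered per command run before dispatch and can inject guidance without touching the core interface.

```typescript
// Hypothetical CLI hook sketch (names are illustrative, not the real
// Intuned CLI): per-command hooks emit guidance before dispatch.
type Hook = (args: string[]) => string | null; // guidance message or null

const hooks = new Map<string, Hook[]>();

function registerHook(command: string, hook: Hook): void {
  const list = hooks.get(command) ?? [];
  list.push(hook);
  hooks.set(command, list);
}

// Collect whatever guidance the registered hooks produce for this
// command invocation, before the command itself runs.
function runHooks(command: string, args: string[]): string[] {
  const out: string[] = [];
  for (const hook of hooks.get(command) ?? []) {
    const msg = hook(args);
    if (msg) out.push(msg);
  }
  return out;
}
```

For example, a hook on a hypothetical `deploy` command could remind the agent to pass a project flag, without the command itself knowing anything about agents.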
The result is a cleaner interface for the agent, and a more powerful workflow for engineers building with Intuned.
Get your first scraper, RPA workflow, or crawler up and running within minutes with the Intuned Agent!
Intuned
Hey there Product Hunt 👋
I'm Izzat, one of the engineers building Intuned Agent.
Anyone who has worked on browser automation knows how quirky things get: iframes, CAPTCHAs, selectors that break the next day, infinite scroll, logged-in sessions that expire...
What makes Intuned Agent exceptional is the browser-native harness we've spent the last few months building around it. The agent handles all of these cases naturally while writing the automation code.
Customers love it, and it keeps surprising them. The agent regularly spots a hidden network endpoint mid-exploration and turns what would've been a few-hour scraping task into a single API call. That's just one example of many.
We're offering free credits to try it out. Give it your most challenging browser task and let us know what happens 🙌
Intuned
I’m Ahmad, co-founder of Intuned, and I lead the team working on Intuned Agent.
Customers use Intuned Agent to build all kinds of browser automations, all running on Intuned’s production infrastructure with proxies, stealth, CAPTCHA handling, auth, scheduling, and observability built in.
Here are a few real-world use cases we’ve seen on Intuned:
• Government data extraction across 50 sources for a B2B SaaS data company, including RFPs, solicitations, and meeting minutes. Live in days.
• E-commerce catalog migration for teams launching dozens of stores a week, moving products, prices, images, and descriptions into a clean target schema.
• A tech jobs aggregator that pulls thousands of open roles from hundreds of company hiring pages.
• Insurance quote collection from auto and home insurance providers for a price comparison platform.
This is where Intuned really shines: scale. One customer now runs close to 1,000 automation projects every day. Intuned Agent monitors failures, patches code, and redeploys fixes, so their engineers only review what it cannot resolve.
Drop a public site you’ve been meaning to scrape or automate, and I’ll run it through Intuned Agent. Happy to share what comes out.
Filliny
Faisal congrats on the launch. self-healing is the thing I keep getting tripped up on in our own browser-automation pipeline. curious how Intuned tunes the trigger threshold. is the detection mostly on assertion failures (selector missing / element not in expected state), or is there a separate heuristic that watches for layout drift across runs even when the script technically completes? false positives waste agent runs, false negatives strand users, and we've found the threshold is the brittle part of the whole loop.
Intuned
@whateverneveranywhere Hi Ava, thank you for your question! So, what we do here is as follows:
As automations run, we monitor a few metrics, including failure rate, run duration, output size, and run count.
When one of these metrics shifts, we record an anomaly.
Before promoting the anomaly to an actual issue, the agent does a cheap analysis to figure out whether it is a false positive.
If there is a real issue, the agent promotes it to an issue and proposes a fix.
Running this self-healing loop on thousands of projects for our clients, it has turned out to be very accurate and cheap. There is always a risk of false positives and false negatives, but because the Intuned agent closes the loop between the agent and the platform and tests in a production environment, you get strong confidence that your automation is running and yielding results.
Here is more documentation about how self-healing works:
https://intunedhq.com/docs/main/02-intuned-agent/self-healing-projects
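The detection step could be sketched roughly like this. The metric names, tolerance, and baseline logic below are assumptions for the example, not our production detector: a metric is flagged when it drifts far from its recent baseline, and flagged anomalies are then cheaply triaged before being promoted.

```typescript
// Illustrative anomaly-detection sketch (not Intuned's actual detector).
// Metric names and the tolerance value are assumed for the example.
interface MetricWindow {
  name: string;     // e.g. "failureRate", "runDurationMs", "outputSize"
  baseline: number; // rolling value from healthy runs
  current: number;  // value observed in the latest window
}

// Flag metrics whose relative deviation from baseline exceeds `tolerance`.
// Flagged anomalies would then be triaged before becoming real issues.
function detectAnomalies(metrics: MetricWindow[], tolerance = 0.5): string[] {
  return metrics
    .filter((m) => {
      const denom = Math.abs(m.baseline) || 1; // avoid divide-by-zero
      return Math.abs(m.current - m.baseline) / denom > tolerance;
    })
    .map((m) => m.name);
}
```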
Filliny
thanks Ahmad, this is a great breakdown. the docs link is helpful too. the metrics approach makes sense for end-to-end run success. the layer I was curious about specifically: within a multi-step run, when step N's screenshot/AX-tree no longer matches what the LLM planned at step 1 because of a re-render or layout shift, does the agent re-derive the plan from a fresh observation or trust the original sequence and retry on individual step failure? our biggest debugging time goes into that mid-run drift case, where the run technically completes but produces wrong output because step 3's interpretation of step 1's screenshot has gone stale.
Intuned
While fixing the raised issue, the Intuned agent is able to:
- Explore
- Plan
- Codegen
- End to end test
This makes it flexible enough to take new observations into account and act on them. However, as you mentioned, this works because we evaluate end-to-end signals: run success, runtime, output size, and so on. For cases where an individual step fails mid-run, we recommend building a flexible automation.
Check out our cookbook example https://github.com/Intuned/cookbook/tree/main/typescript-examples/rpa-forms-example
And our docs https://intunedhq.com/docs/main/02-features/flexible-automation#flexible-automations
Intuned
Hey hunters 👋 I'm Mohamed Khalil, Solutions Engineer at Intuned — but before this, I spent ~3 years in the scraping trenches, running pipelines across 5,000+ sites (e-comm, gov procurement, real estate, the usual chaos).
Most of those years were rebuilding the same things over and over: proxy rotation, fingerprint spoofing, captcha flows, the selector that breaks every Tuesday. So when I joined and saw what the agent does, my reaction was very much *"oh, this is the layer I wish I had two years ago."*
It isn't that it clicks buttons — plenty of things click buttons. It's that it handles the boring fight: bot detection, long-running sessions, and recovery when a flow drifts. The stuff that usually eats 80% of a scraping engineer's week.
Excited to see what you all build with it 🚀
most 'AI web scrapers' fail because they ignore the boring stuff like proxies, stealth, and retries. building the agent directly into your existing automation infra is a smart move. it means the agent isn't just writing code, it's actually managing the execution environment. definitely checking out the claude sdk integration @ahmad_ilaiwi