Papercuts

Deploy AI agents to use your production app like a real user

Deploy AI agents that flow through your production app like a real user. Just provide a URL and get notified when something breaks.

Sayuj Suresh
Yooo Product Hunters! I built Papercuts because I think most testing scripts are blind. They check the DOM, but they don't actually see if the UI is broken for the user. Modern apps are way too complex for brittle selectors. I believe the only way to be safe is to test in production with AI agents that actually perceive and navigate the page like a human. Let me know what you think!
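
To make the "blind tests" point concrete, here's a minimal sketch assuming a Playwright setup (the URL and selector are placeholders, not from Papercuts): the assertion passes on DOM presence alone, even when the button is unusable for a real person.

```ts
import { test, expect } from '@playwright/test';

// Hypothetical illustration (not from any real suite): a DOM-level
// assertion that stays green even when the UI is visually broken.
test('checkout button exists', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // placeholder URL

  const button = page.locator('#submit-order'); // brittle selector
  await expect(button).toHaveCount(1); // passes on DOM presence alone

  // A real user might still be unable to use the button: an overlay
  // covering it, a z-index bug, or a CSS regression pushing it
  // off-screen would all pass this check.
});
```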
Imtiyaz

@sayuj_suresh DOM-based tests miss real user pain all the time. Testing with agents that actually see and navigate the UI feels like the natural next step for modern apps.

I am seeing the same shift while building Curatora. Systems that observe real outcomes, not just internal states, catch issues much earlier. Curious how teams adopt this in production.

Sayuj Suresh

@imtiyazmohammed Totally agree, that’s been my experience as well. DOM-based tests are great for verifying assumptions, but they often miss how the product actually feels to a user.

We’re seeing teams adopt this gradually: starting with a few critical flows in production and using agents as a signal alongside existing monitoring.

Curious Kitty
Most teams already have some mix of Playwright/Cypress tests plus APM/RUM—what’s the clearest line you draw between those and Papercuts, and what’s the switching trigger that makes it worth adding (or replacing) another layer?
Sayuj Suresh

@curiouskitty Great question. I don’t see Papercuts as replacing Playwright/Cypress or APM/RUM; they solve different problems.

Scripted tests verify expected behavior in controlled environments, and APM/RUM tell you when something is already broken for real users. Papercuts sits in between: agents continuously exercise real production flows and catch UX and logic regressions before they show up in dashboards or support tickets.
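
As a rough illustration of "agents as a continuous production signal", here is a generic TypeScript sketch; `runAgentFlow`, the flow names, and the webhook URL are all hypothetical placeholders, not Papercuts' actual API:

```ts
// Generic sketch of "agents as a production signal". Everything here
// (runAgentFlow, the flow names, the webhook URL) is a hypothetical
// placeholder, not Papercuts' actual API.
interface FlowResult {
  flow: string;
  ok: boolean;
  detail?: string;
}

// Placeholder: a real implementation would drive a vision agent
// through the named flow on the live site and report what it saw.
async function runAgentFlow(url: string, flow: string): Promise<FlowResult> {
  return { flow, ok: true };
}

const CRITICAL_FLOWS = ['signup', 'checkout']; // start with a few flows

async function checkProduction(): Promise<void> {
  for (const flow of CRITICAL_FLOWS) {
    const result = await runAgentFlow('https://app.example.com', flow);
    if (!result.ok) {
      // Surface the failure alongside existing monitoring,
      // e.g. via a Slack incoming webhook (URL is a placeholder).
      await fetch('https://hooks.slack.com/services/T000/B000/XXX', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          text: `Agent failed "${flow}": ${result.detail ?? 'unknown'}`,
        }),
      });
    }
  }
}

// Run every 15 minutes so regressions surface before support tickets do.
setInterval(() => void checkProduction(), 15 * 60 * 1000);
```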

The usual trigger is when teams realise tests are green, metrics look fine, but users still hit papercuts: broken edge cases, conditional flows, or subtle UI regressions that no one explicitly tested for. That’s where adding agents starts paying off fast.

Ethan Brooks

Congrats on the launch! This looks super useful.

As the founder of Dashform, I know that complex, multi-step forms are often the hardest part to test reliably.

Small question: how does your agent handle dynamic form fields or conditional logic (e.g., fields that only appear after a specific selection)? Does it adapt well if the DOM changes slightly?

Sayuj Suresh

@openaigpt5 Hey Ethan, thank you!

The agents don’t rely on brittle selectors or fixed scripts; instead, they use only vision. They interact with forms the way a real user would: observing visible fields, understanding labels and context, and reacting to what appears next. When a selection reveals new fields, the agent incorporates those into its context and moves forward.
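
Here's a rough sketch of that perceive-decide-act loop, under stated assumptions: the action schema and `decideNextAction` model call are invented for illustration (not Papercuts internals), and it acts through Playwright locators for brevity, where a purely vision-based agent would click screen coordinates instead.

```ts
import type { Page } from 'playwright';

// Sketch of a perceive-decide-act loop for dynamic forms. The action
// schema and decideNextAction are invented for illustration; they are
// not Papercuts internals.
type Action =
  | { kind: 'fill'; label: string; value: string }
  | { kind: 'click'; label: string }
  | { kind: 'done' };

// Placeholder: a real implementation would send the screenshot and
// goal to a vision model and parse the action it chooses.
async function decideNextAction(screenshot: Buffer, goal: string): Promise<Action> {
  return { kind: 'done' };
}

async function completeForm(page: Page, goal: string): Promise<void> {
  for (let step = 0; step < 20; step++) { // safety cap on steps
    const screenshot = await page.screenshot(); // perceive the current UI
    const action = await decideNextAction(screenshot, goal);
    if (action.kind === 'done') return;

    // Act on what is currently visible. Fields revealed by the previous
    // action simply appear in the next screenshot, so conditional logic
    // needs no special-casing. (Locators are used here for brevity; a
    // purely vision-based agent would click coordinates instead.)
    if (action.kind === 'fill') {
      await page.getByLabel(action.label).fill(action.value);
    } else {
      await page.getByRole('button', { name: action.label }).click();
    }
  }
  throw new Error(`Gave up after 20 steps pursuing: ${goal}`);
}
```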

That’s one of the main reasons we use agents instead of traditional test scripts.

Give it a try with the free plan!