Visual PR Testing with AI - Validate every PR with AI that runs tests for you

by fmerian

QA.tech runs dynamic regression and exploratory testing on every PR preview – automatically. AI agents validate your changes against real user flows in a real browser, posting results back to the PR before anyone reviews or merges. Every failure comes with screenshots, logs, and network activity so your team debugs fast. Push a new commit and it re-runs. Merge only when tests pass.
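The loop described above - run agent tests against each preview deployment, attach artifacts to failures, re-run on every commit, and gate the merge on a clean pass - can be sketched roughly like this. Everything here is illustrative (the `Runner` type, `validatePreview`, the artifact shape); it is not QA.tech's actual API.

```typescript
// Minimal sketch of the preview-validation loop; all names are hypothetical.

interface TestResult {
  flow: string;
  passed: boolean;
  // Per the description, each failure ships with debugging artifacts.
  artifacts?: { screenshot: string; logs: string; network: string };
}

// Stand-in for the AI agents driving a real browser against the preview URL.
type Runner = (previewUrl: string) => TestResult[];

// Runs on every commit; merge is allowed only when all user flows pass.
function validatePreview(previewUrl: string, run: Runner) {
  const results = run(previewUrl);
  const failures = results.filter((r) => !r.passed);
  return {
    mergeAllowed: failures.length === 0,
    comment:
      failures.length === 0
        ? `All ${results.length} user flows passed on ${previewUrl}.`
        : `${failures.length} flow(s) failed; see attached artifacts.`,
    failures,
  };
}
```

The key design point is that the check is re-evaluated from scratch on each push, so the PR comment always reflects the latest preview rather than a stale run.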

Replies
Daniel Mauno Pettersson

"That's weird. It works on my machine." If I had a cent for every time I've heard that, I wouldn't need to work anymore.

We're fixing one of the most painful parts of development: testing new things while they're still being built.

From the developer side, you want fast feedback, clear repro steps, and all of it while you're still in the zone. From the quality side, we bring your requirements into one place and help make sure every PR meets the bar, whether it was written by a human or generated by AI.

Built on Vercel (Vercel Functions, the AI SDK, Next.js) and it works out of the box with Environments and Preview Deployments.

Give it a try and let me know what you think!

Dayal Punjabi

@daniel_mauno_pettersson For AI-generated PRs, how does it auto-generate those repro steps or requirements from just a prompt?

Daniel Mauno Pettersson

@dayal_punjabi We do a few things:
- We look at the changed files and the PR description
- If you connect Linear or Jira, we can fetch the corresponding ticket and read the specs
- We use your existing test cases and the data we have scraped from your product to generate better tests
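The signal sources listed above could, conceptually, be merged into a single test-generation context along these lines. This is a hypothetical sketch - the `PrSignals` shape and `buildTestContext` are illustrative names, not QA.tech's implementation.

```typescript
// Hypothetical: collapse the available PR signals into one prompt-ready context.

interface PrSignals {
  changedFiles: string[];
  description: string;
  ticketSpec?: string; // fetched from Linear or Jira when connected
  existingTests: string[]; // your current test cases
  productGraph: string[]; // pages/flows scraped from the product
}

function buildTestContext(s: PrSignals): string {
  const parts = [
    `Changed files:\n${s.changedFiles.join("\n")}`,
    `PR description:\n${s.description}`,
  ];
  if (s.ticketSpec) parts.push(`Ticket spec:\n${s.ticketSpec}`);
  if (s.existingTests.length)
    parts.push(`Existing tests:\n${s.existingTests.join("\n")}`);
  if (s.productGraph.length)
    parts.push(`Known product flows:\n${s.productGraph.join("\n")}`);
  return parts.join("\n\n");
}
```

Optional sources (ticket spec, existing tests, product graph) degrade gracefully: the context still builds from just the diff and the PR description.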

Mario Monteiro

PR review is the bottleneck every team complains about but no one fixes. Curious what the false-positive rate looks like in practice — does the agent flag cosmetic changes (whitespace, renames) as issues, or only real regressions? Congrats on launch 🚀

Patrick Lef

@mariomonteiro Thanks for your support! We can flag cosmetic issues if you prompt for them, but we focus on real regressions. False positives can happen, though false negatives are more likely; we try to filter those out and simply note that some things were left untested when our agents can't figure them out.

André J

Ahh, that sounds cool. So is it like: 1. Login commit change. 2. AI writes tests focused on the login UX. 3. Runs tests in a headless CI browser. 4. Shows results in the PR?

Daniel Mauno Pettersson

@conduit_design Yes, pretty much it - and then we write a comprehensive review with suggestions on things that could be improved!

Patrick Lef

@conduit_design You got it! Feel free to try it out.

André J

@patricklef When will pricing be announced?

Saad El Gueddari

how do you handle flaky tests eating the pr signal?

every team i've seen roll out automated pr checks ends up with devs reflexively re-running until green, at which point the whole system is just theater. wondering what your stance is

Patrick Lef

@saad_el_gueddari We let the agents assess the results before posting them to the user, so we know they're real issues actually caused by the code changes and not something unrelated.
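One crude way to picture that assessment step: before posting, keep only failures whose flow plausibly overlaps with what the PR changed, and report the rest separately instead of as regressions. This sketch is purely illustrative (`Failure`, `triageFailures` are invented names); the real agents presumably reason over much richer signals than path overlap.

```typescript
// Hypothetical triage: separate failures related to the diff from unrelated ones.

interface Failure {
  flow: string;
  touchedPaths: string[]; // routes/components the failing flow exercised
}

function triageFailures(failures: Failure[], changedPaths: string[]) {
  const changed = new Set(changedPaths);
  const related: Failure[] = [];
  const unrelated: Failure[] = [];
  for (const f of failures) {
    // Only failures that exercised something the PR touched count as regressions.
    (f.touchedPaths.some((p) => changed.has(p)) ? related : unrelated).push(f);
  }
  return { related, unrelated };
}
```

Splitting the report this way is what keeps flaky, unrelated failures from eating the PR signal: they are surfaced, but they don't block the merge decision.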

Saul Fleischman

Congrats on the launch! This is a compelling take on QA automation. I'm curious about how QA.tech handles the nuances of testing products with complex user flows or those requiring specific business logic validation. Do your agents learn from existing test cases, or do they generate test scenarios from scratch by exploring the product?

Tobias Törnros

@osakasaul Thanks! We can certainly test complex user flows since we support complex dependency chains and data passing between tests. We generate tests using all available information, such as graph data from your product and existing test cases. With every new test and run, we crawl your product even deeper, ensuring our knowledge of it is constantly growing.

Tijo Gaucher

running AI regression + exploratory tests on every preview deploy is such a good use of agent time — way more valuable than yet another chat ui. how flaky are the runs in practice? curious if you've had to build in any self-healing or retry logic for transient UI stuff.

Natalia Iankovych

An idea for developing your service: create separate agents for each knowledge block. For example, an agent that tests SEO optimization, an agent that tests microformats, an agent that tests readiness for sharing on social networks, etc. We once created testing checklists with such “small details,” and ended up with 400 parameters that need to be checked in every project. It is extremely difficult to manually verify all of this.