Reviews praise QA.tech for quickly mapping app workflows, generating useful test suites, and catching bugs that traditional scripts miss. Users highlight intuitive setup, live agent interactions that ease fine-tuning, and faster regression cycles that help teams ship with confidence. Several note dependable coverage and clear bug reports. Critiques focus on performance lag and onboarding friction, with one serious callout about plaintext password exposure that needs urgent remediation. Overall sentiment is strongly positive, especially for startups and fast-moving teams, provided security and speed improve.
Congrats on the launch! This is a compelling take on QA automation. I'm curious about how QA.tech handles the nuances of testing products with complex user flows or those requiring specific business logic validation. Do your agents learn from existing test cases, or do they generate test scenarios from scratch by exploring the product?
QA.tech
@osakasaul Thanks! We can certainly test complex user flows, since we support dependency chains and data passing between tests. We generate tests using all available information, such as graph data from your product and existing test cases. With every new test and run, we crawl your product even deeper, so our knowledge of it is constantly growing.
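The dependency chains and data passing mentioned above can be sketched roughly like this. This is an illustrative model, not QA.tech's actual API: each step receives the data accumulated by earlier steps and may add to it.

```typescript
// Illustrative sketch of a test dependency chain with data passing.
// Step names and data fields are hypothetical stand-ins for real agent runs.
type Step = {
  name: string;
  run: (data: Record<string, string>) => Record<string, string>;
};

// A real chain would drive a browser; these stubs just model the data flow.
const chain: Step[] = [
  { name: "sign up", run: (d) => ({ ...d, userId: "u-123" }) },
  { name: "create project", run: (d) => ({ ...d, projectId: `p-for-${d.userId}` }) },
  { name: "invite teammate", run: (d) => ({ ...d, inviteSent: "true" }) },
];

// Run the steps in order, threading the accumulated data through each one.
function runChain(steps: Step[]): Record<string, string> {
  return steps.reduce((data, step) => step.run(data), {} as Record<string, string>);
}
```

The key point is that "create project" can consume the `userId` produced by "sign up", so later tests don't need to set up their own state from scratch.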
PR review is the bottleneck every team complains about but no one fixes. Curious what the false-positive rate looks like in practice: does the agent flag cosmetic changes (whitespace, renames) as issues, or only real regressions? Congrats on the launch!
QA.tech
@mariomonteiro Thanks for your support! We can flag some cosmetics if you prompt for it, but we focus on real regressions. False positives can happen, though false negatives are more likely; we try to filter those and simply note that some things were left untested when our agents can't figure them out.
DiffSense
Ahh that sounds cool. so is it like: 1. Commit a login change. 2. AI writes tests focused on login UX. 3. Runs the tests in a headless CI browser. 4. Shows the result in the PR?
QA.tech
@conduit_design Yes, pretty much it - and then we write a comprehensive review and suggestions on things that could be improved!
QA.tech
@conduit_design You got it! Feel free to try it out.
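The four-step flow confirmed above can be sketched in miniature. This is a hypothetical model, not QA.tech's implementation: a path-based heuristic stands in for the AI's analysis of the commit, and a plain string stands in for the PR comment it posts.

```typescript
// Hypothetical sketch of the flow: pick a test focus from the changed files,
// then turn headless-browser results into a PR comment. Names are illustrative.
type TestResult = { name: string; passed: boolean; note?: string };

// Step 2: choose what the generated tests should focus on.
// (Assumption: a simple path heuristic replaces the real commit analysis.)
function pickFocus(changedFiles: string[]): string {
  if (changedFiles.some((f) => f.includes("login"))) return "login UX";
  if (changedFiles.some((f) => f.includes("checkout"))) return "checkout flow";
  return "smoke tests";
}

// Step 4: summarize results (from the headless CI run) into a PR comment body.
function prComment(focus: string, results: TestResult[]): string {
  const failed = results.filter((r) => !r.passed);
  const header = `QA run (${focus}): ${results.length - failed.length}/${results.length} passed`;
  const details = failed.map((r) => `- FAIL ${r.name}: ${r.note ?? "no details"}`);
  return [header, ...details].join("\n");
}
```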
DiffSense
@patricklefΒ When will pricing be announced?
how do you handle flaky tests eating the pr signal?
every team i've seen roll out automated pr checks ends up with devs reflexively re-running until green, at which point the whole system is just theater. wondering what the stance is
QA.tech
@saad_el_gueddari We let the agents assess the results before posting to the user, so we know they're real issues actually caused by the code changes and not unrelated failures.
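One common shape for this kind of flake filtering (a sketch under my own assumptions, not QA.tech's actual logic) is to re-run a failing check and only surface failures that reproduce on every attempt:

```typescript
// Illustrative flake filter: a failure is only reported if it reproduces
// across all attempts; a single pass is treated as a transient flake.
// A real run would be an async headless-browser session; a boolean stands in here.
function assessFailure(
  run: () => boolean, // true = check passed
  retries = 2,
): "pass" | "real-failure" {
  for (let i = 0; i <= retries; i++) {
    if (run()) return "pass"; // passed at least once: likely transient
  }
  return "real-failure"; // failed consistently on every attempt
}
```

Consistent failures then go to the agent's assessment step; one-off flakes never reach the PR, which is what keeps the signal from turning into the "re-run until green" theater described above.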
QA.tech
"That's weird. It works on my machine." If I had a cent for every time I've heard that, I wouldn't need to work anymore.
We're fixing one of the most painful parts of development: testing new things while they're still being built.
From the developer side, you want fast feedback, clear repro steps, and all of it while you're still in the zone. From the quality side, we bring your requirements into one place and help make sure every PR meets the bar, whether it was written by a human or generated by AI.
Built on Vercel (Vercel Functions, AI SDK, Next.js) and works out of the box with Environments and Preview Deployments.
Give it a try and let me know what you think!
@daniel_mauno_pettersson For AI-generated PRs, how does it auto-generate those repro steps or requirements from just a prompt?
QA.tech
@dayal_punjabi We do a few things:
- We look at the changed files and the PR description
- If you connect Linear or Jira, we can fetch the corresponding ticket and read the specs
- We use your existing test cases and the data we have scraped from your product to generate better tests
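The three context sources listed above can be thought of as one merged prompt for test generation. A minimal sketch, assuming hypothetical field names (this is not QA.tech's real data model):

```typescript
// Illustrative context aggregation: combine changed files, PR description,
// an optional ticket spec, and existing tests into one generation prompt.
interface PrContext {
  changedFiles: string[];
  description: string;
  ticketSpec?: string; // fetched from Linear/Jira when connected
  existingTests?: string[]; // prior test cases for the touched area
}

function buildTestGenPrompt(ctx: PrContext): string {
  const parts = [
    `Changed files:\n${ctx.changedFiles.join("\n")}`,
    `PR description:\n${ctx.description}`,
  ];
  if (ctx.ticketSpec) parts.push(`Ticket spec:\n${ctx.ticketSpec}`);
  if (ctx.existingTests?.length)
    parts.push(`Existing tests:\n${ctx.existingTests.join("\n")}`);
  return parts.join("\n\n");
}
```

The optional fields matter: a PR with a linked ticket gets richer requirements to test against, while a bare prompt-generated PR still has the diff and description to work from.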
running AI regression + exploratory tests on every preview deploy is such a good use of agent time, way more valuable than yet another chat ui. how flaky are the runs in practice? curious if you've had to build in any self-healing or retry logic for transient UI stuff.
An idea for developing your service: create separate agents for each knowledge block. For example, an agent that tests SEO optimization, an agent that tests microformats, an agent that tests readiness for sharing on social networks, etc. We once created testing checklists with such "small details," and ended up with 400 parameters that need to be checked in every project. It is extremely difficult to manually verify all of this.
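The per-knowledge-block agent idea above could be structured as a map from agent to checklist. A minimal sketch with illustrative stub checks (the real checks and agent names would of course be far richer):

```typescript
// Sketch of specialized checker agents, each owning its own checklist.
// Checks here are trivial HTML-string stubs standing in for real agent runs.
type Check = { name: string; run: (html: string) => boolean };

const agents: Record<string, Check[]> = {
  seo: [
    { name: "has <title>", run: (h) => /<title>.+<\/title>/.test(h) },
    { name: "has meta description", run: (h) => h.includes('name="description"') },
  ],
  social: [
    { name: "has og:title", run: (h) => h.includes('property="og:title"') },
  ],
};

// Run every agent's checklist and collect failed check names per agent.
function audit(html: string): Record<string, string[]> {
  const failures: Record<string, string[]> = {};
  for (const [agent, checks] of Object.entries(agents)) {
    const failed = checks.filter((c) => !c.run(html)).map((c) => c.name);
    if (failed.length) failures[agent] = failed;
  }
  return failures;
}
```

Scaling this shape to 400 parameters is mostly a matter of growing the checklists; the per-agent grouping keeps each knowledge block's results separately reportable.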