Launching today
theORQL is vision-enabled frontend AI. It takes UI screenshots, maps UI to code, triggers real browser interactions, and visually verifies the fix in Chrome before shipping a reviewable diff, so UI fixes land right the first time. 1200+ downloads to date. Download it free for VS Code and Cursor.

theORQL
Hey Product Hunt!!!
We built theORQL because most AI coding tools are blind: they generate code that looks right in text, but renders wrong in the browser.
theORQL closes the loop between your UI and your codebase:
takes screenshots of the UI (full page + elements)
reads DOM + computed styles + network + console
maps a UI element to the owning component (via source maps)
applies a change, visually verifies it in the browser, then gives you a reviewable diff (no auto-commit)
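For the curious, the "maps a UI element to the owning component" step above can be illustrated with a tiny source-map lookup. This is a minimal, hypothetical sketch of the general technique (resolving a generated position back to the original source file via a JS source map's base64-VLQ "mappings" field); it is not theORQL's actual implementation, which isn't public.

```python
# Hypothetical sketch: resolve a generated (line, column) from the bundle
# back to the original source file using a Source Map v3 "mappings" string.
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def vlq_decode(segment):
    """Decode one base64-VLQ segment into a list of signed ints."""
    values, shift, value = [], 0, 0
    for ch in segment:
        digit = B64.index(ch)
        value |= (digit & 31) << shift
        if digit & 32:          # continuation bit set: keep reading digits
            shift += 5
        else:                   # low bit of the finished value is the sign
            values.append(-(value >> 1) if value & 1 else value >> 1)
            shift, value = 0, 0
    return values

def resolve(source_map, gen_line, gen_col):
    """Return (source file, original line, original col) for a generated
    position, or None. Source/line/col deltas carry across lines per the
    Source Map v3 spec; the generated column resets on each ';'."""
    src = line = col = 0
    for ln, line_str in enumerate(source_map["mappings"].split(";")):
        gcol = 0
        for seg in filter(None, line_str.split(",")):
            fields = vlq_decode(seg)
            gcol += fields[0]
            if len(fields) >= 4:
                src += fields[1]; line += fields[2]; col += fields[3]
            if ln == gen_line and gcol == gen_col:
                return source_map["sources"][src], line, col
    return None
```

With a map like `{"sources": ["Button.tsx"], "mappings": "AAAA,IAAI"}`, `resolve(sm, 0, 4)` maps generated column 4 back to `Button.tsx` at line 0, column 4 (`Button.tsx` here is an invented example file, not from theORQL).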
If you try it, what should we focus on next: layout/CSS issues, state bugs, or flaky/hard-to-repro bugs?
And what’s one workflow you’d pay to never do manually again?
I'm very keen to try this. Do you think this would have a problem with more complex UI flows that use gestures (click and hold, etc.)? I've been working with React Flow for a node interface, and debugging problems with that library is such a pain, especially when it comes to adding features like drag and drop. Would love to hear anyone's experience with this.
theORQL
@haxybaxy Thanks for your comment, Zaid! And yes, gesture-heavy flows (drag/drop, click-and-hold, resize handles, canvas-style UIs like React Flow) are exactly where text-only AI tends to fall apart, because the “bug” is usually in the interaction and state timing, not just the code.
theORQL can reliably reproduce the gesture and capture the right evidence (UI screenshots + DOM/state signals + console/network) while it’s happening. Simple interactions (clicks, typing, resizes) are straightforward today; more complex gestures can be trickier depending on how the library implements pointer events and what needs to be simulated.
If you’re up for it, I’d love to learn a bit:
Is it HTML/SVG/canvas in your case?
What’s the specific pain point: drag not starting, drop target logic, node position/state desync, edge routing, or performance/jank?
We can try it against your React Flow setup so you can see what theORQL can reproduce and verify today (it's free to install, and I'm happy to give you a live demo too).
I can't think of a better debugging tool than this. You simply stay in your browser and the tool does the debugging.
Been using it for a while now and really appreciate the good work from the team.
theORQL
@nobert_ayesiga Thank you so much, Wise!!!! Happy that you've been using theORQL already. I'm curious: what's the most useful workflow so far?
Adjust Page Brightness - Smart Control
This is one of the greatest products I have ever seen on Product Hunt. Very helpful for developers like me.
theORQL
@kshitij_mishra4 Thank you so much!! What is the biggest pain you're having in your workflow? We want to help :saluting_face:
The problem isn’t “AI can’t code frontend.” It’s that most AI is blind. It can only guess from text and patterns, then hope the UI renders the way you meant.
I've been using theORQL for the last couple of months. I've actually written some articles and created some videos about it as well, but now I'm very impressed with 2 of the new features:
Vision: theORQL can actually see the UI (screenshots) and verify changes in Chrome
Auto Repro → Fix → Verify loop for the really tough bugs (theORQL will actually click buttons, resize the page, fill forms, etc., to reproduce bugs and fix them)
Debugging is the proof case. If you can reproduce a bug, you can fix it; the hard part is getting to a stable repro and the right evidence.
theORQL runs an Auto Repro → Fix → Verify loop: trigger the UI flow (clicks, fills, resizes), capture evidence (screenshots + runtime signals), propose a fix, then re-run and visually confirm it’s gone.
It’s not autonomous chaos. It ships a reviewable diff and never auto-commits. Developers stay in control.
In conclusion:
⚠️ What makes this different from Copilot/Cursor: they’re great at text-in/text-out. theORQL is UI-in/code-out, because it can actually see what rendered.
🔑 What this unlocks: faster frontend iteration, fewer “tweak → refresh” loops, and more trust that the change actually worked before you merge it.
🤝 The bet: the next step for AI dev tools isn’t bigger models. It’s closing the verification loop with vision, interaction, and real runtime evidence.
theORQL
@eleftheria_batsou Wow thank you Eleftheria! So great to hear from you here and thanks for your support. We're building even more features for frontend devs now. If you have any you'd like to see please let us know in the comments!
Learnify
Such a helpful project for developers. Really like using it.
Congratulations on the launch 🎉
theORQL
@shefali_j07 Thank you so much!! If we could add one killer feature for you, what would it be?
Congratulations on the launch. Will try this out today.
theORQL
@ahmednabik Amazing, thank you for your support! Try it out; we'd love to hear about your experience.