Launching today

theORQL
Cursor for frontend. Build and debug in Chrome and VS Code.
55 followers
theORQL is a vision-enabled frontend AI. It takes UI screenshots, maps UI → code, triggers real browser interactions, and visually verifies the fix in Chrome before shipping a reviewable diff, so UI fixes land right the first time. 1,200+ downloads to date. Download free for VS Code and Cursor.

theORQL
Hey Product Hunt!!!
We built theORQL because most AI coding tools are blind: they generate code that looks right in text, but renders wrong in the browser.
theORQL closes the loop between your UI and your codebase:
takes screenshots of the UI (full page + elements)
reads DOM + computed styles + network + console
maps a UI element to the owning component (via source maps)
applies a change, visually verifies it in the browser, then gives you a reviewable diff (no auto-commit)
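For the curious, the UI → code mapping step could be sketched roughly like this. All names below are hypothetical and only illustrate the idea of a source-map-backed index; this is not theORQL's actual API:

```typescript
// Hypothetical sketch: map a rendered UI element back to the component
// that owns it, via an index built from source maps. Illustrative only.

interface UiElement {
  selector: string;
  boundingBox: { x: number; y: number; w: number; h: number };
}

interface CodeLocation {
  file: string;
  line: number;
  component: string;
}

// Look up the owning component for an element using a
// (selector -> source location) index.
function mapElementToComponent(
  el: UiElement,
  sourceMapIndex: Map<string, CodeLocation>,
): CodeLocation | undefined {
  return sourceMapIndex.get(el.selector);
}

// Example with a stubbed index:
const index = new Map<string, CodeLocation>([
  [".price-badge", { file: "src/Badge.tsx", line: 12, component: "Badge" }],
]);

const hit = mapElementToComponent(
  { selector: ".price-badge", boundingBox: { x: 0, y: 0, w: 80, h: 24 } },
  index,
);
// hit?.component === "Badge"
```

In practice the index would be built by walking the app's source maps at dev-server startup, but the lookup shape is the interesting part: screenshot → element → code location → reviewable diff.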
If you try it, what should we focus on next: layout/CSS issues, state bugs, or flaky/hard-to-repro bugs?
And what’s one workflow you’d pay to never do manually again?
Adjust Page Brightness - Smart Control
This is one of the greatest products I have ever seen on Product Hunt; very helpful for developers like me.
theORQL
@kshitij_mishra4 Thank you so much!! What is the biggest pain you're having in your workflow? We want to help 🫡
The problem isn’t “AI can’t code frontend.” It’s that most AI is blind. It can only guess from text and patterns, then hope the UI renders the way you meant.
I've been using theORQL for the last couple of months. I've actually written some articles and created some videos about it as well, but now I'm especially impressed by two of the new features:
Vision: theORQL can actually see the UI (screenshots) and verify changes in Chrome
Auto Repro → Fix → Verify loop for the really tough bugs (theORQL will actually click buttons, resize the page, fill forms, etc., to reproduce bugs and fix them)
Debugging is the proof case. If you can reproduce a bug, you can fix it; the hard part is getting to a stable repro and the right evidence.
theORQL runs an Auto Repro → Fix → Verify loop: trigger the UI flow (clicks, fills, resizes), capture evidence (screenshots + runtime signals), propose a fix, then re-run and visually confirm it’s gone.
It’s not autonomous chaos. It ships a reviewable diff and never auto-commits. Developers stay in control.
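That Auto Repro → Fix → Verify loop boils down to simple orchestration logic. Here's a minimal sketch with the browser actions stubbed as callbacks; every name is hypothetical, and theORQL's real implementation is not public:

```typescript
// Hypothetical sketch of a repro -> fix -> verify retry loop.
// The browser-driving steps are injected as async callbacks so the
// control flow is visible without any real automation dependency.

interface LoopDeps {
  reproduce: () => Promise<boolean>; // drive the UI flow; true if the bug shows
  applyFix: () => Promise<void>;     // patch the working tree (never commits)
  maxAttempts: number;               // cap on fix attempts
}

// Returns true once a previously reproduced bug no longer appears
// after a fix attempt; the caller reviews the resulting diff.
async function reproFixVerify(deps: LoopDeps): Promise<boolean> {
  // No stable repro, nothing to verify against: bail out.
  if (!(await deps.reproduce())) return false;

  for (let i = 0; i < deps.maxAttempts; i++) {
    await deps.applyFix();
    // Re-run the same flow; if the bug is gone, the fix is verified.
    if (!(await deps.reproduce())) return true;
  }
  return false; // out of attempts, bug still reproduces
}
```

The key design point is that "verified" means the same interaction sequence was replayed and the failure visibly disappeared, not that the code merely compiled.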
In conclusion:
⚠️ What makes this different from Copilot/Cursor: they’re great at text-in/text-out. theORQL is UI-in/code-out, because it can actually see what rendered.
🔑 What this unlocks: faster frontend iteration, fewer “tweak → refresh” loops, and more trust that the change actually worked before you merge it.
🤝 The bet: the next step for AI dev tools isn’t bigger models. It’s closing the verification loop with vision, interaction, and real runtime evidence.
theORQL
@eleftheria_batsou Wow, thank you, Eleftheria! So great to hear from you here, and thanks for your support. We're building even more features for frontend devs now. If there are any you'd like to see, please let us know in the comments!