Vy by Vercept

AI agent that uses your computer, cross-platform, no APIs

4.6 (5 reviews) · 152 followers

Vy is a cross-platform AI agent for Mac and Windows that uses your computer like a real assistant. It clicks, types, and navigates your apps without any API integrations.
This is the 2nd launch from Vy by Vercept.

Vy

Launched this week
AI agent that uses your computer, cross-platform, no APIs
Experience Vy, a new era of human-computer interaction. No more clicking, memorizing shortcuts, or navigating menus. Just tell Vy what you need, and watch the magic happen.
Free Options

Kiana Ehsani
Hi HN, Kiana here, CEO and co-founder of Vercept, the company behind Vy. I have spent about a decade doing AI research, and for the last year I have been working on Vy full time.

We built it because we kept running into the same gap over and over again. A lot of real work still happens in UIs that either have no APIs or only partial ones, and existing automation tools break down as soon as you leave a single app or a clean web flow.

Vy is a desktop app for Mac and Windows. It sees what is on your screen and controls the mouse and keyboard, so it can automate workflows across native apps and browsers without relying on APIs, DOM selectors, or app-specific integrations. You can watch every step, pause it, take over manually, and turn successful runs into reusable workflows.

I am skeptical of a lot of agent demos in this space, and we try to be very explicit about where Vy works and where it does not. It works best for bounded tasks that a human could explain in a paragraph and complete in a few minutes, especially workflows that involve clicking, typing, scrolling, and copying across a small number of familiar tools. It does not work well for very long unsupervised runs, pixel-precise creative work, or some highly dynamic custom UIs. When it fails, it usually fails in visible ways, which is important to us.

This is still early. We have real users relying on it, but we are very much in the stage of tightening reliability, understanding failure modes, and learning which workflows should or should not be delegated to a UI-level agent.

I am happy to answer detailed questions about the technical approach, tradeoffs versus API-based automation, and places where this approach breaks down. I am also very interested in concrete examples of tasks where you think this would be genuinely useful or clearly not trustworthy. Thanks for taking a look and for any feedback.
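For readers wondering what "UI-level automation" means concretely, the sketch below shows the general shape of a see-act loop: capture a screenshot, ask a planner for the next step, then perform that step with the mouse and keyboard. This is purely illustrative and not Vy's code; plan_next_action and the action format are hypothetical placeholders, and pyautogui is simply one common library for driving the cursor and keys.

    import time
    import pyautogui  # one common mouse/keyboard automation library; not necessarily what Vy uses


    def plan_next_action(screenshot, goal):
        """Hypothetical planner: given a screenshot and a goal, return the next
        UI action as a dict, or None when the task looks complete."""
        # A real agent would query a vision model here; this is only a stub.
        raise NotImplementedError("plug in a planner of your choice")


    def run_task(goal, max_steps=20, settle_seconds=1.0):
        """Generic see-act loop: look at the screen, decide, act, repeat."""
        for _ in range(max_steps):
            frame = pyautogui.screenshot()            # see what is on screen
            action = plan_next_action(frame, goal)    # decide the next step
            if action is None:                        # planner reports completion
                return True
            if action["type"] == "click":
                pyautogui.click(action["x"], action["y"])
            elif action["type"] == "type":
                pyautogui.typewrite(action["text"], interval=0.05)
            elif action["type"] == "scroll":
                pyautogui.scroll(action["amount"])
            time.sleep(settle_seconds)                # give the UI time to settle
        return False  # step budget exhausted; hand control back to the human

The point of the sketch is the contrast with API-based automation: nothing here depends on endpoints or DOM selectors, which is also why reliability hinges entirely on how well the planner reads the screen.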