Launched this week
Zavi AI - Voice to Action OS
Voice that types, edits, sees and takes action in every app.
399 followers
Live on iOS, Android, Mac, Windows, and Linux. No credit card. Most voice tools just transcribe. Zavi types, edits, and takes action. And it's free. Speak naturally — clean grammar, zero filler words. 50+ languages. Any app. Magic Wand: highlight text, say "make this shorter" or "translate to Spanish" — rewritten in place. Agent Mode: "Email Sarah about the meeting" — sent via Gmail. "Post in #general" — posted to Slack. GitHub, Notion, Calendar, WhatsApp & 20+ more. Just speak. Zavi does it.
Zavi AI - Voice to Action OS
Live on iOS, Android, Mac, Windows and Linux at https://www.zavivoice.com/download (Free & No Credit Card)
Hey everyone, I'm Raman, one of the makers of Zavi.
I started building this because I was spending way too much time typing things I could just say out loud. Every voice typing tool I tried felt like a rough draft that I'd have to go back and fix anyway. So I built one that actually gets it right the first time. Clean grammar, no ums, no filler words.
But then I kept pushing it further and that's where it got interesting.
Magic Wand: You highlight any text in any app, tap the wand, and just say what you want. "Make this shorter." "Translate to French." "Make it sound more professional." It rewrites the text right there in place. No copying, no switching apps.
Agent Mode: You say "email Sarah about the meeting tomorrow" and it actually sends through your Gmail. "Post in general on Slack that I'll be late" and it does it. It connects to Gmail, Slack, GitHub, Notion, LinkedIn, Telegram and more.
It also does real time translation across 50+ languages. You speak in English and it types in Japanese or Spanish or whatever you pick.
Works on iOS, Mac, Android, Windows and Linux. Completely free, no ads, no credit card required. We just want people to use it.
Would love for you to try it and tell me what you think. Happy to answer any questions here all day!
@ramangoyal Hey, quick follow-up: I managed to fix the activation key issue, but now I'm getting a message asking me to authenticate the app. Is there a specific step I need to complete to get past this? I'd love to start using it properly. Thanks!
Zavi AI - Voice to Action OS
@martingebara
Hi! Great that the activation key is working now! 🎉
The authentication prompt you're seeing is likely the macOS permissions step. Zavi AI needs a few system permissions to work properly:
Go to System Settings → Privacy & Security → Accessibility and toggle on Zavi AI
You may also need to grant Screen Recording and/or Microphone access in the same section
After enabling permissions, restart the app
If you're seeing a different type of authentication prompt, could you send me a screenshot? I'll get you sorted right away!
Thanks!
The framing is right — most voice tools stop at transcription, and Zavi pushing into actual execution is where the product gets interesting. The hard problem in Agent Mode isn't execution itself though, it's intent disambiguation: "email Sarah about the meeting tomorrow" has several valid interpretations depending on context (which Sarah? what specifically about the meeting? what tone?). Curious how Zavi handles underspecified commands — does it ask a clarifying question inline, proceed with a best guess and confirm, or surface the ambiguity in some other way?
Zavi AI - Voice to Action OS
@giammbo
You’re spot on — intent disambiguation is the real challenge in Agent Mode.
If you say something like “email Sarah about the meeting tomorrow” and Zavi can’t confidently resolve which Sarah you mean, it does not execute blindly.
Instead, it responds with something like:
“I couldn’t find a clear match for Sarah. Do you mean Sarah Mehta (Marketing) or Sarah Lee (Design)?”
If there’s genuinely no match, it simply says it couldn’t find any Sarah and doesn’t proceed without clear instructions.
We’ve intentionally designed it to default to safety and confirmation over assumption, especially for actions like email, calendar, or file changes.
Execution only happens once the intent is unambiguous.
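Zavi's internals aren't public, but the confirm-before-execute pattern described above can be sketched roughly like this. All function names, thresholds, and contact data here are illustrative assumptions, not Zavi's actual code:

```python
# Illustrative sketch of confirm-before-execute recipient resolution.
# All names and data are hypothetical, not Zavi's actual implementation.

def resolve_recipient(name, contacts):
    """Return (status, payload): one match executes, several ask, none aborts."""
    matches = [c for c in contacts if c["first"].lower() == name.lower()]
    if len(matches) == 1:
        return "execute", matches[0]
    if matches:
        options = " or ".join(f"{c['first']} {c['last']} ({c['team']})" for c in matches)
        return "clarify", f"I couldn't find a clear match for {name}. Do you mean {options}?"
    return "abort", f"I couldn't find any contact named {name}."


contacts = [
    {"first": "Sarah", "last": "Mehta", "team": "Marketing"},
    {"first": "Sarah", "last": "Lee", "team": "Design"},
]

status, payload = resolve_recipient("Sarah", contacts)
print(status)   # clarify: two Sarahs, so the agent asks instead of guessing
```

The point of the three-way return value is that "no match" and "ambiguous match" both block execution; only exactly one confident match falls through to the send path.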
@ramangoyal That disambiguation UX — surfacing "Sarah Mehta (Marketing) or Sarah Lee (Design)?" with role context — is exactly the right pattern, much better than a generic list or a silent failure. The follow-up question for me is memory: after I've picked Sarah Lee twice for meeting-related emails, does Zavi start weighting that preference, or does it always confirm to stay safe?
Zavi AI - Voice to Action OS
@giammbo Exactly the right question.
We’re building a lightweight preference memory layer on top of disambiguation.
If you explicitly pick “Sarah Lee (Design)” for meeting-related emails more than once, Zavi starts assigning a higher confidence weight to that pairing in similar contexts. So next time, if the intent and context match, it can default to Sarah Lee and show a subtle confirmation, rather than a full clarification prompt.
However, it does not hard lock. If:
• the context shifts significantly
• another Sarah becomes more relevant
• confidence drops below a threshold
Zavi falls back to explicit disambiguation.
So the model evolves from:
Always confirm → Contextual default with soft confirmation → Explicit disambiguation when uncertainty rises.
The goal is to reduce friction without ever silently guessing on high-ambiguity actions like email or calendar.
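That three-stage progression can be sketched as a tiny policy over a learned preference weight. The threshold value, the per-choice increment, and every name below are invented for illustration; they are not Zavi's actual parameters:

```python
# Hypothetical sketch of the confirm -> soft-confirm -> re-ask progression.
# The weighting scheme and thresholds are assumptions, not Zavi's real values.

SOFT_CONFIRM_AT = 0.6  # enough explicit history to default with a light confirmation

def record_choice(memory, context, contact):
    """Bump the preference weight each time the user explicitly picks a contact."""
    key = (context, contact)
    memory[key] = min(1.0, memory.get(key, 0.0) + 0.35)

def prompt_mode(memory, context, contact):
    """Decide how much friction to apply, based on accumulated preference."""
    weight = memory.get((context, contact), 0.0)
    if weight >= SOFT_CONFIRM_AT:
        return "soft-confirm"          # default to this contact, subtle confirmation
    return "explicit-disambiguation"   # full clarifying prompt

memory = {}
print(prompt_mode(memory, "meeting-email", "Sarah Lee"))  # explicit-disambiguation
record_choice(memory, "meeting-email", "Sarah Lee")
record_choice(memory, "meeting-email", "Sarah Lee")
print(prompt_mode(memory, "meeting-email", "Sarah Lee"))  # soft-confirm (0.70 >= 0.6)
```

A real system would also decay the weight when context shifts or confidence drops, which is the fallback to explicit disambiguation described above.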
@ramangoyal That three-stage arc — always confirm → soft default → explicit when uncertain — is exactly how trust should be earned in agents, not assumed. The middle state is the key design insight: enough friction to stay auditable, not so much it becomes noise. Really well thought through — looking forward to seeing how Zavi evolves.
@ramangoyal @hsyvy
Please find below my inputs from using the iOS app:
1. Please add the Google and Apple logos on the login page.
2. When I tap "Open Keyboard Settings", it goes to Settings → Apps instead of Settings → Zavi → Keyboards.
3. Bug: the microphone toggle is missing under Settings → Apps → Zavi. When I tap "Open Settings" it reaches the app's settings page, but there is no microphone option there.
4. No. 3 is a blocker here; please complete full testing before launching the product.
5. I'm uninstalling the app.
Zavi AI - Voice to Action OS
@hsyvy @niravpl41 Hi Nirav,
We have rechecked this on multiple iOS devices and are not able to reproduce the microphone toggle issue under Settings → Privacy & Security → Microphone or Settings → Zavi.
It is possible this may be device-specific or related to a previous permission state. Could you please confirm:
• iPhone model
• iOS version
• Whether microphone permission was previously denied
• A screen recording of the issue
Regarding the keyboard redirect, it should open Settings → Zavi → Keyboards. We are verifying the deep link behavior again.
We take blocking issues very seriously and will resolve immediately if reproducible.
Thank you.
How are you validating real user behavior at Zavi AI right now?
Zavi AI - Voice to Action OS
@danilpond Great question.
We’re validating real user behavior through product usage signals rather than vanity metrics, and we’re doing it in a privacy-first way.
1. Action-based validation, not just dictation
We measure how many users go beyond simple voice typing and use higher intent features like:
Magic Wand edits such as “make this shorter” or “rewrite professionally”
Agent Mode actions such as sending an email or posting in Slack
Actual execution of tasks across apps is our strongest validation signal.
2. Completion rates
For agent workflows, we track whether the action is successfully completed end-to-end. A spoken command that results in a successfully sent email is a very different signal from just generating draft text.
3. Repeat and retention
We monitor 1-day and 7-day repeat usage, and the number of voice actions per session. Habit formation across multiple apps is the key metric for us at this stage.
4. Cross-app depth
We look at how many different tools a user connects and executes actions in. If Zavi becomes a layer across Gmail, Slack, Notion, etc., that indicates real workflow adoption.
All of this is done through anonymized event-level analytics. We do not inspect user content or track private data inside connected apps.
Since we’re early and free right now, our primary validation is execution volume, retention, and workflow depth rather than revenue.
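The anonymized, action-level analytics described above, counting what kind of action happened and where without storing any content, might look something like this hypothetical sketch (event names and apps are made up for illustration):

```python
# Hypothetical sketch of content-free, action-level analytics.
# Only the event type and target app are recorded, never what was said or sent.

from collections import Counter

events = Counter()

def track(event_type, app):
    """Record that an action happened; no transcript or message body is stored."""
    events[(event_type, app)] += 1

track("agent_action_completed", "gmail")
track("agent_action_completed", "slack")
track("magic_wand_edit", "notion")
track("agent_action_completed", "gmail")

# Cross-app depth: how many distinct apps saw a completed agent action.
depth = len({app for (kind, app) in events if kind == "agent_action_completed"})
print(depth)  # 2
```

Counting (event type, app) pairs is enough to compute completion rates, repeat usage, and cross-app depth, which is why no message content ever needs to leave the device for these metrics.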
@danilpond Could you clarify what you mean by "validating real user behavior"?
Saw your post on Reddit, and wanted to congratulate you on an epic launch! Zavi looks great. I saw in the comments below you mentioned execution only happens once intent is unambiguous. How is this handled for file execution commands? Is a preview of the proposed changes displayed prior to execution?
Zavi AI - Voice to Action OS
@deadlygrateful Great question.
Execution only happens once intent is confidently resolved. For file-related commands, if there’s any ambiguity, Zavi either asks a quick clarifying question or shows a confirmation step.
For actions that modify files, we present a preview of the proposed changes before execution. Nothing is silently overwritten. You can review, confirm, or cancel.
So the flow is: detect intent → resolve ambiguity → preview changes → execute.
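That detect → resolve → preview → execute flow can be sketched with a dry-run diff as the preview step. This uses Python's standard `difflib`; the function names and the confirmation flag are hypothetical, not Zavi's API:

```python
# Hypothetical sketch of preview-before-execute for file edits.
# difflib produces the "proposed changes" the user reviews before anything is written.

import difflib

def preview_change(original, proposed):
    """Return a unified diff the user can review before confirming."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile="current", tofile="proposed",
    ))

def apply_if_confirmed(current, proposed, confirmed):
    """Nothing is overwritten unless the user explicitly confirms."""
    return proposed if confirmed else current

original = "Meeting at 3pm\n"
proposed = "Meeting at 4pm\n"
diff = preview_change(original, proposed)
print(diff)  # shows -Meeting at 3pm / +Meeting at 4pm for review

result = apply_if_confirmed(original, proposed, confirmed=False)
print(result == original)  # True: cancelled, so the file is untouched
```

The key property is that the write is gated on an explicit confirmation, so a cancelled or ignored preview leaves the original content byte-for-byte intact.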
Himanshu here, co-maker of Zavi. Super excited to finally share this with you all today.
Raman nailed the introduction, so I'll just add that building the agentic flows that actually do the work has been an incredible journey. It is amazing to send an email or update Slack using just your voice. We built Zavi to be the ultimate productivity multiplier, and we want everyone to experience this new way of interacting with their devices.
Drop your questions, feature requests, or any edge cases you find. We're here all day to chat!
@hsyvy Great job Himanshu and Raman — hats off for the speed and accuracy of interpretation in this initial version! I even tried it in Hindi, and it was working fantastic — that’s a huge achievement.
A couple of thoughts from my side as you continue refining:
• Integration flow: Right now, needing to open the app each time adds a step. A more seamless integration would make the experience even smoother.
• Language support: Extending to Hinglish could be a big win, especially for users who naturally mix Hindi and English in daily communication.
• Microphone access: Currently it asks for full-time access; restricting it to “while using” would reassure users on the security front.
Overall, this is a fantastic start — excited to see how Zavi evolves from here!
Zavi AI - Voice to Action OS
@hsyvy @ricoman
Thank you so much for the detailed feedback and for testing it in Hindi — that means a lot.
Really glad to hear the interpretation speed and accuracy stood out, especially across languages.
On your points:
• Integration flow: Completely agree. Reducing friction is a big priority for us. We are actively working on making invocation more seamless so it feels like a native layer, not an app you have to “open.”
• Hinglish support: Strong point. A lot of real world communication is mixed language, and we’re improving our multilingual blending models to better handle natural Hindi English switches.
• Microphone permissions: Fair concern. We ask for broader access today to support hands free activation and agent workflows, but we’re exploring tighter permission scopes and clearer controls to increase user trust.
Appreciate you taking the time to go this deep. Feedback like this genuinely helps shape the roadmap.
Thanks, Rohan, for the valuable feedback. We currently support Hinglish, but that isn't clear from the UI; we will definitely improve it in an upcoming release.
me in 2010: 'voice control will never work'
me in 2026: verbally arguing with my computer about comma placement
Zavi AI - Voice to Action OS
@ilya_lee Plot twist: it understood you perfectly in 2010. It was just waiting for better microphones and your grammar to evolve.