The first version taught us a lot: what worked, what didn't, where people got stuck, and where things felt magical. We've rebuilt much of Velo from the ground up based on that.
Before we hit launch, I want to ask this community directly:
What would you expect from a product like Velo today?
Velo is trying to make it effortless to express ideas through video, using your screen, your voice, and AI, without needing to be good at making videos.
Velo
Hey Product Hunt community! 👋
I'm Ajay, co-founder & CTO of Velo. A few weeks ago, we launched Velo 1.0 here - an AI tool that turns a raw screen recording into a polished, share-ready video message.
The response blew us away. But one thing kept coming up in the feedback: "It works, but it doesn't feel effortless yet."
That stayed with us. For a tool built to make communication easier, "not effortless" is the whole problem.
So we ideated, iterated, and are finally shipping Velo 2.0 here today.
Here's everything new in Velo 2.0:
Chat-native interface: Chat with Velo to shape your recording into a polished video - add narration, edit scripts, add voice clones, effects, and more, all through conversation.
Streaming processing: Processes as you record, for noticeably faster previews
Create video messages + documentation: Record once, get a polished video and a structured written document out of the same session
New voice model: Emotive, human, captures tone and energy, not just acoustics
Silent video handling: No audio in your video? Velo understands the context and generates the script instead of failing
Video to Doc: One click turns any video into a written article
Context-aware script rewriter: Set your persona and audience, Velo rewrites the narration
We've been using 2.0 internally for weeks. It's the first version where I make a video and don't feel like something's slightly off.
Please try it out and let us know your thoughts: usevelo.ai
We're all ears. Thank you for your support.
@ajaykumar1018 As someone who’s hunted nearly 500 products, I’ve noticed a familiar pattern on Product Hunt: many makers launch, politely acknowledge feedback, and then either take far too long to act on it or never act on it at all.
The Velo team has been the exact opposite... they took every piece of feedback from our very first call and first launch seriously, and they’ve already shipped it.
Today, I’m genuinely excited to introduce Velo 2.0. Everything the PH community asked for just last month is already live in the product, making @Velo more powerful than ever in under a month.
Why do I endorse Velo?
Having seen countless video AI products launch on Product Hunt, I believe this one has the potential to be the best of them. I’m proud to endorse it... for the team behind it, for their pace of innovation, and for how completely it covers everything you need to create videos.
My favorite use cases:
My favorite ways to use Velo are for demo videos, product walkthroughs, tutorials, SOP recordings for my team, and more. Give it a try today, and tell us how we can make Velo even more useful for you! :)
Velo
@ajaykumar1018 @rohanrecommends Thank you so much, Rohan
Velo
@rohanrecommends Really appreciate this, Rohan. We got a ton of clear feedback after the first launch and just tried to move fast on it.
Velo 2.0 is very much shaped by that. Thanks for the support and for hunting us again 🙌
PicWish
@ajaykumar1018 massive potential for sales ICPs here. any plans to build api so we can autolog generated docs into CRM?
Velo
@ajaykumar1018 @mohsinproduct Yep, MCPs and APIs are all coming soon
Velo
@mohsinproduct That's on our roadmap :)
Visla
@ajaykumar1018 Congrats on the launch!
RiteKit Company Logo API
@ajaykumar1018 The chat-native interface is a smart move—feels like you're solving the real friction point from 1.0 feedback. The streaming processing + simultaneous doc generation combo is genuinely useful for teams that need both formats. Curious how the context-aware script generation handles niche technical workflows or industry-specific terminology.
Chat-native editing on raw screen recordings is a meaningfully better mental model than timeline editing for B2B creators. I produce a finance podcast on the side (the ModeLoop podcast) and the bottleneck is never the conversation — it's the post: trim, intro, captions, clip extraction. Voice cloning + script rewriting could collapse 80% of that into a single chat turn for podcasters who want to repurpose long-form audio into 60s teaser videos. Curious whether Velo handles audio-only sources or strictly screen + voice, and whether the script editor preserves speaker turns?
Velo
@samir_asadov Screen and voice for now, but this is a super interesting insight for us.
Velo
@samir_asadov Great insight, thanks Samir.
What happens when the PDF you're turning into a video has a lot of charts and diagrams? Does it describe them, skip them, or try to animate them somehow?
Velo
@igorsorokinua It doesn't skip charts; Velo interprets and explains them as part of the narrative. We use it for research papers internally. Made this super quick for you: https://app.usevelo.ai/share/0ffd9297-2258-4414-bfdf-5498fc0c17fb
Been recording walkthroughs for the team and retyping the same stuff into notion forever. Does it handle longer demos cleanly or is it tuned more for short messages?
Velo
@ermakovich_sergey You can stretch it as much as you want; the longest I've done is 44 minutes
Velo
@ermakovich_sergey We’ve seen people use Velo for lectures and longer tutorials too. It handles longer demos quite well, would love for you to try it with your workflow
@ajaykumar1018 @sourav_sanyal thanks for your replies, guys!
Velo
@anusuya_bhuyan You make a voice clone, so even if a flight is taking off right next to you, your whole voice is re-rendered on the video
Velo
@anusuya_bhuyan We analyze diction, tone, and emphasis in your voice to automatically refine the video. For silent segments, our AI uses the surrounding context to decide how they should flow
Velo
Hey PH community
The mission behind Velo has always been the same - help people communicate better through video, without needing to be a video person.
No camera setup. No re-recordings. No editor. Just you explaining something clearly, and AI doing the rest.
Velo 2.0 gets us meaningfully closer to that. The chat-native interface means anyone can start without a learning curve. The new PDF flow actually captures what you want to say, not just what's on the slide.
The voice model sounds like a human having a conversation. And features like Take Control and Video to Doc open up use cases we couldn't touch before.
What can you expect when you try it:
A starting experience that guides you through Velo
Videos that sound like you - not a robot reading your script
PDFs that become narrated videos in minutes
A written doc from every video, with one click
Faster previews, sharper output, less waiting
We're a small team building something we genuinely believe in. Every launch on PH has shaped what Velo is. This one is no different.
Try it. Break it. Tell us what you think. We're here all day and open to all your feedback and thoughts.
Appreciate your support.
And to everyone who used V1 and stuck around - thank you.
Velo
@sourav_sanyal what I’m most excited about in this launch is how easy it is to get started. You don’t have to figure anything out, just prompt what you want, and Velo takes it from there.
A lot of this came directly from feedback on V1, really glad we could bring that into Velo 2.0 today
The chat-native editing is what got me. Curious how it handles more complex edits - like if I want to cut a 3-minute section and re-order two others, is that still a conversation, or does it get clunky?
congrats!