AI clipping agent for long-form video. Paste a YouTube link, upload a podcast, or call the API, and reap autonomously finds viral moments, writes animated captions in 98+ languages, translates into 100+ languages (Hinglish, Arabic, Spanish, and more), dubs in 80+ languages, reframes speakers for TikTok, Reels, and Shorts, and schedules the result to every platform. The same agent is available in the app, the REST API, a CLI, and an MCP server, so Claude, Cursor, and ChatGPT can run the whole pipeline with one prompt.
This is the 3rd launch from reap.
reap Agent
Launching today
Hey Product Hunt 👋 I’m Usama, founder of reap.
Most AI clipping tools are still generators.
You press a button, get 10 clips, open all 10, fix all 10, then schedule all 10. The AI changed where the work happens, but you are still doing the work.
reap is built as an agent-first video editor: autonomous, programmable, and end-to-end.
Point it at a YouTube channel or podcast feed and walk away. reap can clip, caption, reframe, dub, apply brand templates, and publish across platforms without turning every video into a manual review queue.
This only works if the output is reliable. The number we obsess over: on our test corpus, 95%+ of the clips reap ships are usable as-is.
No re-cropping.
No caption fixes.
No re-cutting the hook.
No re-timing.
Upload, come back, post.
Why the output is cleaner:
• Moment selection is multi-signal, not loudness-based. Faces, vocal tone, pauses, pacing, and topic relevance are scored before a frame gets cut.
• Speaker-aware reframing uses real-time tracking, not center-crop, so faces stay in frame even when people move or speakers switch.
• Captions are rendered, not just generated, with 50+ brand-ready presets and support for 98+ caption languages, 100+ translation languages, and 80+ dubbing languages.
• Romanized scripts like Hinglish, Arabizi, and Romanized Urdu are supported properly, which most tools still miss.
• Voice-matched AI dubbing works in the same pipeline and keeps speaker pacing instead of flattening everything into narrator voice.
The bigger shift is that reap is not just cutting clips. We are building a prompt-first, programmable video editor.
Other tools mostly find moments and export them in sequence. At best, they stitch a few gaps together.
reap lets you orchestrate nonlinear edits from the source: trailers, teasers, supercuts, multilingual clips, branded variants, and platform-specific versions assembled from any segment in the video, in any order.
That means you can use reap less like a manual editor and more like video infrastructure.
Three ways people use it:
Set it and forget it
Point the agent at your YouTube channel or podcast RSS. It clips every new episode, captions in your selected languages, applies your brand templates, reframes for each platform, and schedules to TikTok, Reels, Shorts, LinkedIn, and X.
Call it like an API or CLI
Use the REST API, the CLI, and webhooks to batch videos, publish clips, apply templates, and build your own workflows. Every paid plan, starting at $9.99/mo, includes API and CLI access. No enterprise gate.
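For a sense of what "video infrastructure" means in practice, here is a minimal sketch of submitting a clipping job over a REST API. The endpoint path (`/v1/clips`), field names, and response shape below are illustrative assumptions, not reap's documented contract; check the actual API docs before building on it.

```python
import json
import urllib.request

API_BASE = "https://api.reap.video/v1"  # hypothetical base URL, for illustration

def build_clip_job(source_url, caption_lang="en", template=None, platforms=None):
    """Assemble a JSON payload for a clipping job (illustrative field names)."""
    payload = {
        "source": source_url,
        "captions": {"language": caption_lang},
        "publish": {"platforms": platforms or []},
    }
    if template:
        payload["template"] = template
    return payload

def submit_clip_job(payload, api_key):
    """POST the job and return the parsed response (network call, sketch only)."""
    req = urllib.request.Request(
        f"{API_BASE}/clips",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build a job for one episode: Hinglish captions, brand template, two platforms.
job = build_clip_job(
    "https://youtube.com/watch?v=EXAMPLE",
    caption_lang="hi-Latn",           # e.g. romanized Hindi / Hinglish
    template="podcast",
    platforms=["tiktok", "shorts"],
)
```

From there, a webhook on job completion is the natural way to trigger publishing or review steps without polling.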
Drive it from Claude, Cursor, or ChatGPT
We ship a hosted MCP server at docs.reap.video/mcp. Add one URL to your MCP client and reap exposes 10 tools any LLM can call through natural conversation.
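Wiring a hosted MCP server into a client is typically a single config entry. The snippet below shows the general shape for a remote-server entry in an MCP client's JSON config; the exact key names vary by client, and the real server URL and setup steps are on the docs page mentioned above, so treat this as a sketch rather than copy-paste config:

```json
{
  "mcpServers": {
    "reap": {
      "url": "https://docs.reap.video/mcp"
    }
  }
}
```

Once the client connects, the tools show up automatically and the LLM can chain them in a single conversation.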
Example prompt:
“Clip today’s episode, add Hinglish captions, apply our podcast template, and queue it for tomorrow at 9am.”
Done in one message.
What we’d love from you today:
→ Try autonomous mode. Point reap at a YouTube channel or podcast feed. Come back and tell us how many clips you would actually post.
→ If you use Claude, Cursor, or ChatGPT, add our MCP server and run the agent through a prompt. Best prompt posted in the comments today wins a year of Studio, worth $540.
→ Be brutal. The 5% of clips that are not usable matter most. Tell us what went wrong so we can make the next release better.