Viral Twin CLI
Clone viral TikToks with AI — your brand, their psychology
I got nerdsniped by a question: can AI understand WHY certain videos go viral? Viral Twin analyzes hooks, pacing, and energy curves — then generates new videos with your brand. 7 AI models, one CLI. Built it for fun, turned out pretty cool.
Hey PH!
I got nerdsniped by viral videos. Started wondering if there's a pattern to what makes some videos hit millions while similar ones flop.
Turns out: there totally is. The hook timing. The pacing curve. The energy build. The visual rhythm.
So I built a tool that:
1. Analyzes these patterns with Gemini
2. Regenerates videos with your brand using Kling/Sora/etc
The engineering rabbit hole went deep:
- Chaining 5 different AI models
- Two-phase analysis for cost optimization
- Building a terminal UI with Ink
- Graceful failure handling (AI video fails a lot)
Selling for $49 because I put months into this and don't want to give it away. Not trying to get rich — just think it's a cool piece of engineering and maybe someone else will too.
Happy to answer questions about the architecture or how any piece works.
@malakhov_da Hey there, congrats on your launch, just upvoted for you, love the idea!
I might be missing something, but I’m having trouble opening or accessing the Privacy Policy / Terms. Are they available somewhere else?
The rabbit hole
I kept watching viral TikToks and wondering: what makes THIS one hit 10M views while similar ones die at 500?
So I built a tool to find out.
Viral Twin uses Gemini 2.5 Flash to analyze videos — not just what's in them, but the structure: the hook timing, pacing curves, energy patterns, visual rhythm. The stuff that makes your thumb stop scrolling.
Then it regenerates videos using that same structure, but with your brand.
How it works
1. Discover — Scans TikTok trends using keywords + viral scoring (views/followers ratio)
2. Analyze — AI extracts the "viral DNA" (two-phase: raw forensics cached, brand styling per-clone)
3. Synthesize — Pick from 7 video models (Kling, Hailuo, Wan, Sora 2). Add narration + music.
4. Brand — Reusable Avatars, Scenes, Archetypes for consistency across videos
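The viral scoring mentioned in step 1 (views relative to follower count) can be sketched like this. This is my guess at the idea, not Viral Twin's actual code — the function and field names are made up:

```typescript
// Hypothetical sketch of a views/followers viral score (illustrative only).
interface VideoStats {
  views: number;
  followers: number;
}

// The intuition: a 1k-follower account pulling 1M views is a stronger
// viral signal than a 10M-follower account doing the same numbers.
function viralScore(stats: VideoStats): number {
  // Guard against zero-follower accounts.
  return stats.views / Math.max(stats.followers, 1);
}
```

A real scorer would likely fold in recency and engagement too, but the ratio alone already surfaces "overperforming" videos.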
Tech I'm proud of
- Chained synthesis — Each segment's last frame seeds the next. Way better visual continuity than parallel generation.
- Two-phase caching — Analyze a video once, style it for multiple brands without re-analyzing. Saves ~60% on API costs.
- Graceful degradation — Failed segments don't crash the pipeline. Costs tracked even on partial runs.
- Terminal UI — Built with Ink (React for CLI). Keyboard navigation, real-time progress, the works.
Honest limitations
- AI video still looks AI-ish sometimes
- Works best with simple, structured formats
- CLI only — if you want a web dashboard, this isn't it
Pricing
$49 for GitHub access. I spent months on this and didn't want to give it away for free. You also need your own API keys (Gemini, FAL, ElevenLabs).
If you're into AI video pipelines or just curious about the architecture — check it out.