sync-3 is a 16B-parameter AI lip sync model that doesn't just move lips: it understands performances. Built on a global understanding of a person across an entire shot, it generates all frames at once instead of stitching together isolated snippets. It handles what breaks other models - close-ups, occlusions, extreme angles, low lighting - all while preserving the emotion of the original performance across 95+ languages in full 4K. Try it out at sync.so, via API, or in Adobe Premiere.

sync-3Studio-grade AI lip sync and visual dubbing
Kalyan Mada left a comment
Hey Product Hunt! Kalyan here, head of content and marketing at sync. We've been building AI lip sync for a while now, and today we're launching sync-3, our most advanced model release ever. Here's the short version: previous lip sync models (including our own) processed video in small, isolated chunks and stitched them together. sync-3 takes a fundamentally different approach. It builds a global...


