sync-3 is a 16B parameter AI lip sync model that doesn't just move lips, it understands performances. Built on a global understanding of a person across an entire shot, it generates all frames at once instead of stitching isolated snippets. It handles what breaks every other model: close-ups, occlusions, extreme angles, low lighting - all while preserving the emotion of the original performance across 95+ languages in full 4K. Try it out at sync.so, via API, or in Adobe Premiere.
Hey Product Hunt! Kalyan here, head of content and marketing at sync.
We've been building AI lipsync for a while now, and today we're launching sync-3, our most advanced model release ever.
Here's the short version: previous lipsync models (including our own) processed video in small, isolated chunks and stitched them together. sync-3 takes a fundamentally different approach. It builds a global understanding of a person across an entire shot and generates all frames at once. The result is consistency and realism that closes the gap between real footage and dubbed footage.
A few things sync-3 handles that nothing else does well:
- Close-ups and partial faces (the full face doesn't need to be visible)
- Extreme angles including side profiles, over-the-shoulder, non-frontal
- Obstructions like hands, mics, scarves - detected and handled automatically
- Speaker style and emotion are preserved, not flattened
- Low lighting and varied lighting scenarios
It's 32x larger than our previous model (16B vs 400M parameters), supports 95+ languages, and outputs in 4K.
You can use it right now at sync.so, through our Adobe Premiere plugin, or via API.
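For anyone wanting to script the API route, here's a minimal sketch in Python. The endpoint URL, header name, field names, and payload shape below are illustrative assumptions, not sync's documented schema — check the API docs at sync.so for the real contract.

```python
import json
import urllib.request

# NOTE: the endpoint path, auth header, and field names here are
# hypothetical, for illustration only; consult sync.so's API docs
# for the actual request schema.
API_URL = "https://api.sync.so/generate"  # assumed endpoint


def build_lipsync_request(video_url: str, audio_url: str,
                          model: str = "sync-3") -> dict:
    """Assemble a generation request: one source video, one target
    audio track (e.g. a dubbed voice line), and the model to run."""
    return {
        "model": model,
        "input": [
            {"type": "video", "url": video_url},
            {"type": "audio", "url": audio_url},
        ],
    }


def submit(payload: dict, api_key: str) -> urllib.request.Request:
    """Wrap the payload in an authenticated POST request.
    (Constructed but not sent here, to keep the sketch offline.)"""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"x-api-key": api_key,
                 "Content-Type": "application/json"},
        method="POST",
    )


payload = build_lipsync_request(
    "https://example.com/shot.mp4",
    "https://example.com/dub.wav",
)
request = submit(payload, "YOUR_API_KEY")
```

Pairing one video with one audio track per request mirrors the shot-level framing above: the model works on a whole shot at once, so the natural unit of work is one clip plus one dubbed track.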
We think of this as the leap from perfecting lip sync to unlocking facial reanimation: the model doesn't just match mouths, it understands performances.
Would love for you to try it and let us know what you think. We're here all day answering questions.
Are there any issues with smaller languages? For example, Danish? Usually there’s enough training data for popular languages, but not so much for smaller ones.
Hey Product Hunt!
Super happy with the launch. sync-3 is much more powerful than any model we've released before. My favorite feature: you can upload a video where the lips stay closed the whole time and still get a flawless, highest-quality lip sync.
We want you to be able to try it, so sign up with code SYNC3LAUNCH to get a free month on the Creator plan and $25 in credits.
Can't wait to see what you create!
95+ languages in 4K is wild. Feels like this could seriously change dubbing workflows if the quality is production-ready and not just demo-level.
How are you handling edge cases where emotion and lip movement don’t quite align across languages, especially with big differences in sentence structure?
About sync-3 on Product Hunt
“Studio-grade AI lip sync and visual dubbing”
sync-3 launched on Product Hunt on April 7th, 2026 and earned 132 upvotes and 13 comments, placing #11 on the daily leaderboard.
sync-3 was featured in Movies (15k followers), Artificial Intelligence (466.4k followers) and Video (1.8k followers) on Product Hunt. Together, these topics include over 93.8k products, making this a competitive space to launch in.
Who hunted sync-3?
sync-3 was hunted by Kalyan Mada.