
Mosaic
Zapier for Video Editing
91 followers
Mosaic allows you to automate any video edit — from Rough Cuts to Motion Graphics and anything in between. Our node-based canvas is an interface to set up video editing workflows that scale. Once created, these can be reused as templates or triggered programmatically via API or event-based triggers. From any step along the way, seamlessly export your timeline back into traditional tools like Premiere Pro / Final Cut / DaVinci Resolve or to popular media asset management software.
This is the 2nd launch from Mosaic.
Mosaic
Hey ProductHunt!
I'm Adish, one of the co-founders of Mosaic (https://mosaic.so). Mosaic lets you create and run your own multimodal video editing agents in a node-based canvas. It’s different from traditional video editing tools in two ways: (1) the user interface and (2) the visual intelligence built into our agent.
While most AI video editors today are attempts at retrofitting existing timeline editors with a chat copilot, we realized that the chat UX has limitations for video: (1) the longer the video, the more time it takes to process. Users have to wait too long between chat responses. (2) Users have set workflows that they use across video projects. Especially for people who have to produce a lot of content, the chat interface is a bottleneck rather than an accelerant.
The result: a node-based canvas where you can create and run your own agentic video editing workflows. This paradigm shift redefines what it means to be a "non-linear editor" and offers a scalable content engine that allows you to define workflows that can be reused as templates or triggered programmatically via API or event-based triggers.
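To make "triggered programmatically via API" concrete, here is a minimal sketch of what kicking off a saved workflow over HTTP could look like. The endpoint URL, field names, and auth scheme below are purely illustrative assumptions, not Mosaic's documented API — see https://docs.mosaic.so/ for the real interface.

```python
# Hypothetical sketch: trigger a reusable workflow template against new
# footage. Every name here (URL, fields, header scheme) is an assumption
# for illustration only.
import json
import urllib.request


def build_trigger_request(workflow_id: str, video_url: str, api_key: str) -> urllib.request.Request:
    """Construct a POST request that would run a saved canvas template
    on a new input video (all field names are illustrative)."""
    payload = {
        "workflow_id": workflow_id,  # the reusable template to run
        "input_video": video_url,    # footage the agent should edit
    }
    return urllib.request.Request(
        "https://api.example.com/v1/workflows/run",  # placeholder URL
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# An event-based trigger (e.g. "new file landed in cloud storage") would
# build and send the same request from a webhook handler.
req = build_trigger_request("wf_rough_cut", "https://example.com/raw.mp4", "YOUR_KEY")
```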
Each node in the canvas represents a video editing operation and is configurable with natural language prompts, so you still have creative control. You can also branch to run edits in parallel, creating multiple variants from the same raw footage to A/B test different prompts, models, and workflows. In the canvas, you can see inline how your content evolves as the agent goes through each step.
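As a mental model for the branching described above, a canvas can be thought of as a small graph: each node is one prompt-driven editing operation, and edges fan out after a shared step to produce A/B variants. The node names, operations, and schema below are invented for illustration and are not Mosaic's actual workflow format.

```python
# Illustrative only — a toy graph showing how one rough cut can branch
# into two hook variants that are exported separately. Field names and
# operations are assumptions, not a real Mosaic schema.
workflow = {
    "nodes": [
        {"id": "ingest",   "op": "import",    "source": "raw_footage.mp4"},
        {"id": "cut",      "op": "rough_cut", "prompt": "Remove bad takes and long silences"},
        {"id": "hook_a",   "op": "trim",      "prompt": "Open with the strongest one-liner"},
        {"id": "hook_b",   "op": "trim",      "prompt": "Open with the product demo"},
        {"id": "export_a", "op": "export",    "format": "xml"},
        {"id": "export_b", "op": "export",    "format": "xml"},
    ],
    # Edges branch after "cut": the same rough cut feeds both hook variants.
    "edges": [
        ("ingest", "cut"),
        ("cut", "hook_a"), ("cut", "hook_b"),
        ("hook_a", "export_a"), ("hook_b", "export_b"),
    ],
}

# Everything downstream of the branch point runs in parallel on the
# same source material — that is what makes A/B testing cheap.
branches = [dst for src, dst in workflow["edges"] if src == "cut"]
```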
The idea is that the canvas will run your video editing on autopilot and get you 80-90% of the way there. Then you can adjust and modify at a more granular level in an inline timeline editor. We also support exporting your timeline state as XML back out to traditional editing tools like DaVinci Resolve, Adobe Premiere Pro, and Final Cut Pro, or to popular media asset management software.
Our use of multimodal AI to build visual understanding and intelligence is a core platform feature. This gives our system a deep understanding of video concepts, emotions, actions, spoken word, light levels, and shot types. We’re supplementing this with our own computer vision + video processing pipeline, which includes techniques like saliency analysis, audio analysis, and determining objects of significance—all to help guide the best edit.
These are things that we as human editors internalize so deeply that we may not think twice about them, but reverse-engineering the process to build it into an AI agent has been an interesting challenge.
Use cases for editing include:
1. Removing bad takes or creating script-based cuts from videos / talking-heads
2. Repurposing longer-form videos into clips, shorts, and reels (e.g. podcasts, webinars, interviews)
3. Creating sizzle reels or montages from one or many input videos
4. Creating assembly edits and rough cuts from one or many input videos
5. A/B testing different hook and CTA permutations and variants
6. Optimizing content for various social media platforms (reframing, captions, etc.)
7. Dubbing content with voice cloning and lip syncing
8. Generating *editable* motion graphic animations or cinematic captions
We also support generative workflows such as:
1. Creating new AI Avatar / UGC content
2. Creating new cartoon / animated content
3. Adding contextual AI-generated B-roll to existing content
4. Modifying existing video footage (e.g. censoring content, changing lighting, applying VFX)
We're giving everyone in the ProductHunt community a 20% discount if you sign up during our launch week! You can try it today at https://edit.mosaic.so and our API and educational docs are at https://docs.mosaic.so/. We’d love to hear your feedback!
The way we create videos has totally changed. Creating 2 different variations of a video (which also required time for moodboarding and different script logic) used to take days, and now one single tool can manage it in the blink of an eye... that's crazy.
Mosaic
@busmark_w_nika The inherent branching nature of the canvas allows you to test multiple cutdowns / script scenarios / prompts simultaneously. The interface + the underlying AI technology allow you to accelerate your video editing from hours to literally seconds.
The node-based canvas is the right interface for this. Chat-based video editing works for simple one-shot tasks but falls apart the moment you have a repeatable workflow with multiple steps, branching variants and brand constraints you need to apply consistently across projects.
The A/B testing of hook and CTA permutations from the same raw footage is the use case that jumps out to me. That alone could change how content teams approach high-volume social production.
As a motion designer and Creative Director who works with brand video regularly, the "80-90% of the way there, then you refine" model is how I'd actually want to use this. The XML export back to Premiere, Final Cut, and DaVinci is also what makes this feel safe to adopt rather than a walled garden. Curious how the motion graphics node handles brand system constraints: can you feed it a style guide, or does it work purely from a prompt? Congrats on the launch!
@adishj This is lovely! Can you also include color grading styles and preferences?
As someone who has edited videos for hours and hours, the value of this product cannot be overstated! Congrats @adishj, I would love to add this to my video toolkit.