
Wan
Video & Image Generation Model from Alibaba Tongyi Lab
375 followers
Developed by Alibaba, the Wan series is a family of advanced foundational models engineered for high-fidelity visual generation.
This is the fourth launch from Wan.
Wan 2.7-Image
Launching today
Wan 2.7-Image by Alibaba brings unprecedented control to AI generation. It features interactive pixel-level editing (move, resize, edit text) and generates up to 12 highly consistent sequential images from a single prompt. Available via Web and API.
Flowtica Scribe
Hi everyone!
Wan 2.7 is very control-focused.
The new interactive editing lets you point, select, and modify specific regions — like moving an object or fixing a typo — without disturbing the rest of the image.
Generating up to 12 consistent images in one pass is also a major unlock for anyone building storyboards or sequential assets, and it handles long-form text rendering natively across 12 languages.
It is available now via the web app and API.
The pixel-level editing is what really stands out here.
A lot of image tools still feel like “prompt and hope,” so having direct control over elements could change how people iterate.
How granular does the editing actually get when working on complex scenes?
The 12 consistent sequential images from a single prompt is the interesting part. How does "consistency" actually hold across all 12 — does it lock character appearance, lighting, and style simultaneously, or is consistency more about one of those dimensions at a time? Because storyboarding breaks down fast if a character's face drifts between panels 3 and 9.