Launching today

ChatGPT Images 2.0
First image model with thinking capabilities
122 followers
AI image generation with a thinking layer. Create, refine, and validate visuals in one flow. Supports flexible aspect ratios and multiple outputs per prompt, making it faster to go from idea to production-ready assets.
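The "flexible aspect ratios and multiple outputs per prompt" workflow can be sketched as request parameters for the OpenAI Images API. This is a minimal sketch, not an official example: the model name `"chatgpt-image-2"` and the exact size strings are assumptions to verify against the API docs; the request shape follows the existing `images.generate` endpoint (`model`, `prompt`, `size`, `n`).

```python
# Hedged sketch: build one image-generation request per output format,
# each asking for multiple candidates (n) from the same prompt.

# Named formats mapped to size strings (values are illustrative).
FORMATS = {
    "wide_banner": "1536x1024",
    "square": "1024x1024",
    "vertical_poster": "1024x1536",
}

def build_requests(prompt, formats, n=3):
    """Return one request-kwargs dict per requested format."""
    return [
        {
            "model": "chatgpt-image-2",  # assumed name; check the docs
            "prompt": prompt,
            "size": FORMATS[f],          # flexible aspect ratio per asset
            "n": n,                      # multiple outputs per prompt
        }
        for f in formats
    ]

reqs = build_requests("launch-day hero art", ["wide_banner", "vertical_poster"])
# Each dict could then be passed to client.images.generate(**req).
```

Building the kwargs separately from the API call makes it easy to fan the same prompt out across a whole asset set (banner, square, poster) in one loop.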

Excited to hunt ChatGPT Images 2.0 by OpenAI today.
This is an image model that doesn't just generate visuals; it thinks through them.
Instead of prompting and hoping for the right output, the model can reason, iterate, and validate before delivering the final result.
This adds up to:
• Images that align better with intent, not just prompts
• Multiple distinct outputs from a single idea
• Real-world formats (from wide banners to vertical posters)
The biggest shift here is the thinking layer. This moves image generation from a creative shortcut into a true workflow tool.
If you’ve been using AI images but still fixing outputs manually afterward, this is definitely worth a look.
@byalexai How does it handle brand consistency across a content calendar? Like if I'm generating LinkedIn carousels + PH thumbnails + IG Reels for a personal branding campaign, can it reference a style guide (colors, fonts) and maintain that voice across 10+ assets from one prompt?
@sama Been using image tools for a while and the biggest pain is still the basics not holding up.
Tried a simple case last week: a set of LinkedIn-style posts for the same brand. Same prompt, same idea. Ended up with different fonts in every image, spacing all over the place, text slightly warped, and layouts shifting for no reason. Another one was a landing-page mock: buttons looked fine in one image, completely off in the next, alignment broken and icons distorted.
If this actually solves that kind of stuff, then that is the real value. Can it generate 8 to 10 assets that actually look like they belong to the same brand without fixing everything manually after? Or is it still generate, fix, regenerate until something usable comes out?
Would be good to see real outputs for these cases, not just one clean example.
Excited to try this out - been relying on Google's models for a while but it would be nice to spend all my money in one place again 👀 👀
Just tried this out. Truly beautiful results