Adject 2.0 - Create hyperrealistic product visuals with AI
Adject 2.0 is an agentic product studio where brands can create, edit, and iterate product visuals inside an infinite creative workflow.
Instead of isolated generations, products, models, edits, videos, and assets stay connected inside projects and evolve continuously over time.
Upload once, generate in context, iterate visually, and build complete campaigns without fragmented tools or repetitive prompting.


Replies
Mailwarm
Interesting shift from one-off generations to a continuous creative workflow.
Keeping assets, edits, and iterations connected inside projects solves a real pain for teams. This could meaningfully streamline how campaigns are built.
How do you handle consistency across iterations (lighting, textures, brand identity), and how are collaborative workflows managed?
Adject
@thamibenjelloun Really appreciate your comment 🙌
That’s exactly the problem we wanted to solve with 2.0: creative work constantly loses context between generations, edits, and tools.
For consistency, we’re approaching it through persistent project context, reusable assets, and canvas-aware generation, so the system understands what already exists instead of generating in isolation every time.
Collaborative workflows are also a big direction for us as we keep expanding the project and workspace layer.
Thanks again for checking it out ❤️
Adject
@thamibenjelloun Yeah, exactly. That's why we switched to canvas style. It's like if Figma and Cursor had a baby, but for brands.
Adject
Tried a lot of AI design tools before, but most of them feel like one-time prompt generators tbh. You make something nice, then the whole workflow resets.
Adject 2.0 feels much smoother. Being able to keep everything in one infinite workspace and continue iterating without constantly re-uploading or restarting honestly makes a huge difference.
One of the few AI creative tools that actually feels built for real workflows.
Would really love to hear your thoughts and feedback, especially from people interested in the future of creative AI tools.
Adject
Hello Product Hunt community! 😸
Bringing Adject v2 to life has been an amazing journey. As the team building the AI infrastructure, our biggest challenge (and most fun task) was setting up the new chatting logic. Getting the agent architectures to communicate perfectly to provide a smooth user experience took a lot of late nights, but we are really proud of the result.
Please give it a spin and let us know where we can improve. I'm looking forward to reading your feedback and answering your questions!
This is the one worth paying attention to today, in my opinion. The generate/download/repeat loop in most AI image tools kills any sense of creative momentum, so keeping products/models/edits all connected in one place matches how creative teams actually think about campaigns. Wonder how you guys handle products with complex textures like fabric or reflective packaging?
Adject
@artstavenka1 Really appreciate your comment, Art!
And yes, fabrics, reflective packaging, detailed textures etc. are things we focused on heavily while building the system. We’re getting very high-quality and consistent outputs even on difficult product types.
A huge advantage of keeping everything connected inside the same workspace is that products, edits, and generations preserve context much better instead of restarting from scratch every time.
Thanks a lot for checking it out ❤️
Adject
I'm one of the people on the team behind Adject, and we spent a lot of time wrestling with the technical side on this release 😄
Going from 1.5 to 2.0, we didn't just add a feature; we changed the mental model. In v1, every generation started from zero. Prompt, download, start over. It worked, but every campaign meant rebuilding the context all over again.
In 2.0, products, models, edits, and assets all live connected to each other. The agent doesn't work inside a single box anymore; it sees the whole workspace. We had to seriously rethink how state and context move through the system.
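To make the mental-model shift concrete, here's a toy sketch (not Adject's actual code or API; all names are hypothetical): in a v1-style tool only the prompt reaches the model, while in a workspace-style design every prior asset rides along with each generation request.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    kind: str  # e.g. "product", "model", "edit"
    name: str

@dataclass
class Workspace:
    """Toy illustration: assets persist, and every generation sees all prior context."""
    assets: list[Asset] = field(default_factory=list)

    def add(self, kind: str, name: str) -> Asset:
        asset = Asset(kind, name)
        self.assets.append(asset)
        return asset

    def generate(self, prompt: str) -> dict:
        # A v1-style tool would send only `prompt`; here the whole
        # accumulated workspace travels with every request.
        return {
            "prompt": prompt,
            "context": [(a.kind, a.name) for a in self.assets],
        }

ws = Workspace()
ws.add("product", "sneaker.png")
ws.add("model", "studio-model")
req = ws.generate("place the sneaker on a marble pedestal")
print(req["context"])  # both earlier assets ride along with the new prompt
```

The point of the sketch is only the shape of the request: state lives in the workspace, not in any single generation call.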
Big thanks to everyone on the team, it was great to ship something like this together. Happy to chat with anyone who has questions about the architecture or the agentic workflow side.
Love the upload once idea. It really solves the annoying problem of doing the same thing again and again.
Adject
@zijian Yep, definitely. Why would someone upload the same product again and again? It is unproductive. That's why you upload your products, then use them limitlessly whenever you want.
This actually looks super useful for ecommerce 😭 Upload once and keep iterating different concepts without reprompting every time. Feels way more practical for brands/content teams than most AI image tools.
Adject
@le_ng_c_dan_nhi Hahaha yes 😂
We talked with brands & content teams while building this and almost everyone had the same frustration with existing workflows.
Really appreciate the support ❤️
How does it handle brand guidelines? Can you lock certain elements so the AI doesn't drift from them? Congrats on the launch!
Adject
@jared_salois It handles brand consistency really well. Products and assets inside the workspace act as persistent references when you use them, so the AI keeps outputs aligned instead of drifting between generations.
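One way to picture the "locked elements" idea from the question above (purely a hypothetical sketch, not Adject's real request format): reference assets flagged as locked travel with every generation as hard constraints, while the prompt only steers the unlocked parts.

```python
# Hypothetical request builder: `references` are reusable workspace assets,
# and anything in `locked` is passed as a hard constraint so the model
# cannot drift from it between generations.
def build_request(prompt: str, references: list[str], locked: set[str]) -> dict:
    return {
        "prompt": prompt,
        "references": references,
        "constraints": [r for r in references if r in locked],
    }

req = build_request(
    "autumn campaign hero shot",
    references=["logo.svg", "bottle.png", "brand-palette"],
    locked={"logo.svg", "brand-palette"},
)
# req["constraints"] now pins the logo and palette on every generation
```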
The interesting thing about hyperrealistic AI product visuals isn't just the static output — it's that the same generation stack pushes "AI-as-content-source" past product photography into narrative-shaped applications: walking tours, location-based stories, scene-building. We've been playing with this on StoryRoute (interactive travel narratives that adapt to where you are in a city), and the bottleneck has shifted from generation quality to *grounding* — making sure the AI doesn't invent a building that isn't actually on that corner. Curious how Adject handles the grounding problem for product context — is there a reference-image pipeline that prevents drift from the real product geometry?