I tried Adject today and was quite impressed.
I uploaded a simple product photo I had, and it produced professional-looking images with different backgrounds. What surprised me most was how natural the logo and shadows looked—AI usually fails in these areas.
It also has a video feature that creates short 5-second videos. It's ideal for Instagram stories, but there aren't many template options, so you have to experiment a bit yourself.
It's very easy to use; even someone like me who doesn't understand design could handle it comfortably. Since it works on a project basis, everything stays organized.
The downsides: The template library is currently empty, and you can't process 50 products at once. But it does the job well—it's fast, cheap, and effective.
I will definitely recommend it to my friends who do e-commerce in our entrepreneurship programs. It makes much more sense than renting a studio and paying a photographer.
Adject
This is the one worth paying attention to today, in my opinion. The generate/download/repeat loop in most AI image tools kills any sense of creative momentum, so keeping products, models, and edits connected in one place matches how creative teams actually think about campaigns. Wondering how you guys handle products with complex textures like fabric or reflective packaging?
Adject
@artstavenka1 Really appreciate your comment, Art!
And yes, fabrics, reflective packaging, detailed textures etc. are things we focused on heavily while building the system. We’re getting very high-quality and consistent outputs even on difficult product types.
A huge advantage of keeping everything connected inside the same workspace is that products, edits, and generations preserve context much better, instead of restarting from scratch every time.
Thanks a lot for checking it out ❤️
How does it handle brand guidelines? Can you lock certain elements so the AI doesn't drift from them? Congrats on the launch!
Adject
@jared_salois It handles brand consistency really well. Products and assets inside the workspace act as persistent references when you use them, so the AI keeps outputs aligned instead of drifting between generations.
Mailwarm
Interesting shift from one-off generations to a continuous creative workflow.
Keeping assets, edits, and iterations connected inside projects solves a real pain for teams. This could meaningfully streamline how campaigns are built.
How do you handle consistency across iterations (lighting, textures, brand identity), and how are collaborative workflows managed?
Adject
@thamibenjelloun Really appreciate your comment 🙌
That’s exactly the problem we wanted to solve with 2.0: creative work constantly loses context between generations, edits, and tools.
For consistency, we’re approaching it through persistent project context, reusable assets, and canvas-aware generation, so the system understands what already exists instead of generating in isolation every time.
Collaborative workflows are also a big direction for us as we keep expanding the project and workspace layer.
Thanks again for checking it out ❤️
Adject
@thamibenjelloun Yeah, exactly. That's why we switched to a canvas-style interface. It's like if Figma and Cursor had a baby, but for brands.
Love the upload once idea. It really solves the annoying problem of doing the same thing again and again.
Adject
@zijian Yep, definitely. Why would someone upload the same product again and again? It's unproductive. That's why you upload your products once, then reuse them limitlessly whenever you want.
This actually looks super useful for ecommerce 😭 Upload once and keep iterating different concepts without reprompting every time. Feels way more practical for brands/content teams than most AI image tools.
Adject
@le_ng_c_dan_nhi Hahaha yes 😂
We talked with brands & content teams while building this and almost everyone had the same frustration with existing workflows.
Really appreciate the support ❤️
Adject
I'm one of the people on the team behind Adject, and we spent a lot of time wrestling with the technical side on this release 😄
Going from 1.5 to 2.0, we didn't really add a feature, we changed the mental model. In v1, every generation started from zero. Prompt, download, start over. It worked, but every campaign meant rebuilding the context all over again.
In 2.0, products, models, edits, and assets all live connected to each other. The agent doesn't work inside a single box anymore, it sees the whole workspace. We had to seriously rethink how state and context move through the system.
Big thanks to everyone on the team, it was great to ship something like this together. Happy to chat with anyone who has questions about the architecture or the agentic workflow side.
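Purely as an illustration (the actual Adject architecture isn't public, and every name here is hypothetical), the "the agent sees the whole workspace" idea could be sketched as a workspace that accumulates assets and attaches them as context to every generation request, instead of each prompt starting from zero:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a workspace that accumulates products, models,
# and edits, so each generation request carries the full project
# context rather than starting from an empty prompt.

@dataclass
class Asset:
    kind: str   # e.g. "product", "model", "edit"
    name: str

@dataclass
class Workspace:
    assets: list[Asset] = field(default_factory=list)

    def add(self, kind: str, name: str) -> Asset:
        asset = Asset(kind, name)
        self.assets.append(asset)
        return asset

    def build_context(self, prompt: str) -> dict:
        # Every generation sees all prior assets, not just the prompt.
        return {
            "prompt": prompt,
            "references": [f"{a.kind}:{a.name}" for a in self.assets],
        }

ws = Workspace()
ws.add("product", "sneaker_photo")
ws.add("edit", "white_background")
ctx = ws.build_context("lifestyle shot on a running track")
print(ctx["references"])  # ['product:sneaker_photo', 'edit:white_background']
```

In this sketch, the second generation automatically "remembers" the first edit; the v1 model described above would be the equivalent of constructing a fresh `Workspace` for every prompt.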