Aleksandar Blazhev

Intent - Describe a feature and AI agents build, verify, and ship it

Intent is a developer workspace built for agent-driven development. Define a feature as a spec, and a team of agents coordinates the work (from implementation to verification) inside an isolated workspace with built-in code, terminal, and git.

Maxime Beaudoin

I've been using Augment Code on a large SaaS application with a strict domain-driven architecture, multi-tenancy, and dozens of interconnected domains. Most AI coding tools struggle with this kind of complexity. Intent makes it simple to launch coordinated agents to develop new features in parallel!

Jason Simard

@maxime_beaudoin thanks a lot for this feedback! We have more to show soon :)

Deej Tulleken

When the flagon of GPT-5.4 was flowing freely, I gave Intent a spin and was deeply impressed. However, in some ways my view has reversed. I ran several projects in parallel with a lot of attention and, uh...intent, and I found the agent roles performed probably better than any other multi-agent harness, skill framework, orchestrator, or whatever other wrapper for the same basic proposition is being peddled around right now. That's not gut feel. I maintain a list of tools in this space (GUI-based agent orchestration) that— excluding the really held-together-by-shoestring, vibe-coded efforts— sits at 70+ examples at the moment.

For reasons of I'm Not Wealthy, I have recently upgraded to the Codex Pro plan, which means I'm not switching between Sonnet/Opus and whatever Chinese model I decide to hammer for cost:rates balance the moment Sonnet/Opus runs out or goes down. What that means is I'm back on the model I was using exclusively with Intent, but I'm not using it with Intent and I have some thoughts.

1) Intent's agents are reliable and persistent. Give them the task and go do something. It's fine. If it's a big task, they will persevere. If you use Antigravity, you will absolutely, without a shadow of a doubt, go through the inconvenience of setting up the least intuitive yolo mode in any piece of software right now, because you will be assaulted with permission prompts every 27 seconds if you do not. Intent, once you set your desired permissions, can just get on with what it needs to. If I approved a big enough plan, and was explicit about how many waves to run through, it honestly felt more like using Hermes to me in some ways than most coding agents (without setting up a Ralph Loop or similar).

2) When given UI-agnostic prompting, each of the 6 projects I ran in parallel, even on fundamentally different frameworks, delivered a consistently styled frontend, and it was ugly as all hell and its layout and user-facing content were not made for human beings. That's a prompting issue, obviously, but something to be aware of, since I don't consider this to be a model issue (other harnesses have been knocking UI out of the park for me with GPT-5.4). I'd imagine Opus would probably do a lot better, but I would be tempted to run the same prompts in Intent/CC/OC/whatever to check this. The layouts were so bad they honestly created a lot of extra work for me.

3) There's whispers on the wind/subreddits and a growing body of literature that posits giving agents human roles, layering certain language of that nature into skill.md files or just your regular prompts, and generally anthropomorphising agents has a detrimental effect on their effectiveness in a way that did not used to be the case. The models are now pretty damn good and don't need that stuff.

I've seen the internal prompts from the Claude Code leak (Anthropic is life-coaching their own models), so what do I know, but, dear God, Intent was slow. Not in, like, a tok/s manner. The agents are available to monitor and interrogate fully. Again, Intent has one of the best experiences for keeping this as [in]visible as is right for you, thanks to a great interface. There is a constant array of spinners and streaming text showing all The Activity, but going through the app's internal chain of incredibly well-constructed agent governance is like amping up Qwen's thinking mode to 11 (only a slight exaggeration, since I've received a 6-minute thinking process from Qwen-3.5 before it delivered an answer to the prompt 'Hi' on a model that I run at over 100 tok/s).

I'm currently getting much more streamlined execution from Codex with no agent frameworks, no Oh-My-Anythings, Superpowers or personal stable of agents. This is what makes me much more on the fence about recommending Intent to essentially everyone, as I was previously. However, I would hazard that (and this sits nicely with where the product is likely being aimed) this virtuous agentic cycle and internal QA-ing before reaching the human in the loop will sit nicely for enterprise customers. It feels like there is more demonstrable diligence happening in front of your eyes. If your employer is running Intent, you also aren't worrying about the cost of that diligence, since running multiple agents is expensive enough for private individuals without also worrying if they're being too 'conscientious' about their work.

I know Augment is using Opus 4.7 as the default model now, so this isn't my view on how Intent guides a particular model. It's a warning that regular users might want to consider whether multi-agent, parallel workflows are actually the right move for them, regardless of cost.

4) Yes, I'm still bulleting here. The prioritisation and delegation of agents across different tasks is superb. Every tool like this is leveraging worktrees now, but Intent is the only one where I never had to go in and examine merge conflicts, feature collisions, and the like.

5) There are many nice touches worth exploring, so I'd encourage you to give Intent a try, even if you think running a bunch of agents isn't for you. You really don't have to think about it in Intent. The living spec is so nicely done. If you've experimented with a bunch of context/memory systems like I have, this is the most sophisticated version of the simplest delivery for this challenge (basically a markdown file updating itself as it goes along) due to its consistency and UI.

Wow, I just came here to write "Intent is really good". What happened?

Jason Simard

@deejtulleken Thanks for such a detailed and honest write-up! This is extremely helpful.

Very short version of how we see it:

  1. Reliability & permissions
    What you liked here is exactly by design: explicit plan + approvals up front, so agents can run long tasks without constantly interrupting you. That’s the “enterprise-grade diligence” we’re aiming for.

  2. Ugly/inefficient UIs
    Have you tried the "UI Designer" agent yet? If not, it’s worth a shot; we’d love to hear if you get better results with it.

  3. Human-like roles, slowness, and overhead
    We agree that today’s models often don’t need as much anthropomorphic ceremony, and that layered governance can add latency. Intent intentionally leans into internal QA and multi-agent coordination to get better results. We’re continuously working to improve speed, but never at the expense of the quality bar we’re aiming for.

  4. Delegation & worktrees
    We’re really glad you called this out. Robust task delegation and conflict-free worktrees are exactly where we’ve invested a lot, so it’s validating to hear that this part “just worked” for you.

  5. Living spec & context
    The living spec is meant to be that simple, inspectable “single source of truth” you described. Your reaction here is very aligned with what we’re doubling down on.

One last note: if you want to save time and tokens on smaller tasks, you can skip the orchestrator and use Developer Mode directly, which is essentially the raw access to providers. You can still spawn more agents, but you’ll avoid the orchestration overhead.

Deej Tulleken

@jaysym I'm definitely planning to revisit Intent. The UI Designer agent, I specifically noticed, seemed to rarely be self-invoked in the workflows I was running. Sometimes this wasn't surprising, sometimes it was. Again, this may be down to my planning and prompting. I am still fairly unimpressed with vibe-coded frontends anywhere I see them and I've experimented with a bunch of different solutions in this space, going back to Kombai and the Figma MCP, and now to Pencil, Paper, V0, Variant, Stitch, etc. The 'look' was not as problematic with Intent as the layout structures I was seeing... maybe something to do with not getting that UI Designer pass when it should have.

I threw £100ish into my account for the rest of the month once the free GPT ended, and Intent felt a bit budget-hungry for me to use it any more at that time. Certainly not more than other tools I've used with frontier models running multiple agents, though. Basically I felt like it warranted more spend to get what I wanted from it, but I'm juggling budget for other things I want to try too, and am sadly not that guy with 12 Claude Max accounts, being between jobs and having a wife who really doesn't want me to talk about AI any more than I already do :)

I saw Developer Mode, but as someone who already switched my team from Cursor to Augment a while back, I was specifically interested in testing Intent's orchestration capabilities at the time. On that topic, I was pleased that when I ran out of budget you make it easy to BYOM. The execution flow of Intent is such that I didn't actually want to move those projects to another tool at the time. Setting up a bunch of agents with a range of different models is easy; however, there were a few hiccups that ultimately made me abandon Intent for the time being. Newly-created agents didn't always respect a model override set for a specific agent. I speculated that this was down to those models not playing as well in the tool as your first-class supported models.

Again, this feedback is a few weeks old, which is ancient when discussing AI toolsets, but one thing I found was that the default orchestrator agent works extremely well with frontier models as well as upper-tier open weight models. What doesn't work well in either instance is if you try to have one orchestrator pick up where another one left off because you want to switch models, which you can't (or at least couldn't) do once the agent has been initiated.

Nika

This is a complex thing. Who is the main target audience?

Jason Simard

@busmark_w_nika The main target audience for Intent is senior, power-user developers and companies with very large codebases and engineering teams who are actively using or motivated to adopt multiple AI coding agents, and who feel the pain of juggling terminals, IDEs, repos, and prompts to ship production code.

In practice, this skews toward ICs and tech leads at high-caliber software companies who want a serious, orchestrated agent workspace.

Nikita Savchenko

Congrats on the launch! Wondering what’s its integration capabilities with many common SDLC software, because building in isolation is great until you need to do some real work.

Jason Simard

@nikitaeverywhere Intent doesn’t try to replace your SDLC stack, it plugs into it and gives you a unified workspace on top:

  • Git-native workspaces: Every Intent project runs in an isolated git worktree with full git workflow support (branches, commits, PRs, merge flow). You go from prompt → commit → PR → merged without leaving Intent.

  • MCP integrations: You can connect all your MCPs in these workspaces just like you would in an IDE.

  • Bring-your-own agents & tools: Intent works with multiple agent providers (Claude Code, Codex, OpenCode, Augment’s own agents), so it can sit alongside existing IDEs and CI/CD instead of locking you in.

  • Workspace, not a toy sandbox: Because it’s built around git and a real terminal, the code, tests, and scripts agents run in Intent are the same ones your SDLC uses; there's no “demo-only” environment.

Net: Intent is designed for “real work” in production repos, integrating with your existing git/PR-centric SDLC rather than a sealed-off playground.
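The git-native workspace model described above can be sketched with plain git worktrees. This is an illustrative mechanic only, not Intent's implementation; the task names, paths, and identity config are made up:

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command (with a throwaway identity) and return its stdout."""
    return subprocess.run(
        ["git", "-c", "user.email=agent@example.com", "-c", "user.name=agent",
         *args],
        cwd=cwd, check=True, capture_output=True, text=True,
    ).stdout

# A throwaway repo standing in for your project.
repo = tempfile.mkdtemp(prefix="repo-")
git("init", "-b", "main", cwd=repo)
git("commit", "--allow-empty", "-m", "initial commit", cwd=repo)

# One isolated worktree and branch per agent task, so parallel agents
# never touch the same checkout. Task names here are hypothetical.
worktrees = tempfile.mkdtemp(prefix="worktrees-")
for task in ["feature-auth", "feature-billing"]:
    path = os.path.join(worktrees, task)
    git("worktree", "add", "-b", task, path, "main", cwd=repo)

print(git("worktree", "list", cwd=repo))
```

Each worktree is a full checkout sharing one object store, which is why branch/commit/PR flows work normally from inside it.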

Aron

So does this work with multiple repos? Like old legacy code or needs a monorepo to work well?

Jason Simard

@0xaron It works with one repo per workspace today; there's no true multi-repo/monorepo workspace support yet.

You don’t need a monorepo, but each workspace has to point at a single git repo; legacy code is fine as long as it’s in that repo.

We are working to make that easier in the future.

Pranav Prakash

Any OSS repos of work done by Intent? Or any PRs on existing OSS repos we can refer to? What kind of token usage can we expect compared to a similar setup in Cursor/CC, or compared to a human orchestrator?

Jason Simard

@pranavprakash Thanks for your interest. I’ll try to answer as best I can, but it’s very hard to be precise about token usage because it really depends on the size of your project, the size of your task, your prompt, rules, MCP integrations, and many other factors that influence the overall cost. To help tackle this question, I created a thread in our subreddit explaining how people can save tokens using Intent; I hope you enjoy the read: https://www.reddit.com/r/AugmentCodeAI/comments/1r6ckev/intent_cost_tips_and_tricks/

For OSS repositories built with Intent, this is a really good question. Since the product is still pretty new and all usage is encrypted, we can’t know with certainty who has done what with it. Also, to respect privacy, we don’t disclose which projects are using it. What I can do, however, is show you a personal project I’m working on that was 100% made with Intent: https://github.com/GetWiredDev/getwired

If you have any more questions, feel free to add me so we can have a good talk together.

Mate Ajduković

Congrats on the launch, looks great! As an Augment Code user spending most of my time in Auggie CLI, I just wanted to check: is there a timeline for when this will be available to Linux users, or are there any plans for it?

Jason Simard

@mate_ajdukovic Unfortunately, we don’t yet have a timeline for the Windows and Linux versions.

Daniel Beuter

This looks very promising! Unfortunately, I can't test it on Windows yet.

I've been working with Augment in a WebStorm environment for over a year and I'm very happy with it.

However, I have two concerns regarding this next step:

a) How high will the token consumption be? I'm already using up my developer token allowance manually quite a bit. I usually have to top it up several times a month. If I imagine multiple agents working in parallel, orchestrated by even more agents, my token pool will be empty in just a few hours...?

b) I already have to closely monitor/review the activity of my one integrated agent and guide it in the right direction. Here, too, I see the risk that my incomplete/liquid spec will lead to absurdly high token consumption.

So: I think the idea is great, and I also think it will work very well.

But: Is it still affordable?

Jason Simard

@daniel_beuter Thanks for this question. You’re right that using coordinators and subagents can introduce some overhead at first glance and use more tokens. That’s why I prepared a detailed post on how to save as many tokens as possible using Intent! It should really help you understand what kind of workflow you might want to use. Here is the post: https://www.reddit.com/r/AugmentCodeAI/comments/1r6ckev/intent_cost_tips_and_tricks/

Another important point is that when you use coordinators + subagents + a verifier, your first prompt will indeed cost more, but it can save you time and reduce the need for reprompts (so overall you’re saving both time and tokens by not having to ask again). Our verifier agents are there to make sure everything is handled correctly on the first try. Nothing is perfect, but we’ve seen better results with this approach internally.

Sounak Bhattacharya

"Team of agents" is the interesting part here — what does coordination actually look like between them? Like if one agent writes the implementation and another is verifying it, what happens when the verifier catches something that requires a non-trivial architectural change? Does it loop back automatically or does that surface to the human?

Jason Simard

@sounak_bhattacharya In Intent, the agents all work off a single shared spec inside a workspace. The Verifier’s findings feed back into that loop: for normal issues it just triggers more work by the Implementers. When it discovers something that implies a spec/architecture change, the Coordinator proposes an updated spec and that’s surfaced to the human for approval before the agents run again.
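That loop can be sketched in a few lines. Every name below (the verdict constants, `coordinate`, `approve_spec`) is illustrative, not Intent's actual API:

```python
# Hypothetical verifier verdicts; names are made up for illustration.
PASS, REWORK, SPEC_CHANGE = "pass", "rework", "spec_change"

def coordinate(spec, implement, verify, approve_spec, max_rounds=5):
    """Implement -> verify loop with a human gate on spec changes.

    Normal (REWORK) findings loop straight back to the implementer;
    a SPEC_CHANGE finding surfaces the proposed spec to the human
    (approve_spec) before the agents run again.
    """
    artifact = implement(spec)
    for _ in range(max_rounds):
        verdict, detail = verify(spec, artifact)
        if verdict == PASS:
            return spec, artifact
        if verdict == SPEC_CHANGE:
            if not approve_spec(detail):
                raise RuntimeError("human rejected the spec change")
            spec = detail
        artifact = implement(spec)  # rerun on rework or on the revised spec
    raise RuntimeError("verification did not converge")

# Toy run: the verifier demands one spec change, then passes.
calls = {"n": 0}
def verify(spec, artifact):
    calls["n"] += 1
    return (SPEC_CHANGE, spec + " v2") if calls["n"] == 1 else (PASS, None)

spec, artifact = coordinate(
    "auth spec", lambda s: f"code for {s}", verify, approve_spec=lambda s: True
)
```

The design point this sketch captures: only spec-level findings cross the human boundary; everything else stays inside the agent loop.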

Griffin Payne

I've been using Intent since the launch and it's fantastic at large-scale objectives. I still use Augment's native VSCode plugin for odds and ends, but if I have a big task that requires changes across dozens of files and context from multiple repos, Intent is my weapon of choice. Augment's team is insanely responsive if there's ever an issue, and you will see updates pop up hours after bringing up any concerns. I've been with Augment from the start, and don't foresee anything surpassing its capabilities anytime soon.

Jason Simard

@_mrpayne_ Thank you very much for the feedback, we really appreciate it. More to come!