
Intent

Describe a feature and AI agents build, verify, and ship it

Productivity
Developer Tools
Artificial Intelligence

Hunted by Aleksandar Blazhev

Intent is a developer workspace built for agent-driven development. Define a feature as a spec, and a team of agents coordinates the work (from implementation to verification) inside an isolated workspace with built-in code, terminal, and git.

Top comment

Excited to hunt Intent by Augment Code today.

Intent is a developer workspace where agents coordinate and execute work end-to-end.

This isn’t a coding assistant. It’s an agent-driven development system.

Instead of prompting one agent at a time, you define a spec and a coordinator breaks it into tasks, delegating to specialists (implement, verify, debug, review) running in parallel.

This adds up to:
• Specs that stay alive as work progresses
• Built-in verification loops, not just code generation
• A full workspace (editor, terminal, git)
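The coordination pattern described above — a coordinator splitting a spec into tasks and routing each one through specialist roles in parallel — can be sketched roughly like this. This is a minimal illustration under stated assumptions, not Intent's actual implementation; every name here (`SPECIALISTS`, `run_specialist`, `coordinate`) is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical role names, taken from the launch description above.
SPECIALISTS = ("implement", "verify", "debug", "review")

def run_specialist(role: str, work: str) -> str:
    """Stand-in for handing a unit of work to an agent with a given role."""
    return f"{role}:{work}:done"

def run_task(task: str) -> str:
    """Each task flows through the specialist roles in order."""
    work = task
    for role in SPECIALISTS:
        work = run_specialist(role, work)
    return work

def coordinate(spec: str) -> list[str]:
    """The coordinator splits the spec into tasks and fans them out in parallel."""
    tasks = [line.strip() for line in spec.splitlines() if line.strip()]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_task, tasks))
```

The key design point the product copy emphasises is that verification is a first-class role in the pipeline, not an afterthought bolted onto code generation.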

If you’ve been exploring agentic dev but didn’t want to build the orchestration layer yourself, this is definitely worth a look.

Comment highlights

When the flagon of GPT-5.4 was flowing freely, I gave Intent a spin and was deeply impressed. However, in some ways my view has reversed. I ran several projects in parallel with a lot of attention and, uh...intent, and I found the agent roles performed probably better than any other multi-agent harness, skill framework, orchestrator, or whatever other wrapper for the same basic proposition is being peddled around right now. That's not gut feel. I maintain a list of tools in this space (GUI-based agent orchestration) that— excluding the really held-together-by-shoestring, vibe-coded efforts— sits at 70+ examples at the moment.

For reasons of I'm Not Wealthy, I have recently upgraded to the Codex Pro plan, which means I'm not switching between Sonnet/Opus and whatever Chinese model I decide to hammer for cost:rates balance the moment Sonnet/Opus runs out or goes down. What that means is I'm back on the model I was using exclusively with Intent, but I'm not using it with Intent and I have some thoughts.

1) Intent's agents are reliable and persistent. Give them the task and go do something. It's fine. If it's a big task, they will persevere. If you use Antigravity, you will absolutely, without a shadow of a doubt, go through the inconvenience of setting up the least intuitive yolo mode in any piece of software right now, because you will be assaulted with permission prompts every 27 seconds if you do not. Intent, once you set your desired permissions, can just get on with what it needs to. If I approved a big enough plan, and was explicit about how many waves to run through, it honestly felt more like using Hermes to me in some ways than most coding agents (without setting up a Ralph Loop or similar).

2) When given UI-agnostic prompting, each of the 6 projects I ran in parallel, even on fundamentally different frameworks, delivered a consistently styled frontend, and it was ugly as all hell; its layout and user-facing content were not made for human beings. That's a prompting issue, obviously, but something to be aware of, especially since I don't consider this a model issue (other harnesses have been knocking UI out of the park for me with GPT-5.4). I'd imagine Opus would probably do a lot better, but I'd be tempted to run the same prompts in Intent/CC/OC/whatever to check this. The layouts were so bad they honestly created a lot of extra work for me.

3) There are whispers on the wind/subreddits and a growing body of literature positing that giving agents human roles, layering that kind of language into skill.md files or your regular prompts, and generally anthropomorphising agents have a detrimental effect on their effectiveness in a way that wasn't previously the case. The models are now pretty damn good and don't need that stuff.

I've seen the internal prompts from the Claude Code leak (Anthropic is life-coaching their own models), so what do I know, but, dear God, Intent was slow. Not in a tok/s sense. The agents are fully available to monitor and interrogate, and Intent has one of the best experiences for keeping this as [in]visible as is right for you, thanks to a great interface. But beneath the constant array of spinners and streaming text showing all The Activity, going through the app's internal chain of incredibly well-constructed agent governance is like amping up Qwen's thinking mode to 11 (only a slight exaggeration: I've received a 6-minute thinking process from Qwen-3.5 before it delivered an answer to the prompt 'Hi', on a model I run at over 100 tok/s).

I'm currently getting much more streamlined execution from Codex with no agent frameworks, no Oh-My-Anythings, Superpowers or personal stable of agents. This is what makes me much more on the fence about recommending Intent to essentially everyone, as I was previously. However, I would hazard that (and this sits nicely with where the product is likely being aimed) this virtuous agentic cycle and internal QA-ing before reaching the human in the loop will sit nicely for enterprise customers. It feels like there is more demonstrable diligence happening in front of your eyes. If your employer is running Intent, you also aren't worrying about the cost of that diligence, since running multiple agents is expensive enough for private individuals without also worrying if they're being too 'conscientious' about their work.

I know Augment is using Opus 4.7 as the default model now, so this isn't my view on how Intent guides a particular model. It's a warning that regular users might want to consider whether multi-agent, parallel workflows are actually the right move for them, regardless of cost.

4) Yes, I'm still bulleting here. The prioritisation and delegation of agents across different tasks is superb. Every tool like this is leveraging worktrees now, but Intent is the only one where I never had to go in and examine merge conflicts, feature collisions, and the like.
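For readers unfamiliar with the worktree pattern mentioned here: the usual trick is to give each agent its own checkout on its own branch, so parallel edits never collide in a shared working directory, and the coordinator merges branches back afterwards. A minimal sketch using plain git — the helper and its naming scheme are hypothetical, not Intent's code:

```python
import pathlib
import subprocess

def make_agent_worktree(repo: str, branch: str) -> pathlib.Path:
    """Create an isolated checkout on a fresh branch for one agent.

    Hypothetical helper: the worktree is placed next to the main repo,
    so each agent edits its own directory without touching the others.
    """
    path = pathlib.Path(repo).parent / f"agent-{branch}"
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, str(path)],
        cwd=repo, check=True, capture_output=True,
    )
    return path
```

The merge-conflict and feature-collision pain the reviewer describes happens after this step, when the per-agent branches are reconciled — which is the part Intent apparently automates well.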

5) There are many nice touches worth exploring, so I'd encourage you to give Intent a try, even if you think running a bunch of agents isn't for you. You really don't have to think about it in Intent. The living spec is so nicely done. If you've experimented with a bunch of context/memory systems like I have, this is the most sophisticated version of the simplest delivery for this challenge (basically a .md file updating itself as it goes along), thanks to its consistency and UI.

Wow, I just came here to write "Intent is really good". What happened?

I've been using Intent since the launch and it's fantastic at large-scale objectives. I still use Augment's native VSCode plugin for odds and ends, but if I have a big task that requires changes across dozens of files and context from multiple repos, Intent is my weapon of choice. Augment's team is insanely responsive if there's ever an issue, and you'll see updates pop up hours after you bring up any concerns. I've been with Augment from the start and don't foresee anything surpassing its capabilities anytime soon.

"Team of agents" is the interesting part here — what does coordination actually look like between them? Like if one agent writes the implementation and another is verifying it, what happens when the verifier catches something that requires a non-trivial architectural change? Does it loop back automatically or does that surface to the human?

@jaysym I see that it's free if we use our Claude-code/codex. How's the pricing decided if we enable the context engine for non-Auggie agents?

This looks very promising! Unfortunately, I can't test it on Windows yet.

I've been working with Augment in a WebStorm environment for over a year and I'm very happy with it.

However, I have two concerns regarding this next step:

a) How high will the token consumption be? I'm already using up my developer token allowance manually quite a bit. I usually have to top it up several times a month. If I imagine multiple agents working in parallel, orchestrated by even more agents, my token pool will be empty in just a few hours...?

b) I already have to closely monitor/review the activity of my one integrated agent and guide it in the right direction. Here, too, I see the risk that my incomplete/fluid spec will lead to absurdly high token consumption.

So: I think the idea is great, and I also think it will work very well.

But: Is it still affordable?

Congrats on the launch, looks great! As an Augment Code user spending most of the time in Auggie CLI, just wanted to check is there a timeline when this would be available to Linux users, or are there any plans for it?

Any OSS repo of work done by Intent? Or any PRs on existing OSS repos we can refer to? What kind of token usage can we expect compared to a similar setup in Cursor/CC, or compared to a human orchestrator?

Congrats on the launch! How feasible is it to adjust the number and type of agents you want for a given project, or is everything primarily decided for you?

Wrote up a post on how our teams collaborate within Intent. We've been able to effectively eliminate the designer/developer handoff. More details on the process, screenshots, etc: https://lukew.com/ff/entry.asp?2148

I like this. It seems very interesting. CLI code understanding and agent-driven development make a lot of sense. I'm just wondering, does it do any browser work? That's where most of my time currently gets sucked up: going back to the browser to check whether everything has been implemented okay. I think that would be a great problem to tackle anyway. Love the product, and we'll give it a try. Best of luck.

Is there some form of hierarchy between the agents, like in a real work scenario? Curious how they weigh each other's opinions while working towards a shared goal.

So does this work with multiple repos? Can it handle old legacy code, or does it need a monorepo to work well?

Really interesting direction. Moving from copilots to coordinated agents working from a spec feels like a big shift. How do you see teams defining good specs so the output stays reliable?

Congrats on the launch! Wondering what its integration capabilities are with common SDLC software, because building in isolation is great until you need to do some real work.

@byalexai the parallel execution model is interesting — when the coordinator breaks a spec into tasks, how does it handle requirements that turn out to be underspecified only after an agent starts working? Revert and re-plan, or does it try to resolve in-context?

I've been using Augment Code on a large SaaS application with a strict domain-driven architecture, multi-tenancy, and dozens of interconnected domains. Most AI coding tools struggle with this kind of complexity. Intent makes it simple to launch coordinated agents to develop new features in parallel!

Isolation and parallelism sound great until you hit merge conflicts and cross-service coupling. How does Intent structure workspaces/branches (e.g., via worktrees), handle dependency ordering between agent tasks, and reconcile changes into a clean PR without a human acting as the traffic cop?

the isolated workspace approach is smart. removes the "works on my machine" problem entirely when agents are doing the building. how do you handle dependencies that need specific system configs or external APIs during the build process?

About Intent on Product Hunt

Describe a feature and AI agents build, verify, and ship it

Intent launched on Product Hunt on April 15th, 2026 and earned 355 upvotes and 43 comments, earning #3 Product of the Day. Intent is a developer workspace built for agent-driven development. Define a feature as a spec, and a team of agents coordinates the work (from implementation to verification) inside an isolated workspace with built-in code, terminal, and git.

Intent was featured in Productivity (649.7k followers), Developer Tools (511k followers) and Artificial Intelligence (466.2k followers) on Product Hunt. Together, these topics include over 278.8k products, making this a competitive space to launch in.

Who hunted Intent?

Intent was hunted by Aleksandar Blazhev. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.

Reviews

Intent has received 10 reviews on Product Hunt with an average rating of 4.60/5. Read all reviews on Product Hunt.

Want to see how Intent stacked up against nearby launches in real time? Check out the live launch dashboard for upvote speed charts, proximity comparisons, and more analytics.