As someone who's built a project in this space, this is incredibly unreliable. Subagents don't get a full system prompt (including stuff like CLAUDE.md directions) so they are flying very blind in your projects, and as such will tend to get derailed by their lack of knowledge of a project and veer into mock solutions and "let me just make a simpler solution that demonstrates X."
I advise people to only use subagents for stuff that is very compartmentalized because they're hard to monitor and prone to failure with complex codebases where agents live and die by project knowledge curated in files like CLAUDE.md. If your main Claude instance doesn't give a good handoff to a subagent, or a subagent doesn't give a good handback to the main Claude, shit will go sideways fast.
Also, don't lean on agents for refactoring. Their ability to refactor a codebase goes in the toilet pretty quickly.
> Their ability to refactor a codebase goes in the toilet pretty quickly.
Very much this. I tried to get Claude to move some code from one file to another. Some of the code went missing. Some of it was modified along the way.
Humans have strategies for refactoring, e.g. "I'm going to start from the top of the file and Cut code that needs to be moved and Paste it in the new location". LLMs don't have a clipboard (yet!) so they can't do this.
Claude can only reliably do this refactoring if it can keep the start and end files in context. This was a large file, so it got lost. Even then it needs direct supervision.
> Humans have strategies for refactoring, e.g. "I'm going to start from the top of the file and Cut code that needs to be moved and Paste it in the new location". LLMs don't have a clipboard (yet!) so they can't do this.
For my own agent I have `move_file` and `copy_file` tools with two args each, which at least GPT-OSS seems able to use whenever it suits, like for moving stuff around. I've seen it use them as part of refactoring as well: moving a file to one location, copying that to another, then trimming both of them, but with different trims. Seems to have worked OK.
If the agent has access to `exec_shell` or similar, I'm sure you could add `Use mv and cp if you need to move or copy files` to the system prompt to get it to use that instead; it would probably work in Claude Code as well.
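For illustration, the tools are roughly this simple. The JSON-schema-style declarations and the `handle_tool_call` dispatcher below are generic assumptions for a hypothetical tool-calling loop, not anything GPT-OSS- or Claude-Code-specific:

```python
import shutil
from pathlib import Path

# Illustrative tool schemas in the common JSON-schema style used by
# tool-calling APIs; names mirror the comment above, details are assumed.
TOOLS = [
    {
        "name": "move_file",
        "description": "Move a file from source_path to dest_path.",
        "parameters": {
            "type": "object",
            "properties": {
                "source_path": {"type": "string"},
                "dest_path": {"type": "string"},
            },
            "required": ["source_path", "dest_path"],
        },
    },
    {
        "name": "copy_file",
        "description": "Copy a file from source_path to dest_path.",
        "parameters": {
            "type": "object",
            "properties": {
                "source_path": {"type": "string"},
                "dest_path": {"type": "string"},
            },
            "required": ["source_path", "dest_path"],
        },
    },
]

def handle_tool_call(name: str, args: dict) -> str:
    """Execute a tool call from the model and return a short result string."""
    src, dst = Path(args["source_path"]), Path(args["dest_path"])
    dst.parent.mkdir(parents=True, exist_ok=True)
    if name == "move_file":
        shutil.move(str(src), str(dst))
    elif name == "copy_file":
        shutil.copy2(src, dst)
    else:
        return f"unknown tool: {name}"
    return f"{name} ok: {src} -> {dst}"
```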
I don't use subagents to do things; they're best for analysing things.
Like "evaluate the test coverage" or "check if the project follows the style guide".
This way the "main" context only gets the report and doesn't waste space on massive test outputs or reading multiple files.
This is only a problem if an agent is made in a lazy way (all of them).
Chat completion sends the full prompt history on every call.
I am working on my own coding agent and seeing massive improvements by rewriting history using either a smaller model or a freestanding call to the main one.
It really mitigates context poisoning.
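A minimal sketch of what I mean, using an OpenAI-style chat completions client purely for illustration; the turn threshold and summarization prompt are arbitrary choices:

```python
from openai import OpenAI

client = OpenAI()

def rewrite_history(messages: list[dict], keep_recent: int = 6,
                    summarizer_model: str = "gpt-4o-mini") -> list[dict]:
    """Replace older turns with a compact summary from a smaller model,
    keeping the system prompt and the most recent turns verbatim."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if len(rest) <= keep_recent:
        return messages  # nothing worth compacting yet

    old, recent = rest[:-keep_recent], rest[-keep_recent:]
    transcript = "\n".join(f'{m["role"]}: {m.get("content") or ""}' for m in old)
    summary = client.chat.completions.create(
        model=summarizer_model,
        messages=[{
            "role": "user",
            "content": "Summarize this coding-session history. Keep file paths, "
                       "decisions, and open TODOs; drop dead ends and tool noise:\n\n"
                       + transcript,
        }],
    ).choices[0].message.content

    # The rewritten history: system prompt + one summary message + recent turns.
    return system + [{"role": "user",
                      "content": f"Summary of earlier session:\n{summary}"}] + recent
```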
I do something similar, and I get the best results with no history at all, setting the context fresh on every invocation.
There's a large body of research on context pruning/rewriting (I know because I'm knee deep in benchmarks in release prep for my context compiler), definitely don't ad hoc this.
Everyone complains that when you compact the context, Claude tends to get stupid, which as far as I understand it means summarizing the context with a smaller model.
Am I misunderstanding you? The practical experience of most people seems to contradict your results.
One key insight I have from having worked on this since the early stages of LLMs (before ChatGPT came out) is that the current crop of LLM clients or "agentic clients" don't log/write/keep track of success over time. It's more of a "shoot and forget" environment right now, and that's why a lot of people are getting vastly different results. Hell, even week to week on the same tasks you get different results (see the recent "Claude getting dumber" drama).
Once we start to see that kind of self-feedback going into the next iterations (with possible training runs between sessions, a "dreaming" stage from OG RL, distilling a session, grabbing key insights, storing them, surfacing them at the next inference, etc.), then we'll see true progress in this space.
The problem is that a lot of people work on these things in silos. The industry is much more geared towards quick returns now, having to show something immediately, rather than building strong foundations based on real data. Kind of an analogy to early Linux dev. We need our own Linus, it would seem :)
I've experimented with feature chats, so I start a new chat for every change, just like a feature branch. At the end of a chat I'll have it summarize the feature chat and save it as a markdown document in the project, so the knowledge is still available for the next chats. Seems to work well.
You can also ask the LLM at the end of a feature chat to prepare a prompt for starting the next one, so it can determine what knowledge is important to carry forward.
Summarizing a chat also helps get rid of wrong info, as you'll often trial-and-error your way towards the right solution. You don't want those incorrect approaches to leak into the context of the next feature chat; maybe just add the "don't dos" to a guidelines-and-rules document so it avoids them in the future.
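As an illustration, the end-of-chat step can be very small. The OpenAI-style client, the model name, and the `docs/feature-chats/` location below are all just assumptions for the sketch:

```python
from datetime import date
from pathlib import Path
from openai import OpenAI

client = OpenAI()

SUMMARY_PROMPT = (
    "Summarize this feature chat for the next session: what was built, key "
    "decisions and file locations, and a short 'don't do' list of approaches "
    "that failed. Omit the dead-end details themselves."
)

def archive_feature_chat(messages: list[dict], feature_name: str) -> Path:
    """Write a markdown summary of the finished feature chat into the repo."""
    transcript = "\n".join(f'{m["role"]}: {m.get("content") or ""}' for m in messages)
    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{SUMMARY_PROMPT}\n\n{transcript}"}],
    ).choices[0].message.content

    out = Path("docs/feature-chats") / f"{date.today()}-{feature_name}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(f"# {feature_name}\n\n{summary}\n")
    return out
```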
> don't log/write/keep track of success over time.
How do you define success of a model's run?
Lots of ways. You could do binary thumbs up/down. You could do a feedback session. You could look at signals like "acceptance rate" (for a PR?) or "how many feedback messages did the user send in this session", and so on.
My point was more on tracking these signals over time. And using them to improve the client, not just the model (most model providers probably track this already).
Totally agreed. I tried agents for a lot of stuff (I started by creating a team of agents: architect, frontend coder, backend coder, and QA). Spent around 50 USD on a failed project; the context got contaminated and the project eventually had to be rewritten.
Then I moved some parts into rules and some parts into slash commands, and I got much better results.
Subagents are like freelance contractors (I know, I have been one very recently): good when they need little handoff (not possible in real time) and little overseeing, and when their results are advice rather than an action. They don't know what you are doing, and they don't care what you do with the info they produce. They just do the work for you while you do something else, or you wait for them to produce independent results. They come and go with little knowledge of existing functionality, but they're good on their own.
Here are 3 agents I still keep and one I am working on.
1: Scaffolding: I create (and sometimes destroy) a lot of new projects, so I use a scaffolding agent when I am trying something new. It starts with a fresh one-line instruction about what to scaffold (e.g. a new Docker container with Hono and a Postgres connection, a new Cloudflare Worker that connects to R2, D1, and AI Gateway, or an AWS serverless API Gateway with SQS that does this, that, and that) and where to deploy. At the end of the day it sets up the project structure, creates a GitHub repo, and commits it for me. I take it forward from there.
2: Triage: When I face an issue that is not obvious from reading code alone, I give the agent the location and some logs, and it will use whatever is available (including the DB data) to make a best guess at why the issue happens. I've often found these work best when they are not biased by recent work.
3: Pre-release QA check: This QA agent tests the entire system (essentially running every integration and end-to-end test suite to make sure the product doesn't break anything existing). I am now adding functionality to let it see the original business requirement and check whether the code satisfies it. I want this agent to be my advisor, helping me decide whether something goes into the release pipeline or not.
4: Web search (experimental): Some searches are too costly in tokens for the existing context, and we only need the end result, not what it searched for or the 10 pages it found...
I'm commenting while agents run in a project trying to achieve something similar to this. I feel like "we all" are trying to do something similar, in different ways, in a fast-moving space (I use Claude Code and didn't even know subagents were a thing).
My gut feeling from past experience is that we have git, but not git-flow yet: a standardized approach that is simple to learn and implement across teams.
Once (if?) someone just "gets it right" and has a reliable way to break this down to the point where engineers can efficiently review specs and code against expectations, that will be the moment when being a coder takes on a different meaning at large.
So far, all the projects I've seen end up building "frameworks" that match each person's internal workflow. That's great and can be very effective for a single person (it is for me), but unless it can be shared across teams, throughput will still be limited (compared with that of a team of engineers with the same tools).
Also, refactoring a project to fully leverage AI workflows might be inefficient compared to rebuilding from scratch, since building docs for context in tandem with development cannot be backported: that knowledge is likely already lost in time and accrued as technical debt.
How do you not get lost mentally in what exactly is happening at each point in time? Just trusting the system and reviewing the final output? I feel like my cognitive constraints become the limits of this parallelized system. With a single workstream I pollute context, but I feel way more secure somehow.
I often see people making these subagents modelled on roles like product manager, backend developer, etc.
I spent a few hours trying stuff like this, and the results were pretty bad compared to just using CC with no agent-specific instructions.
Maybe I needed to push through and find a combination that works, but I don't find this article convincing, as the author basically says "it works" without showing examples or comparing the same project done with and without subagents.
Anyone got anything more convincing to suggest it's worth me putting more time into building out flows like this instead of just using a generic agent for everything?
No, this has been my experience as well.
I see lots of people saying you should be doing it, but not actually doing it themselves.
Or at least, not showing full examples of exactly how to handle it when it starts to fail or scale, because obviously when you don't have anything, having a bunch of agents doing any random shit works fine.
Frustrating.
Right, don't make subagents for the different roles; make them to manage context for token-heavy tasks.
A backend developer subagent is going to do the job ok, but then the supervisor agent will be missing useful context about what’s been done and will go off the rails.
The ideal sub agent is one that can take a simple question, use up massive amounts of tokens answering it, and then return a simple answer, dropping all those intermediate tokens as unnecessary.
Documentation search is a good one ("does X library have a Y function?"): the subagent can search the web, read doc MCPs, and then return a simple answer without the supervisor being polluted with all that context.
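For anyone who wants to try this: in Claude Code a subagent like that lives as a markdown file with YAML frontmatter under `.claude/agents/`. A rough sketch of generating one follows; the frontmatter fields and tool names are from memory, so check the current docs before relying on them:

```python
from pathlib import Path
from textwrap import dedent

# Rough sketch: Claude Code subagents are markdown files with YAML frontmatter
# under .claude/agents/; field and tool names here are best-effort assumptions,
# verify against the current documentation.
agent = dedent("""\
    ---
    name: doc-search
    description: Answers narrow questions about third-party library APIs. Use
      when the main agent needs to know whether a library has a given function.
    tools: WebSearch, WebFetch, Read
    ---
    You answer narrow documentation questions. Search the web or local docs,
    then reply with a short, direct answer and a link. Do not return raw page
    contents or long excerpts; the caller only needs the conclusion.
    """)

path = Path(".claude/agents/doc-search.md")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(agent)
```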
This is exactly right.
This has been my experience so far as well. It seems like just basic prompting gets me much further than all these complicated extras.
At some point you gotta stop and wonder if you're doing way too much work managing Claude rather than working on your business problem.
I think the trick is the synthesize step, which brings the agents' findings together. That's where I've had the most success, at least.
One can hardly control one coding agent for correctness, let alone multiple ones... It's cool, but not very reliable or useful.
That sounds crazy to me, Claude Code has so many limitations.
Last week I asked Claude Code to set up a Next.js project with internationalization. It tried to install a third-party library instead of using the internationalization method recommended for the latest version of Next.js (using Next's middleware) and could not produce a functional version of the boilerplate site.
There are some specific cases where agentic AI does help me but I can't picture an agent running unchecked effectively in its current state.
Slightly off topic, but I would really like an agentic workflow that is embedded in my IDE as well as in my code host provider, like GitHub, for pull requests.
Ideally I would like to spin off multiple agents to solve multiple bugs or features. The agents have to use the CI in GitHub to get feedback on tests. And I would like to view it in the IDE because I like the ability to understand code by jumping through definitions.
Support for multiple branches at once: I should be able to spin off multiple agents that work on multiple branches simultaneously.
Why not just use only async agents? You can fire off many tasks and check PRs locally when they complete the work. (I also work on devfleet.ai to improve this experience; any feedback is appreciated.)
This already exists. Look at Cursor with Linear: you can just reply with @cursor and some instructions, and it starts working in a VM. You can watch it work on cursor.com/agents or in the Cursor editor. The result is a PR. GitHub also has Copilot being integrated into the GitHub UI, but it's not that great in my experience.
Would that be solved by having several clones of your repo, each with an IDE and a Claude working on each problem? Much like how multiple people work in parallel.
Yeah but it’s not ideal. I thought of this too.
Is it a good idea to generate more code faster to solve problems? Can I solve problems without generating code?
If code is a liability and the best part is no part, what about leveraging Markdown files only?
The last programs I created were just CLI agents with Markdown files and MCP servers (some code here, but very little).
The feedback loop is much faster, allowing me to understand what I want after experiencing it, and self-correction is super fast. Plus, you don't get lost in the implementation noise.
Code you didn't write is an even bigger liability, because if the AI gets off track and you can't guide it back, you may have to spend the time to learn its code and fix the bugs.
It's no different to inheriting a legacy application though. As well, from the perspective of a product owner, it's not a new risk.
Claude is a junior. The more you work with it, the more you get a feel for which tasks it will ace unsupervised (some subset of grunt work) and which tasks to not even bother using it for.
I don't trust Claude to write reams of code that I can't maintain, except when that code is embarrassingly testable, i.e. it has an external source of truth.
There is no generated code. It is just a user interacting with a CLI terminal (via a LibreChat frontend), guided by Markdown files, with access to MCPs.
Using LLMs to code poses a liability most people can't appreciate, and won't admit:
https://www.youtube.com/watch?v=wL22URoMZjo
Have a great day =3
Was going to ask how much all this cost, but this sort of answers it:
> "Managing Cost and Usage Limits: Chaining agents, especially in a loop, will increase your token usage significantly. This means you’ll hit the usage caps on plans like Claude Pro/Max much faster. You need to be cognizant of this and decide if the trade-off—dramatically increased output and velocity at the cost of higher usage—is worth it."
All of this stuff seems completely insane to me and something my coding agent should handle for me. And it probably will in a year.
I feel the same. We’re still in the very early days of AI agents. Honestly, just orchestrating CC subagents alone could already be a killer product.
Follow-up from my last post; lots of people were asking for more examples. I will be around if anybody has questions this morning.
Can it work without Linear, using md files?
I’ve got this down to a science.