More agents, less progress
In the previous lesson you learned the structural patterns through which agents collaborate — pipelines, fan-out, consensus, hierarchies. Each pattern solves a different coordination problem. But every one of them introduces a problem of its own: the coordination itself costs something.
That cost is not a side effect. It is not an inefficiency to be optimized away. It is a fundamental property of any system in which multiple autonomous entities must synchronize their behavior. The moment you move from one agent acting alone to two agents working together, you pay a tax. The tax is measured in time, attention, communication bandwidth, and decision latency. As the number of agents grows, the tax grows faster than the productive capacity the new agents bring.
This lesson is about that tax — what it is, why it scales the way it does, and how to keep it proportional to the benefit it enables.
The Ringelmann effect: the first measurement of coordination loss
The earliest empirical evidence that coordination costs erode group performance came from an unlikely source: a French agricultural engineer pulling ropes.
In experiments conducted in the 1880s and published only in 1913, Max Ringelmann measured the force exerted by individuals and groups pulling on a rope attached to a pressure gauge. His findings were striking. If one person could pull 100 units of force, two people together pulled only 186 — not 200. Three pulled 255, not 300. Eight people pulled 392, barely half of their theoretical combined capacity of 800 (Kravitz & Martin, 1986).
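Restated as per-person efficiency, the drop is easy to see. A quick sketch, using the figures reported above:

```python
# Ringelmann's reported totals by group size (units of force).
pulls = {1: 100, 2: 186, 3: 255, 8: 392}

for n, total in pulls.items():
    per_person = total / n
    efficiency = total / (n * 100)  # fraction of theoretical combined capacity
    print(f"{n} pullers: {total} units, {per_person:.0f} per person, "
          f"{efficiency:.0%} of capacity")
```

Per-person output falls from 100 units alone to 93 in a pair, 85 in a trio, and 49 in a group of eight — each added puller makes every puller less effective.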
The loss had two components. The first was motivational: individuals exerted less effort when they knew others were contributing — a phenomenon later named social loafing. The second was mechanical: the larger the group, the harder it became to synchronize their pulls into a single coordinated action. Even when motivation was controlled for — by placing participants in "pseudo-groups" where confederates only pretended to pull — performance still dropped. Coordination loss was real and independent of effort (Ingham et al., 1974).
This is not about ropes. It is about the structure of joint effort. Any time multiple agents must synchronize to produce a combined output, the act of synchronizing consumes resources that would otherwise go to production. The more agents, the more synchronization required. The curve is not linear. It is worse than linear.
Brooks's Law: why adding people makes things slower
Fred Brooks learned Ringelmann's lesson the expensive way — by managing IBM's OS/360, one of the largest software projects of the 1960s. In The Mythical Man-Month (1975), he distilled the experience into what became known as Brooks's Law: "Adding manpower to a late software project makes it later."
The arithmetic is unforgiving. The number of bilateral communication channels in a group of n people is n(n - 1) / 2. Three people require three channels. Seven people require twenty-one. Fifty people require 1,225. Every channel represents a potential misunderstanding, a synchronization point, a place where information must be transmitted, received, interpreted, and confirmed.
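The channel count from the formula above, as a one-line function:

```python
def channels(n: int) -> int:
    """Bilateral communication channels among n people: n(n - 1) / 2."""
    return n * (n - 1) // 2

for n in (3, 7, 50):
    print(f"{n} people -> {channels(n)} channels")
```

The growth is quadratic: doubling the team roughly quadruples the number of channels that can carry a misunderstanding.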
But communication channels are only the visible cost. Brooks identified two additional costs that compound the problem. First, new members must be educated about the work that preceded them — which means the people who are already productive must stop producing to teach. Second, the task itself may not be perfectly partitionable. Some work is sequential: it cannot be parallelized no matter how many people you throw at it. Nine women cannot make a baby in one month.
Brooks's insight was that coordination overhead is not proportional to team size. It is super-linear. There exists a point — often reached sooner than managers expect — where adding another person generates more coordination cost than productive output. Beyond that point, every additional agent makes the system slower, not faster.
Coase's transaction costs: why organizations exist at all
Ronald Coase asked a question in 1937 that most economists had never thought to ask: if markets are efficient, why do firms exist?
His answer, published in "The Nature of the Firm," was that markets have hidden costs. Every market transaction requires finding a counterparty, negotiating terms, drafting contracts, monitoring compliance, and enforcing agreements. These transaction costs are the coordination overhead of doing business through the market. Firms exist because, up to a certain size, it is cheaper to coordinate work inside an organization — through hierarchy, employment relationships, and shared context — than to coordinate the same work through arm's-length market transactions (Coase, 1937).
But Coase's insight cuts both ways. As a firm grows, its internal coordination costs rise. Bureaucracy thickens. Communication slows. Decisions pass through more layers. At some point, the internal coordination overhead of the firm exceeds the transaction cost of using the market — and the firm stops growing. The boundary of the organization is the point where internal coordination costs and external transaction costs reach equilibrium.
This is the same principle operating at a different scale. Whether you are coordinating three people pulling a rope, fifty engineers building software, or ten thousand employees in a corporation, the underlying dynamic is identical: coordination is not free, it scales non-linearly with the number of participants, and there exists an optimal boundary beyond which adding more coordinated entities makes the system worse.
For your own cognitive infrastructure, the implication is direct. Every tool, process, habit, and information source you integrate into your personal system adds coordination overhead. The question is never "Is this tool useful?" The question is "Is this tool's value greater than the coordination cost of integrating and maintaining it?"
The cognitive cost: attention as the coordination bottleneck
Coordination overhead is not only an organizational phenomenon. It operates inside your own mind.
Research on communication overhead in team cognition demonstrates that coordination demands compete directly with the cognitive resources available for the actual task. In high-tempo, time-pressured environments — emergency response, surgical teams, military operations — the coordination burden can overload already strained responders, degrading the very performance that coordination was supposed to improve (Butchibabu et al., 2016).
The mechanism is attention. Every coordination act — checking a status update, aligning on terminology, confirming a handoff, resolving an ambiguity — draws from the same limited pool of attention that the productive work requires. When coordination consumes too much of that pool, production suffers. The team is busy being coordinated. It is not busy being productive.
Implicit coordination — the ability of well-practiced teams to synchronize without explicit communication — is the antidote. Teams that share mental models, common vocabulary, and predictable routines can coordinate with dramatically lower overhead. This is why experienced surgical teams outperform newly assembled ones even when individual skill levels are equivalent: the coordination cost drops as shared context rises (Reagans et al., 2005).
The lesson for personal systems is the same. When you establish routines, templates, and conventions, you are reducing your own internal coordination overhead. The effort you spend each morning deciding where to start, which tool to use, and what format to follow is coordination cost. Eliminate the decisions through structure, and you reclaim the attention for production.
The AI parallel: multi-agent systems and the coordination tax
If the Ringelmann effect and Brooks's Law describe coordination overhead in human systems, modern AI research is rediscovering the same constraints in silicon.
Multi-agent AI architectures — systems where multiple language model agents collaborate on a task — have exploded in popularity. The promise is compelling: decompose a complex problem, assign sub-problems to specialized agents, and recombine the results. But the empirical results reveal a familiar pattern.
Research from Anthropic and Google has shown that multi-agent systems can outperform single agents on parallelizable tasks — financial reasoning, document analysis, code generation across independent modules — by margins of 80-90%. But the cost is severe. Multi-agent systems consume up to 15 times more tokens than single agents. Token usage alone explains roughly 80% of the performance differences across architectures (Google Research, 2026).
More damning: on sequential reasoning tasks — problems that require maintaining a chain of logic across multiple steps — every multi-agent variant tested by researchers degraded performance by 39-70% compared to a single agent. The coordination overhead of passing context between agents fragmented the reasoning process, leaving insufficient "cognitive budget" for the actual problem. The agents spent so many tokens coordinating that they had fewer tokens left for thinking.
This is Brooks's Law expressed in transformer attention. Adding more agents to a sequential reasoning problem is adding more engineers to a late software project. The communication channels multiply. The shared context degrades. The coordination tax exceeds the productive capacity.
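The dynamic can be sketched with a toy model. The constants below are assumptions chosen for illustration, not figures from the cited research: give each agent a fixed token budget, charge a fixed coordination cost per bilateral channel, and watch the net "thinking" budget peak and then decline as agents are added.

```python
def net_thinking_tokens(n_agents: int,
                        budget_per_agent: int = 10_000,
                        tokens_per_channel: int = 1_500) -> int:
    """Tokens left for actual reasoning after coordination overhead.

    Toy model: total capacity grows linearly with agents, but the
    n(n - 1) / 2 bilateral channels each consume a fixed token cost.
    """
    n_channels = n_agents * (n_agents - 1) // 2
    return max(n_agents * budget_per_agent - n_channels * tokens_per_channel, 0)

for n in (1, 2, 4, 7, 8, 12):
    print(f"{n:>2} agents -> {net_thinking_tokens(n):>6} tokens for thinking")
```

Under these made-up parameters the net budget peaks at seven agents and falls afterward. The exact peak is an artifact of the invented constants; the shape — linear capacity eaten by quadratic overhead — is the point.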
The practical implication for anyone designing agentic systems: never add agents to feel productive. Add agents only when the task is genuinely parallelizable and the coordination cost is measurably less than the throughput gain. A single agent doing focused work will outperform a committee of agents doing fragmented work on any task that requires sustained sequential reasoning.
The coordination budget: a protocol for proportionality
The primitive of this lesson is proportionality: keep coordination cost proportional to the benefit. Here is how to operationalize that.
Step 1: Make coordination visible. You cannot manage what you do not measure. For any team, project, or personal system, enumerate every coordination activity: meetings, standups, status emails, shared document reviews, Slack threads, approval chains, handoff protocols. Assign a time cost to each.
Step 2: Calculate the coordination ratio. Divide total coordination hours by total available hours. If your team has 200 person-hours per week and 80 of them go to meetings, status updates, and synchronization — that is a 40% coordination ratio. You are spending nearly half your capacity on overhead.
Step 3: Set a coordination budget. Define the maximum percentage of time you are willing to spend on coordination. For most knowledge work, 15-25% is a reasonable range. Below 15%, you risk misalignment. Above 30%, you are almost certainly over-coordinating.
Step 4: Enforce the budget through elimination. When a new coordination mechanism is proposed — a new meeting, a new reporting process, a new communication channel — it must fit within the budget. If it does not, something existing must be removed. This creates the forcing function that prevents coordination from expanding unchecked.
Step 5: Prefer implicit over explicit coordination. Invest in shared context, conventions, templates, and routines that allow agents to synchronize without active communication. Every coordination act you can make implicit is overhead removed from the budget permanently.
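Steps 1 through 4 can be sketched as a small script. The activities and hours below are hypothetical placeholders; the 200-hour team and the budget threshold echo the examples in the steps above.

```python
BUDGET = 0.25          # step 3: max fraction of capacity allowed for coordination
available_hours = 200  # person-hours per week for the whole team

# Step 1: enumerate coordination activities (hypothetical entries).
coordination_hours = {
    "standups": 10,
    "status emails": 8,
    "planning meetings": 25,
    "approval chains": 12,
    "shared doc reviews": 25,
}

# Step 2: the coordination ratio.
total = sum(coordination_hours.values())
ratio = total / available_hours
print(f"coordination ratio: {ratio:.0%}")

# Step 4: enforce by elimination — nothing new fits until something is removed.
if ratio > BUDGET:
    overage = total - BUDGET * available_hours
    print(f"over budget by {overage:.0f} person-hours; cut before adding anything")
```

With these placeholder numbers the team is at a 40% ratio, 30 person-hours over a 25% budget — any proposed new meeting must displace an existing coordination cost of at least its own size.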
Where this leads
You now understand that coordination is not free and never will be. Every collaboration pattern from the previous lesson — pipeline, fan-out, consensus, hierarchy — carries a coordination tax. The pattern is not good or bad in isolation. It is good when its coordination cost is proportional to the benefit it delivers, and bad when the overhead exceeds the value.
This matters because in the next lesson, you will encounter emergent behavior — outcomes that arise from agent interaction but that no individual agent intended or predicted. Emergent behavior can be beneficial or catastrophic, and your ability to recognize and shape it depends on understanding the coordination dynamics that produce it. When coordination overhead is invisible and unmanaged, emergent behavior is also invisible and unmanaged. When you have a clear model of how your agents interact and what that interaction costs, you can begin to anticipate — and design for — what emerges from the interaction.
The principle is simple. Coordination overhead is the tax you pay for collaboration. Measure it. Budget it. Keep it proportional. And never mistake the tax for the work.
Sources:
- Brooks, F. P. (1975). The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.
- Coase, R. H. (1937). "The Nature of the Firm." Economica, 4(16), 386-405.
- Ringelmann, M. (1913). "Recherches sur les moteurs animés: Travail de l'homme." Annales de l'Institut National Agronomique, 12, 1-40.
- Kravitz, D. A., & Martin, B. (1986). "Ringelmann Rediscovered: The Original Article." Journal of Personality and Social Psychology, 50(5), 936-941.
- Ingham, A. G., Levinger, G., Graves, J., & Peckham, V. (1974). "The Ringelmann Effect: Studies of Group Size and Group Performance." Journal of Experimental Social Psychology, 10(4), 371-384.
- Google Research. (2026). "Towards a Science of Scaling Agent Systems: When and Why Agent Systems Work." Google Research Blog.
- Reagans, R., Argote, L., & Brooks, D. (2005). "Individual Experience and Experience Working Together: Predicting Learning Rates from Knowing Who Knows What and Knowing How to Work Together." Management Science, 51(6), 869-881.
- Butchibabu, A., Sparano-Huiban, C., Sonenberg, L., & Shah, J. (2016). "Implicit Coordination Strategies for Effective Team Communication." Human Factors, 58(4), 595-610.