You solved the resource problem. Now what?
In L-0512, you learned that multiple agents competing for the same scarce resource — your attention, a shared dataset, a single decision point — need explicit allocation rules. Resource contention is the first coordination failure you encounter when agents multiply. But resolving contention only prevents interference. It does not tell you how agents should actually work together.
Imagine you have five people in a room, and you have successfully prevented them from talking over each other. Congratulations — they are no longer fighting for the microphone. But they are also not accomplishing anything, because nobody has defined the structure of collaboration. Who speaks first? Who builds on whose output? Who works in parallel? Who synthesizes the results? These are not resource questions. They are pattern questions.
Agent collaboration patterns are the structural blueprints that define how multiple agents — human, cognitive, or artificial — coordinate their work. Get the pattern right, and coordination becomes a force multiplier. Get it wrong, and you spend more energy managing the collaboration than doing the work.
Thompson's taxonomy: the origin of coordination structure
The study of collaboration patterns did not begin with software engineering or artificial intelligence. It began with organizational theory.
In 1967, James D. Thompson published Organizations in Action, in which he identified three fundamental types of task interdependence that determine how work must be coordinated. His taxonomy remains the foundational framework for understanding why different collaboration patterns exist, because they arise from different dependency structures.
Pooled interdependence occurs when agents contribute independently to a shared outcome. Each agent's work stands on its own. A group of salespeople working different territories exemplifies pooled interdependence — each operates independently, and the organization benefits from the aggregate. The coordination mechanism is standardization: shared rules, formats, and quality standards that ensure independent outputs are compatible.
Sequential interdependence occurs when one agent's output becomes another agent's input. The dependency is directional — Agent B cannot begin until Agent A finishes. A manufacturing assembly line is the canonical example. The coordination mechanism is planning: schedules, deadlines, and handoff protocols that ensure the chain flows without gaps.
Reciprocal interdependence occurs when agents' outputs flow back and forth — Agent A produces input for Agent B, and Agent B's output feeds back to Agent A. Emergency room teams exemplify this: the surgeon's findings change the anesthesiologist's approach, which changes the surgical options. The coordination mechanism is mutual adjustment: real-time communication and continuous negotiation between agents.
Thompson's insight was not just descriptive. He argued that organizations should cluster reciprocally interdependent work most tightly, then sequentially interdependent work, and give the most autonomy to pooled work. The coordination cost increases as you move from pooled to sequential to reciprocal, and organizational structure should reflect that cost gradient. The same principle applies to your cognitive agents, your AI pipelines, and your team workflows.
Malone and Crowston: coordination as dependency management
In 1994, Thomas Malone and Kevin Crowston extended Thompson's work into a general coordination theory, published in ACM Computing Surveys. Their central claim was precise: coordination is the process of managing dependencies among activities. Not managing people. Not managing communication. Managing dependencies.
This reframing changes how you think about collaboration patterns. Each pattern is not an arbitrary arrangement of agents. It is a specific solution to a specific dependency structure. When you choose a pipeline, you are declaring that your tasks have sequential dependencies. When you choose fan-out parallelism, you are declaring that your tasks are independent. When you choose consensus, you are declaring that integration requires collective evaluation.
Malone and Crowston identified several generic dependency types — shared resources, producer-consumer relationships, simultaneity constraints, task-subtask decomposition — and showed that each dependency type has a characteristic set of coordination mechanisms. The implication is that you do not choose a collaboration pattern based on preference. You choose it based on analysis of the actual dependency structure in your work.
This is where most coordination failures originate. People apply patterns they are comfortable with rather than patterns that match their dependencies. A team that runs everything through sequential approval chains when most of the work is pooled-independent is paying sequential coordination costs for pooled work. A team that parallelizes reciprocally interdependent tasks will produce outputs that conflict, requiring expensive reconciliation.
The four canonical collaboration patterns
Four patterns recur across every domain where multiple agents coordinate — from cognitive science to software architecture to organizational design to AI systems. Each corresponds to a dependency structure, and each carries distinct coordination costs.
Pipeline (sequential chain). Agent A completes a task and passes the output to Agent B, who completes the next task and passes to Agent C. The dependency is strictly sequential. The pipeline pattern is optimal when each stage requires the full output of the previous stage and when stages cannot meaningfully begin with partial information. Editing a document after writing it, compiling code after authoring it, quality-checking a product after manufacturing it — all are pipeline patterns. The cost is latency: the total time equals the sum of all stage times, and any bottleneck in the chain delays everything downstream.
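The sequential dependency can be sketched in a few lines of Python. This is an illustrative sketch, not a prescribed implementation; the stage functions (`draft`, `edit`, `layout`) are hypothetical stand-ins for real work.

```python
from typing import Any, Callable

def run_pipeline(stages: list[Callable[[Any], Any]], initial: Any) -> Any:
    """Run stages in strict order: each stage consumes the previous stage's output."""
    result = initial
    for stage in stages:
        result = stage(result)  # stage B cannot begin until stage A finishes
    return result

# Hypothetical stages for a writing workflow: draft -> edit -> layout
draft  = lambda topic: f"draft about {topic}"
edit   = lambda text: text.replace("draft", "edited draft")
layout = lambda text: text.upper()

print(run_pipeline([draft, edit, layout], "coordination"))
# -> EDITED DRAFT ABOUT COORDINATION
```

Note that the loop itself encodes the latency cost: total time is the sum of the stage times, and a slow stage delays everything after it.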
Fan-out (parallel dispatch). A single agent or trigger distributes independent subtasks to multiple agents simultaneously. The dependency is pooled — each agent works on a self-contained piece. Research tasks where multiple people investigate different sources, A/B tests running simultaneously across different user segments, and map-reduce computations that split data across workers all use fan-out. The benefit is throughput: total time equals the longest single agent's time, not the sum. The cost is that the work must be genuinely independent — hidden dependencies between parallel branches create conflicts that are expensive to resolve.
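A minimal fan-out sketch, assuming the subtasks really are independent (the `sources` list and the summarizing worker are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(subtasks, worker):
    """Dispatch independent subtasks to parallel workers (pooled interdependence).
    Wall-clock time tracks the slowest single branch, not the sum of branches."""
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        return list(pool.map(worker, subtasks))  # results keep input order

# Hypothetical independent research branches
sources = ["Thompson 1967", "Malone & Crowston 1994", "Wegner 1985"]
notes = fan_out(sources, lambda s: f"summary of {s}")
```

If one branch secretly depended on another's result, this dispatch would silently produce conflicting outputs, which is exactly the hidden-dependency failure described above.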
Fan-in (aggregation). The complement of fan-out. Multiple agents' outputs converge on a single agent or process that synthesizes, merges, or selects among them. Fan-in always follows fan-out, and the aggregation step is where most of the coordination complexity lives. How do you merge three independently written sections into a coherent document? How do you reconcile conflicting research findings? How do you select the best among parallel solutions? The aggregation pattern requires explicit merge criteria — without them, fan-in becomes a bottleneck that erases the time savings from fan-out.
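The key design decision in fan-in is the merge criterion, so a sketch should make it an explicit parameter. The two criteria below (`concat` and `best`, selecting by length) are illustrative assumptions, not rules from coordination theory:

```python
def fan_in(branch_outputs, merge):
    """Aggregate parallel outputs under an explicit merge rule.
    Without a defined criterion, this step becomes the bottleneck."""
    return merge(branch_outputs)

# Two example merge criteria (hypothetical):
concat = lambda parts: "\n\n".join(parts)   # merge sections into one document
best   = lambda parts: max(parts, key=len)  # select one output by a scoring rule

sections = ["Intro.", "A longer methods section.", "Results."]
document = fan_in(sections, concat)  # merge strategy
winner   = fan_in(sections, best)    # selection strategy
```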
Consensus (collective evaluation). All agents evaluate a shared artifact and must reach agreement before proceeding. The dependency is reciprocal — every agent's judgment influences every other agent's judgment. Code reviews, editorial boards, jury deliberation, and peer review all use consensus patterns. The benefit is quality: consensus catches errors and blind spots that any single agent would miss. The cost is time and the risk of deadlock — agents may not converge, especially when evaluation criteria are ambiguous.
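A consensus step needs a convergence rule stated up front, which a sketch can make concrete. The rule names and the `None` return for deadlock are assumptions for illustration:

```python
from collections import Counter

def consensus(judgments, rule="majority", tiebreaker=None):
    """Collective evaluation: every agent votes on a shared artifact.
    The convergence rule must be explicit, or the group can deadlock."""
    top, votes = Counter(judgments).most_common(1)[0]
    if rule == "unanimous":
        return top if votes == len(judgments) else None  # None = no convergence
    if rule == "majority":
        if votes > len(judgments) / 2:
            return top
        return tiebreaker(judgments) if tiebreaker else None
    raise ValueError(f"unknown rule: {rule}")

votes = ["approve", "approve", "revise"]
print(consensus(votes))               # majority reached: approve
print(consensus(votes, "unanimous"))  # deadlock: None
```

The deadlock risk shows up directly: under the unanimous rule, one dissenting vote returns `None`, and the workflow must decide in advance what happens next.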
These patterns compose. A real workflow is rarely a single pattern. It is a pipeline where one stage contains a fan-out/fan-in, followed by a consensus gate, feeding into another pipeline. The skill is not knowing the four patterns in isolation. It is recognizing which pattern operates at each junction in your workflow and whether it matches the dependency structure at that junction.
Transactive memory: how human teams learn their patterns
Cognitive science offers a powerful lens on why some teams coordinate well and others fumble despite using the same structural patterns.
Daniel Wegner introduced the concept of transactive memory systems in 1985, describing how groups develop shared knowledge not of the content itself, but of who knows what. A transactive memory system has three components: specialization (each member develops distinct expertise), credibility (members trust each other's expertise), and coordination (members can efficiently retrieve knowledge from the right person). Research across 76 empirical studies has demonstrated a strong positive relationship between transactive memory system development and team performance.
The connection to collaboration patterns is direct. A team with strong transactive memory does not need to run a consensus pattern on every decision, because members know whose expertise is relevant and can route decisions to the right specialist — effectively switching from expensive consensus to efficient pipeline or delegation patterns. A team without transactive memory defaults to consensus on everything, because nobody knows whom to trust with which decisions.
This is equally true for your internal cognitive agents. When you know which of your mental processes is reliable for which type of judgment — when you have a transactive memory system for your own cognitive infrastructure — you can route tasks to the right internal agent without convening an internal committee for every decision. Self-knowledge reduces coordination overhead by enabling pattern selection.
The AI parallel: orchestration patterns in multi-agent systems
The collaboration patterns you use to coordinate human teams and cognitive agents are the same patterns being engineered into AI multi-agent systems — often with the same names.
In modern LLM-based multi-agent architectures, the pipeline pattern appears as sequential chains where one agent's output becomes the next agent's prompt context. Microsoft's Azure Architecture Center documents this as the "chain" orchestration pattern — suitable when reasoning must happen in strict stages, each building on the previous result.
Fan-out appears as parallel dispatch, where a coordinator agent sends the same query or different subtasks to multiple specialist agents simultaneously. Google's Agent Development Kit implements this as the parallel pattern, where subagents work independently in their own context windows and relay findings back to a lead agent. Anthropic's production measurements show that a lead agent dispatching to specialized subagents achieves substantially better performance than a single agent working alone — the fan-out/fan-in pattern is not just an organizational convenience but a measurable performance improvement.
Consensus appears as voting or validation patterns, where multiple agents independently generate solutions and a synthesizer agent selects the best response or merges them. The Consensus-LLM research program demonstrated that language models can negotiate and align through structured multi-round deliberation, mimicking the same consensus dynamics that operate in human committees.
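A sketch of how these orchestration patterns compose in an LLM system. Everything here is hypothetical: `call_model` is a stub standing in for a real model API, and the specialist roles and prompts are invented for illustration, not drawn from any specific framework:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    """Stub for a real LLM API call; returns a placeholder answer."""
    return f"answer to: {prompt}"

def orchestrate(question: str, specialists: list[str]) -> str:
    # Fan-out: each specialist agent works the question in its own context.
    prompts = [f"As a {s}, answer: {question}" for s in specialists]
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(call_model, prompts))
    # Fan-in: a synthesizer agent merges or selects among the parallel answers.
    merged = "\n".join(answers)
    return call_model(f"Synthesize the best response from:\n{merged}")
```

The lead agent's final call is the fan-in step; in a voting variant, the synthesizer prompt would instead ask the model to select one answer against explicit criteria.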
The convergence is not coincidental. These patterns recur because they are solutions to the same dependency structures Thompson identified in 1967. Whether the agents are humans in an organization, cognitive processes in your mind, or language models in a software system, the dependency types are identical — and therefore the coordination patterns are identical.
Matching patterns to dependencies: the design protocol
Knowing the patterns is insufficient. You need a method for selecting the right pattern at each coordination point. Here is a four-step protocol.
Step 1: Map the tasks. List every discrete task in your workflow. A task is a unit of work that produces a defined output. Be specific — "research" is too vague; "find three peer-reviewed sources on coordination theory" is a task.
Step 2: Map the dependencies. For each pair of tasks, ask: does Task B require the output of Task A? If yes, the dependency is sequential — pipeline. If neither requires the other's output, the dependency is pooled — candidates for fan-out. If each requires the other's output iteratively, the dependency is reciprocal — requires mutual adjustment or consensus.
Step 3: Select the pattern. Sequential dependencies get pipeline patterns. Independent tasks get fan-out with a defined fan-in aggregation step. Reciprocal dependencies get consensus or iterative review patterns. Mixed dependencies get composed patterns — a pipeline where one stage contains a fan-out/fan-in.
Step 4: Define the handoff. For each pattern boundary, specify what gets passed. A pipeline handoff needs a defined output format. A fan-out needs a defined task specification for each branch. A fan-in needs explicit merge criteria. A consensus step needs defined evaluation criteria and a convergence rule (majority vote, unanimous agreement, designated tiebreaker).
This protocol is dependency-first design. You do not start with the pattern you prefer or the one that feels natural. You start with the dependency structure of the actual work, and the structure tells you which pattern fits.
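Steps 2 and 3 of the protocol can be sketched as a small classification function. The task names and the dependency table are hypothetical examples; the mapping from dependency type to pattern follows the protocol above:

```python
def classify(a_needs_b: bool, b_needs_a: bool) -> str:
    """Step 2: classify the dependency between a pair of tasks."""
    if a_needs_b and b_needs_a:
        return "reciprocal"
    if a_needs_b or b_needs_a:
        return "sequential"
    return "pooled"

# Step 3: dependency type -> pattern.
PATTERN = {
    "sequential": "pipeline",
    "pooled": "fan-out with a defined fan-in step",
    "reciprocal": "consensus or iterative review",
}

# Hypothetical workflow: editing requires the draft; the research tasks are independent.
pairs = {
    ("draft", "edit"): (False, True),             # edit requires draft's output
    ("research_A", "research_B"): (False, False), # neither needs the other
}
for (a, b), (a_needs_b, b_needs_a) in pairs.items():
    dep = classify(a_needs_b, b_needs_a)
    print(f"{a} / {b}: {dep} -> {PATTERN[dep]}")
```

Step 4, defining the handoff, stays outside the sketch: for each boundary the table identifies, you would still specify the output format, task specification, merge criteria, or convergence rule by hand.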
Why this matters for everything that follows
Collaboration patterns are the structural vocabulary of coordination. Without them, you are describing every workflow from scratch, unable to reuse solutions, unable to diagnose failures, unable to communicate design intent to collaborators.
With them, you can say: "This project is a pipeline with a fan-out at stage three, a consensus gate before stage four, and a pipeline to completion." That single sentence communicates the entire coordination architecture. More importantly, when something breaks, the pattern tells you where to look. A pipeline bottleneck means one stage is too slow. A fan-out failure means parallel branches had hidden dependencies. A consensus deadlock means evaluation criteria are ambiguous.
But patterns are not free. Every pattern carries coordination overhead — the cost of managing handoffs, maintaining shared context, resolving conflicts, and synchronizing agents. The next lesson, L-0514, addresses this directly: how to measure coordination cost, keep it proportional to the benefit, and recognize when the overhead of a pattern exceeds the value of the collaboration it enables.
You now have the structural vocabulary. Next, you learn the economics.
Sources:
- Thompson, J. D. (1967). Organizations in Action: Social Science Bases of Administrative Theory. McGraw-Hill.
- Malone, T. W., & Crowston, K. (1994). "The Interdisciplinary Study of Coordination." ACM Computing Surveys, 26(1), 87-119.
- Wegner, D. M. (1985). "Transactive Memory: A Contemporary Analysis of the Group Mind." In B. Mullen & G. R. Goethals (Eds.), Theories of Group Behavior. Springer-Verlag.
- DeChurch, L. A., & Mesmer-Magnus, J. R. (2010). "The Cognitive Underpinnings of Effective Teamwork: A Meta-Analysis." Journal of Applied Psychology, 95(1), 32-53.
- Microsoft Azure Architecture Center. (2025). "AI Agent Orchestration Patterns." Microsoft Learn.
- Google Developers Blog. (2025). "Developer's Guide to Multi-Agent Patterns in ADK."
- Guo, T., et al. (2025). "Multi-Agent Collaboration Mechanisms: A Survey of LLMs." arXiv:2501.06322.