Your agents are coordinating in the dark
In the previous lesson, you learned the difference between parallel and sequential agent execution — which agents can run simultaneously and which must wait for another's output before they begin. But that distinction quietly assumes something you may not have examined: that agents running in parallel or in sequence have access to the same information when they need it.
They usually do not.
Most people run multiple cognitive agents — planning routines, review processes, decision frameworks, tracking systems — that operate on entirely separate views of reality. Your weekly review does not see the data your daily journal collects. Your financial planning agent has no access to the goals your career agent set. Your health-tracking routine produces data that your productivity system never consults. Each agent is locally competent and globally blind. They produce good outputs in isolation and contradictory outputs in combination, because no one designed the information substrate they share.
This is the shared state problem. And it is not a minor optimization issue. It is the single largest source of coordination failure in any multi-agent system — whether that system is a software architecture, a team of humans, or the cognitive infrastructure inside your own head.
What shared state actually means
Shared state is the information that multiple agents can read from and write to during their operation. It is the common ground — the subset of reality that is visible to more than one agent and that each agent can update based on its own processing.
In software engineering, shared state is a precise technical concept. A database that two microservices both query is shared state. A message queue that one service writes to and another reads from is shared state. A configuration file that multiple processes depend on is shared state. The concept is simple. The design is not.
The reason shared state is difficult is that it creates coupling. When two agents share state, a change by one agent affects the other. This is simultaneously the entire point — you want agents to inform each other — and the primary source of failure. Uncontrolled shared state produces race conditions, stale reads, conflicting writes, and cascading failures. Controlled shared state produces coordination, coherence, and emergent intelligence that no individual agent could achieve alone.
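The lost-update failure is easy to show concretely. A minimal sketch (the field and agent names are illustrative): two agents each read the same shared value, compute from their private snapshot, and write back — the second write silently discards the first.

```python
state = {"tasks_done": 0}

def agent_step(snapshot):
    # Each agent computes from its own private snapshot of shared state.
    return snapshot + 1

# Uncontrolled: both agents read before either writes (a stale read).
a_view = state["tasks_done"]              # agent A reads 0
b_view = state["tasks_done"]              # agent B also reads 0
state["tasks_done"] = agent_step(a_view)  # A writes 1
state["tasks_done"] = agent_step(b_view)  # B writes 1, discarding A's update
print(state["tasks_done"])                # 1, not 2: a lost update

# Controlled: serialize each read-modify-write cycle.
state["tasks_done"] = 0
state["tasks_done"] = agent_step(state["tasks_done"])  # A: 0 -> 1
state["tasks_done"] = agent_step(state["tasks_done"])  # B: 1 -> 2
print(state["tasks_done"])                # 2: both updates survive
```

The same logic runs twice; only the discipline around reads and writes changes the outcome.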
The distinction between these outcomes is not luck. It is design. And the design principles that govern shared state in distributed computer systems turn out to be the same principles that govern shared knowledge in human teams and shared representations in cognitive systems.
Hutchins and the cockpit: cognition is already distributed
In 1995, cognitive anthropologist Edwin Hutchins published Cognition in the Wild, a detailed ethnographic study of how navigation teams on a U.S. Navy ship coordinate their work. His central finding upended a core assumption of cognitive science: that cognition happens inside individual heads.
Hutchins observed that the process of plotting a ship's course was not the work of any single mind. It was the work of a system — people, instruments, charts, verbal callouts, and physical procedures — that together computed a result no individual member could compute alone. The bearing taker read angles. The bearing recorder wrote them on a form. The plotter transferred those numbers to a chart. The navigator interpreted the resulting position. Each person was an agent with a specific role, and the information flowed between them through shared representational artifacts — the forms, the chart, the spoken numbers.
Hutchins later studied airline cockpits and found the same pattern. He noted that "speed bugs do not help pilots remember speeds; rather, they are part of the process by which the cockpit system remembers speeds." The shared state — the physical settings on instruments, the printed checklists, the verbal callouts between pilot and copilot — is not supplementary to cognition. It is where the cognition happens. Remove the shared representational medium and the system does not merely perform worse. It cannot perform at all.
This is the lesson for your own cognitive infrastructure. When you run multiple agents — planning, executing, reviewing, tracking — the intelligence of the overall system does not live inside any individual agent. It lives in the shared state between them. A planning agent that cannot see your energy data is not a slightly worse planner. It is a planner operating on an incomplete model of reality, guaranteed to produce plans that conflict with your actual capacity.
Transactive memory: knowing who knows what
Daniel Wegner proposed the concept of transactive memory in 1985, and it provides the second foundational insight about shared state. Wegner studied how couples develop shared memory systems and discovered something remarkable: long-term partners do not simply share the same memories. They develop a division of memorial labor. One partner remembers the social calendar. The other remembers the financial details. Each partner knows what the other knows — and that meta-knowledge, that awareness of where information lives, functions as a shared directory that makes the entire system smarter than either individual.
Wegner and colleagues demonstrated this experimentally. Couples who had been together for at least three months outperformed pairs of strangers at remembering information across categories — not because they had better individual memories, but because they had developed a transactive memory system with specialization (each partner stored different information), credibility (each partner trusted the other's domain expertise), and coordination (each partner knew how to access what the other stored).
This maps directly to the shared state problem. In a transactive memory system, the shared state is not a single pool of all information. It is a structured directory — a map of what information exists, where it lives, and how to access it. Each agent retains its own specialized state. The shared layer is the index, the routing table, the knowledge of which agent holds which piece of the picture.
When you design shared state for your own multi-agent system, this is the model to follow. You do not need every agent to see everything. You need every agent to know what other agents produce and where to find the outputs that are relevant to its own operation. The shared state is not a dump of raw data. It is a curated interface.
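A transactive directory can be sketched as an index from state keys to owning agents (every name below is hypothetical): the shared layer stores only the routing, while each agent keeps its own specialized outputs.

```python
# The shared layer is an index of who owns what, not a pool of raw data.
directory = {
    "priorities": "planner",
    "energy_level": "energy_tracker",
    "mood_score": "mood_tracker",
}

# Each agent retains its own specialized state.
outputs = {
    "planner": {"priorities": ["write report", "review PR"]},
    "energy_tracker": {"energy_level": 7},
    "mood_tracker": {"mood_score": 6},
}

def lookup(key):
    """Route a read to the agent that owns this piece of state."""
    owner = directory[key]
    return outputs[owner][key]

print(lookup("energy_level"))  # 7
```

Readers never touch another agent's internals; they consult the directory, find the owner, and read the published output — specialization plus coordination, in Wegner's terms.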
Shared mental models: the invisible coordination layer
Research on team cognition has identified a third mechanism by which shared state enables coordination: shared mental models. Cannon-Bowers, Salas, and their colleagues demonstrated through the 1990s and 2000s that high-performing teams share not just information but interpretive frameworks — common models of how the task works, what the goals are, and what each team member's role requires.
The critical finding is about implicit coordination. When team members share a mental model of the task and of each other's roles, they can anticipate what other members will need and provide it without being asked. A surgical nurse who shares the surgeon's mental model of the procedure hands over instruments before they are requested. A basketball point guard who shares the team's mental model of the play passes to where the forward will be, not where the forward is.
Shared mental models function as a form of shared state that is so deeply internalized it becomes invisible. The team does not feel like it is coordinating. It feels like effortless competence. But the underlying mechanism is the same: multiple agents operating on a common representation of reality, which allows them to produce coherent behavior without explicit step-by-step communication.
For your personal cognitive infrastructure, shared mental models manifest as alignment between your agents' assumptions. When your planning agent and your execution agent share the same model of what "done" means, what priority order looks like, and what constraints apply, they coordinate seamlessly. When they do not — when your planner defines a task one way and your executor interprets it another — you get the internal friction that feels like procrastination but is actually a shared state failure.
The AI parallel: state management in multi-agent systems
If you work with AI systems, shared state is not a metaphor. It is the central design challenge of multi-agent architectures.
In modern multi-agent AI systems — whether built on frameworks like LangGraph, CrewAI, or AutoGen — shared state is the mechanism through which agents coordinate. Google's Agent Development Kit, Microsoft's Azure agent patterns, and Anthropic's multi-agent architectures all converge on the same principle: the environment acts as a common workspace where agents read and write shared state, exchange messages or artifacts, and observe the outcomes of actions.
The engineering patterns that have emerged are instructive. State representation defines what the system knows at any point — task status, intermediate outputs, tool responses, historical decisions. Each agent reads from and writes to specific fields in this shared state. The orchestrator manages access, resolves conflicts, and ensures consistency. Without structured state management, multi-agent systems do not degrade gracefully. They become unstable, produce contradictory outputs, and consume resources resolving conflicts that should never have arisen.
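In code, such a state representation might be sketched as a plain record — not any particular framework's API, just the shape these patterns converge on, with illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class SharedState:
    # What the system knows at any point in the run.
    task_status: str = "pending"                              # overall progress
    intermediate_outputs: dict = field(default_factory=dict)  # per-agent artifacts
    decision_log: list = field(default_factory=list)          # historical decisions

state = SharedState()
state.intermediate_outputs["researcher"] = "three candidate sources found"
state.decision_log.append("planner: research before drafting")
state.task_status = "in_progress"
print(state.task_status)  # in_progress
```

Each agent reads and writes specific fields of one record; an orchestrator holding this object is what mediates access and resolves conflicts.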
The parallel to human cognitive systems is exact. Your daily state — the sum of what your various cognitive agents know about today's priorities, energy levels, commitments, and constraints — is the equivalent of a multi-agent system's shared context. When that state is well-defined, explicitly maintained, and accessible to all relevant agents, the system coordinates. When it is implicit, fragmented, or inconsistent, every agent makes locally rational decisions that produce globally incoherent behavior.
Research published in 2025 found that agents equipped with awareness of other agents' goals and constraints reduced coordination failures by up to 36 percent in complex collaborative tasks. The mechanism was not better individual reasoning. It was better shared state — each agent had access to information about what other agents were doing and why.
The protocol: designing shared state for your system
Building effective shared state requires four design decisions.
First, define what is shared. Not everything should be. List your active agents and, for each one, identify the outputs that other agents need. Your planning agent's priority list is shared. Its internal reasoning about why it ranked items that way probably is not. The principle is minimum viable shared state — share what is necessary for coordination and nothing more.
Second, define the format. Shared state must be legible to every agent that reads it. If your planning agent produces a narrative paragraph but your calendar agent needs structured time blocks, the format mismatch means the state is technically shared but practically useless. Choose a representation that every reading agent can parse without translation.
Third, define the access pattern. For each agent, specify what it reads and what it writes. This prevents the most common shared state failure: two agents writing to the same field with conflicting values. If your energy tracker and your mood tracker both write to a "status" field, you have a write conflict. If the energy tracker writes to "energy_level" and the mood tracker writes to "mood_score," you have clean separation with shared visibility.
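That read/write contract can be made executable. A minimal sketch (agent and field names are illustrative): each agent declares the fields it may write, and a small mediator rejects anything outside that scope.

```python
# Each agent's declared write scope.
ACCESS = {
    "energy_tracker": {"energy_level"},
    "mood_tracker": {"mood_score"},
    "planner": {"priorities"},
}

state = {}

def write(agent, key, value):
    # Reject writes outside the agent's declared scope.
    if key not in ACCESS[agent]:
        raise PermissionError(f"{agent} may not write {key!r}")
    state[key] = value

write("energy_tracker", "energy_level", 7)
write("mood_tracker", "mood_score", 6)
# write("mood_tracker", "energy_level", 3)  # raises PermissionError
```

Every agent can still read every field; only writes are partitioned, which is exactly the clean-separation-with-shared-visibility pattern described above.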
Fourth, define the refresh cadence. Shared state that updates in real time creates different coordination dynamics than shared state that updates daily. Your agents do not all need the same freshness. Your calendar agent might need to read today's priorities once each morning. Your energy tracker might need to write after every work block. Match the update frequency to the coordination requirement.
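One way to make cadence explicit — a sketch, assuming per-reader freshness budgets rather than any specific scheduler — is to timestamp every write and let each reader check staleness against its own requirement:

```python
import time

state = {}  # key -> (value, written_at)

def write(key, value):
    # Record when each shared field was last updated.
    state[key] = (value, time.time())

def read(key, max_age_seconds):
    # Each reader brings its own freshness requirement.
    value, written_at = state[key]
    if time.time() - written_at > max_age_seconds:
        raise ValueError(f"{key!r} is too stale for this reader")
    return value

write("energy_level", 7)
# A morning planner tolerates day-old data; a scheduler might demand two hours.
print(read("energy_level", max_age_seconds=24 * 3600))  # 7
```

The writer does not need to know who reads it or how often; the freshness check lives with the reader, where the coordination requirement actually exists.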
A practical starting point: create a single document — digital or physical — that serves as the shared state for your three most important agents. Give it defined sections, one per agent's output. Review it at the transition points between agents. You have just built the simplest possible shared state layer.
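In digital form, that starter document might be sketched as nothing more than a dict with one section per agent (the agents and fields below are placeholders for your own):

```python
# One document, one section per agent's output.
daily_state = {
    "planner": {"priorities": ["draft report", "review budget"]},
    "energy_tracker": {"energy_level": None},  # written after each work block
    "reviewer": {"notes": []},
}

# Transition point: the executing agent reads the planner's section...
next_task = daily_state["planner"]["priorities"][0]
# ...and writes its observation where the reviewer will look for it.
daily_state["reviewer"]["notes"].append(f"started: {next_task}")
print(next_task)  # draft report
```

The structure is deliberately trivial: defined sections are the format, section ownership is the access pattern, and reviewing at transition points is the refresh cadence.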
From shared state to communication protocols
Shared state solves the visibility problem. Every relevant agent can see what it needs to see. But visibility alone does not solve the coordination problem entirely. Seeing another agent's output is necessary. Knowing how to interpret it, when to read it, and what to do when it conflicts with your own output — that requires communication protocols, which are the subject of the next lesson.
The progression is deliberate. Lesson L-0505 established that agents can run in parallel or in sequence. This lesson established that parallel or sequential agents need a shared information substrate to coordinate effectively. The next lesson, L-0507, will establish the rules by which agents exchange information through that substrate — the protocols that turn raw shared state into reliable coordination.
Without shared state, your agents are independent processes that happen to live in the same system. With shared state, they become a coordinated system capable of behavior that no individual agent could produce. The intelligence is not in the agents. It is in the connections between them. And those connections are made of shared information, deliberately designed.
Sources:
- Hutchins, E. (1995). Cognition in the Wild. MIT Press.
- Wegner, D. M. (1985). "Transactive Memory: A Contemporary Analysis of the Group Mind." In B. Mullen & G. R. Goethals (Eds.), Theories of Group Behavior. Springer-Verlag.
- Wegner, D. M., Erber, R., & Raymond, P. (1991). "Transactive Memory in Close Relationships." Journal of Personality and Social Psychology, 61(6), 923-929.
- Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). "Shared Mental Models in Expert Team Decision Making." In N. J. Castellan (Ed.), Individual and Group Decision Making. Lawrence Erlbaum Associates.
- DeChurch, L. A., & Mesmer-Magnus, J. R. (2010). "Measuring Shared Team Mental Models: A Meta-Analysis." Group Dynamics: Theory, Research, and Practice, 14(1), 1-14.
- Google Developers. (2025). "Developer's Guide to Multi-Agent Patterns in ADK." Google Developers Blog.
- Microsoft. (2025). "AI Agent Orchestration Patterns." Azure Architecture Center.