Core Primitive
When team members share the same understanding of the situation, they coordinate naturally — without constant explicit communication.
The coordination that requires no words
Janis Cannon-Bowers and Eduardo Salas, working with military teams in the early 1990s, documented a phenomenon that changed how researchers understood team performance. They studied fighter pilot crews, surgical teams, and Navy combat information center teams, and found that the highest-performing teams shared a striking characteristic: they communicated less during peak performance, not more. When the situation was most demanding, the best teams went quiet — not because they stopped coordinating, but because their coordination had become implicit. Each member anticipated what others needed and provided it without being asked (Cannon-Bowers et al., 1993).
Cannon-Bowers and Salas called the mechanism "shared mental models" — overlapping cognitive representations of the task, the team, the situation, and the equipment that allow members to predict each other's behavior and adjust their own accordingly. When two engineers share a mental model of how the deployment pipeline works, they do not need to discuss who handles the database migration and who monitors the health checks. Each knows what the other will do. The coordination is automatic — not because it is unconscious but because the cognitive work was done in advance, when the model was built and aligned.
This lesson examines what shared mental models are, how they form, why they diverge, and how you can deliberately build and maintain them in your teams.
The four models that matter
Research on shared mental models has identified four distinct types, each serving a different coordination function. Teams need all four, but most teams only build one or two explicitly — leaving the others to form accidentally or not at all.
The task model describes what the team is trying to accomplish and how. It includes the sequence of steps, the relationships between subtasks, the priorities when resources are constrained, and the criteria for success. When team members share a task model, they agree on what "done" looks like and what order things should happen in. When they do not, they discover the disagreement at delivery time — the most expensive possible moment (Mathieu et al., 2000).
The team model describes who does what, who knows what, and who has authority over what. It includes role definitions, capability assessments, and the communication patterns that connect roles. Daniel Wegner's transactive memory system — the meta-knowledge of "who knows what" — is a critical component of the team model. When team members share a team model, they route questions to the right person automatically. When they do not, questions go to the loudest person, the most senior person, or nowhere at all (Wegner, 1987).
The situation model describes the current state of the environment — what is happening now, what caused it, and what is likely to happen next. In incident response, the situation model is the shared understanding of the outage: what is broken, what is affected, what has been tried, and what the blast radius might be. When team members share a situation model, they make compatible decisions without consulting each other. When they do not, they work at cross-purposes — one engineer rolling back a change that another is trying to hotfix.
The equipment model describes the tools, systems, and resources the team works with — how they function, what their limitations are, and how they interact. In engineering teams, the equipment model includes the architecture of the system under management, the capabilities and quirks of the deployment tools, and the constraints of the infrastructure. When engineers share an equipment model, they predict system behavior consistently. When they do not, each operates with a different (and often wrong) theory of how the system works.
How models form — and drift
Shared mental models form through three mechanisms, each with different characteristics.
Direct instruction produces models quickly but with limited depth. Onboarding documents, training sessions, and architectural overviews transmit explicit knowledge but often fail to capture the tacit understanding that experienced team members hold. The new engineer reads the deployment guide and builds a model of the deployment pipeline. But the guide does not mention that the staging environment is unreliable on Mondays because of a batch job, or that the senior engineer always checks the cache invalidation logs before declaring a deploy healthy. The explicit model is a skeleton. The full model requires exposure.
Shared experience produces models with depth but slowly. Working together on projects, incidents, and decisions gradually aligns mental models as team members observe each other's behavior, hear each other's reasoning, and discover each other's assumptions. Salas and colleagues found that shared experience is the strongest predictor of shared mental model quality — but it requires time and, crucially, reflection. Two engineers who work together for a year without ever discussing their models may accumulate shared experience without building shared models. The experience creates the raw material. Deliberate conversation builds the model (Salas et al., 2005).
Externalization and calibration produces models deliberately and efficiently. This is the designed approach: the team explicitly articulates its models (in documents, diagrams, or discussions), compares them across members, identifies divergences, and resolves them. The exercise for this lesson uses this approach — interviewing team members independently to surface their individual models, then comparing them to find alignment and divergence.
Models drift for predictable reasons. Personnel changes introduce members whose models were formed elsewhere. System changes invalidate equipment models that were accurate last quarter. Strategy changes shift task models without explicitly updating the team. And time itself erodes models: knowledge that was vivid during a crisis becomes hazy six months later, and the shared understanding that was precise after a team offsite becomes vague as daily pressures crowd it out.
The cost of misaligned models
Kathleen Sutcliffe and Karl Weick, studying high-reliability organizations (aircraft carriers, nuclear power plants, wildfire fighting teams), found that the single most common precursor to organizational failure was not individual error but "a failure of collective mind" — a breakdown in the shared understanding that allows team members to coordinate without explicit communication. In every case they studied, the information needed to prevent the disaster was available somewhere in the system. The collective cognitive architecture failed to surface it, integrate it, and act on it (Weick & Sutcliffe, 2007).
The cost of misaligned models is not always catastrophic. More often, it is chronic and invisible: the meeting that runs thirty minutes long because participants have different models of what the meeting is supposed to accomplish. The code review that produces friction because the reviewer and the author have different models of what "production-ready" means. The sprint that ends with unfinished work because the team had different models of how much effort each story required. Each of these costs is small. Their accumulation is enormous.
One study of software development teams estimated that misaligned mental models account for approximately 30-40% of coordination overhead — the time spent clarifying, renegotiating, and reworking that would be unnecessary if the team's models were aligned (Espinosa et al., 2007). In a team of seven engineers working fifty-hour weeks, that is over one hundred hours per week of collective effort consumed by alignment that should be automatic.
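The arithmetic behind that figure can be checked directly (the team size and weekly hours are the illustrative numbers from the paragraph above, not data from the study itself):

```python
# Illustrative check of the coordination-overhead estimate above.
# Inputs: 7 engineers, 50-hour weeks, 30-40% overhead (Espinosa et al., 2007).
engineers = 7
hours_per_week = 50
total_hours = engineers * hours_per_week  # 350 collective hours per week

low_rate, high_rate = 0.30, 0.40
overhead_low = total_hours * low_rate    # ~105 hours
overhead_high = total_hours * high_rate  # ~140 hours
print(f"Coordination overhead: {overhead_low:.0f}-{overhead_high:.0f} hours/week")
```

At 105-140 hours, even the low end of the range exceeds the "over one hundred hours per week" cited in the text.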
Building shared models deliberately
The most effective teams do not wait for shared mental models to emerge from accumulated experience. They build them deliberately, through practices that make individual models explicit and create structured opportunities for alignment.
Model externalization sessions. Periodically — at project kickoffs, after major changes, or when new members join — the team conducts a session where each member articulates their understanding of one or more of the four model types. The articulations are compared, divergences are identified, and the team converges on a shared model that is documented and made accessible. The documentation is not bureaucratic overhead. It is cognitive infrastructure — the team-level equivalent of the external knowledge systems you built in Phases 3-4.
Pre-briefs and debriefs. Military teams use pre-mission briefs and post-mission debriefs not merely to plan and review but to align and update mental models. The pre-brief asks: "What do we expect to happen, and who will do what?" The debrief asks: "What actually happened, and where did our expectations diverge from reality?" The divergence is the data. It shows where the team's shared models are inaccurate and need updating (Salas et al., 2005).
Simulation and rehearsal. Tabletop exercises — structured walkthroughs of hypothetical scenarios — build shared models by forcing the team to articulate how they would respond to specific situations. A tabletop exercise for an incident response team might walk through a scenario: "The primary database becomes unresponsive at 3 AM on a Saturday. Who gets paged? What are the first three diagnostic steps? Who has authority to fail over to the secondary?" The team's answers reveal their models — and any divergences are resolved before the 3 AM page arrives for real.
Artifacts as model carriers. Architecture diagrams, runbooks, decision trees, and process documents are not just reference materials. They are externalized mental models that serve as alignment anchors. When a new team member reads the architecture diagram, they are not just learning how the system works. They are adopting the team's shared model of how the system works. When the diagram is updated after a change, the team's shared model is updated simultaneously.
The Third Brain
Your AI system can serve as a shared mental model facilitator and auditor. Before a team alignment session, ask each team member to describe their understanding of a process, architecture, or role structure to the AI independently (via text or voice transcript). Then ask the AI to compare the descriptions and produce a "model alignment report" — a summary of where the descriptions converge (shared model), where they diverge (alignment gap), and where they contradict (conflict that needs resolution).
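A crude first pass at the comparison step requires no AI at all. The sketch below (an assumption-laden illustration: it treats each member's description as plain text and compares vocabulary, which only approximates comparing models) flags terms that some members mention and others do not, as candidate alignment gaps worth asking about:

```python
# Minimal sketch of a "model alignment report" pre-pass.
# Assumes each member's description is plain text; a real comparison would
# be semantic (e.g. done by an AI assistant). This only flags vocabulary gaps.
import re

def alignment_report(descriptions: dict[str, str], min_len: int = 5) -> dict:
    """Compare which substantive terms each member's description uses."""
    terms = {
        member: {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= min_len}
        for member, text in descriptions.items()
    }
    shared = set.intersection(*terms.values())       # everyone mentions these
    all_terms = set.union(*terms.values())
    gaps = {t: [m for m, s in terms.items() if t in s]
            for t in all_terms - shared}             # term -> who mentions it
    return {"shared": sorted(shared), "gaps": gaps}

report = alignment_report({
    "alice": "Deploys go through staging, then canary, then production rollout.",
    "bob": "Deploys go straight to production after staging checks pass.",
})
print(report["shared"])        # vocabulary all members use
print(sorted(report["gaps"]))  # terms only some members mention
```

Here "canary" appears in Alice's model but not Bob's, which is exactly the kind of divergence the alignment session should surface: does the team's deployment model include a canary stage or not?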
The AI can also maintain an evolving team model document. After each significant team event — a project launch, an incident, a reorganization — share a brief summary with the AI and ask it to update the team's model documentation. Over time, the AI maintains a living representation of the team's cognitive architecture that is more current than any static document could be.
For ongoing monitoring, ask the AI to review meeting notes, decision records, and retrospective outputs for signs of model drift — instances where team members appear to be operating from different assumptions. Early detection of drift prevents the costly misalignment that surfaces only during crises or deadlines.
From shared models to visible thinking
Shared mental models are the invisible infrastructure of team coordination. When they are aligned, coordination is cheap and natural. When they diverge, coordination is expensive and fragile. The difference between a team that flows and a team that grinds is usually not skill or motivation but model alignment — the degree to which team members share the same map of the task, the team, the situation, and the tools.
But shared mental models, by their nature, are invisible. You cannot see a mental model. You can only infer it from behavior — and by the time behavior reveals a misalignment, the cost has already been incurred. The next lesson, Making team thinking visible, addresses this problem directly through externalization practices that surface the models, assumptions, and reasoning that normally operate below the threshold of collective awareness.
Sources:
- Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). "Shared Mental Models in Expert Team Decision Making." In N. J. Castellan, Jr. (Ed.), Individual and Group Decision Making. Lawrence Erlbaum.
- Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). "The Influence of Shared Mental Models on Team Process and Performance." Journal of Applied Psychology, 85(2), 273-283.
- Wegner, D. M. (1987). "Transactive Memory: A Contemporary Analysis of the Group Mind." In B. Mullen & G. R. Goethals (Eds.), Theories of Group Behavior. Springer.
- Salas, E., Sims, D. E., & Burke, C. S. (2005). "Is There a 'Big Five' in Teamwork?" Small Group Research, 36(5), 555-599.
- Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty (2nd ed.). Jossey-Bass.
- Espinosa, J. A., Slaughter, S. A., Kraut, R. E., & Herbsleb, J. D. (2007). "Familiarity, Complexity, and Team Performance in Geographically Distributed Software Development." Organization Science, 18(4), 613-630.
Frequently Asked Questions