Four questions for 30-minute coordination reviews: who produced, did output reach consumers, what's the coordination ratio, where did agents interfere?
During 30-minute coordination reviews, answer four diagnostic questions with evidence: which agents produced output, did outputs reach intended consumers, what was the coordination-to-work time ratio, and where did agents actively interfere.
Why This Is a Rule
Unstructured coordination reviews drift into anecdotes: "Things felt pretty good this week." Structured reviews with specific diagnostic questions produce actionable findings. The four questions cover the four failure modes of multi-agent coordination systems, each answered with evidence rather than impression.
- "Which agents produced output?" identifies dead agents — those installed but not firing. Dead agents clutter the system and consume mental modeling effort without producing value.
- "Did outputs reach intended consumers?" identifies handoff failures (Specify the information contract at every agent handoff — what exact output does the next step need from the current one?) — the most common and most costly coordination failure, where work is produced but never reaches the process that needs it.
- "What was the coordination-to-work ratio?" tracks coordination overhead against the budget (Cap coordination at 15-25% of total hours — new coordination mechanisms must fit within budget or displace existing ones). A climbing ratio means the system is becoming coordination-heavy.
- "Where did agents actively interfere?" identifies negative interactions (Assess agent ecosystem health by checking three pair-level failures: conflicting outputs, throughput mismatches, and resource competition) where agents make each other worse rather than working independently.
The 30-minute time box prevents review bloat: enough time for evidence-based answers to four questions, not enough to devolve into unfocused discussion.
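The coordination-to-work ratio from the third question is a simple division checked against the budget cap. A minimal sketch (function names are mine; only the 15-25% band comes from the rule text):

```python
def coordination_ratio(coordination_hours: float, total_hours: float) -> float:
    """Fraction of total productive time spent on coordination
    (handoffs, sync, deliberation between agents)."""
    if total_hours <= 0:
        raise ValueError("total_hours must be positive")
    return coordination_hours / total_hours

def over_budget(ratio: float, cap: float = 0.25) -> bool:
    """True when the ratio exceeds the upper bound of the 15-25% budget."""
    return ratio > cap
```

For example, 6 coordination hours in a 40-hour week gives a ratio of 0.15, at the low end of the budget; 12 hours gives 0.30, over the cap.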
When This Fires
- During scheduled coordination reviews (weekly or biweekly for active multi-agent systems)
- When the system "isn't producing" but you can't pinpoint why
- When coordination overhead feels high but you need data rather than impressions
- Complements the agent system maintenance rule (Agent system review is essential maintenance, not optional improvement — ask: did it fire? Did it work? Has context changed?) by supplying the specific review protocol
Common Failure Mode
Answering questions with impressions rather than evidence: "I think most agents produced output" (impression) vs. "Writing agent: 4 sessions, 2,100 words. Review agent: 0 sessions — didn't fire. Exercise agent: 5 sessions." (evidence). Impressions are systematically biased toward the agents that produced the most recent or most visible output, missing the dead and failing agents that need attention most.
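Evidence here is just per-agent counts. A minimal record, using the agent names and session counts from the example above (the dict shape and output descriptions are my own illustration):

```python
# Per-agent evidence for the review period (names and session counts
# from the example above; output descriptions are illustrative).
activity = {
    "writing_agent": {"sessions": 4, "output": "2,100 words"},
    "review_agent": {"sessions": 0, "output": None},  # didn't fire
    "exercise_agent": {"sessions": 5, "output": "session logs"},
}

# Dead agents fall out of the data; an impression would have missed them.
dead_agents = [name for name, rec in activity.items() if rec["sessions"] == 0]
```

With data like this, "which agents produced output?" stops being a judgment call and becomes a lookup.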
The Protocol
1. Schedule 30-minute coordination reviews at a regular cadence.
2. Answer four questions with evidence:
   - Q1: Which agents produced output? List each agent and its output for the period. Flag agents with zero output.
   - Q2: Did outputs reach consumers? For each output-producing agent, verify its output was consumed by the intended downstream process. Flag handoff failures.
   - Q3: Coordination-to-work ratio? Sum time spent on coordination (handoffs, sync, deliberation between agents) and divide by total productive time. Compare to the budget (Cap coordination at 15-25% of total hours — new coordination mechanisms must fit within budget or displace existing ones).
   - Q4: Where did agents interfere? Identify instances where one agent's operation actively degraded another's performance.
3. For each finding, assign a specific action (revive, fix handoff, reduce coordination, resolve interference).
4. Track findings across reviews to identify trends.
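The review pass itself can be sketched as one function over logged evidence. This is a sketch under assumptions: all names, field shapes, and log formats are hypothetical; only the 25% cap comes from the coordination budget in the text.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    question: str  # which of Q1-Q4 surfaced this
    evidence: str  # evidence-backed observation, not an impression
    action: str    # revive / fix handoff / reduce coordination / resolve interference

def run_review(sessions, handoffs, coord_hours, total_hours, interferences):
    """One evidence-based coordination review pass.

    sessions: {agent_name: session_count for the period}
    handoffs: {producer_name: True if its output was consumed downstream}
    interferences: list of (agent_a, agent_b, description) tuples
    """
    findings = []
    # Q1: which agents produced output? Flag agents with zero output.
    for name, count in sessions.items():
        if count == 0:
            findings.append(Finding("Q1", f"{name}: 0 sessions", "revive or remove"))
    # Q2: did outputs reach consumers? Flag handoff failures.
    for producer, consumed in handoffs.items():
        if not consumed:
            findings.append(Finding("Q2", f"{producer}: output never consumed", "fix handoff"))
    # Q3: coordination-to-work ratio, compared to the budget cap.
    ratio = coord_hours / total_hours
    if ratio > 0.25:
        findings.append(Finding("Q3", f"ratio {ratio:.0%} exceeds the 25% cap", "reduce coordination"))
    # Q4: where did agents actively interfere?
    for a, b, desc in interferences:
        findings.append(Finding("Q4", f"{a} / {b}: {desc}", "resolve interference"))
    return findings
```

Each finding carries its own action, which makes step 3 mechanical and gives step 4 a uniform record to trend across reviews.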