Define non-overlapping agent scopes — scope collisions are architecture problems, not willpower failures
Design cognitive agents with non-overlapping scopes by defining exactly which situations, resources, or decisions each agent claims authority over, treating scope collisions as architecture problems requiring boundary redefinition rather than willpower failures.
Why This Is a Rule
When two behavioral agents claim authority over the same situation, you experience internal conflict — the feeling of "I don't know what I should do right now" when both "deep work from 9-11" and "respond to messages within 2 hours" apply to a 10 AM message. This conflict is typically experienced as a willpower problem ("I can't decide"), but it's actually an architecture problem: two agents have overlapping scopes, and the system has no resolution mechanism.
The companion rule "Resolve inter-agent conflicts with documented priority hierarchies — case-by-case deliberation defeats the purpose of automation" handles conflicts after they arise. This rule goes upstream: prevent conflicts by designing non-overlapping scopes from the start. If "deep work" owns 9-11 and "message response" owns all other working hours, there is no collision: each moment belongs to exactly one agent's jurisdiction.
The scope definition must be explicit: which situations, resources, or decisions does each agent claim authority over? "My exercise agent owns health-related physical activity decisions" is too vague — does it own the decision to take stairs vs. elevator? Does it own the decision to walk to a meeting vs. drive? Explicit scope prevents ambiguity: "My exercise agent owns scheduled workout sessions. Incidental physical activity is not in scope."
When This Fires
- When designing a multi-agent behavioral system with 5+ active agents
- When you experience recurring "what should I do right now?" conflicts between agents
- When adding a new agent to an existing system — verify its scope doesn't overlap with existing agents
- When the priority hierarchy from "Resolve inter-agent conflicts with documented priority hierarchies — case-by-case deliberation defeats the purpose of automation" keeps getting invoked: repeated escalation is a sign the underlying scopes need separation
Common Failure Mode
Defining agent scopes by topic instead of by situation: "My health agent handles health things." What does "health things" mean at 2 PM on a Tuesday when you're at your desk? Health agent says stand up; productivity agent says finish the task. The scopes overlap because "health things" and "productivity things" aren't mutually exclusive situations. Define by concrete situations: "Health agent owns: scheduled workout sessions, food preparation decisions, and the 2:30 PM stretch break. Productivity agent owns: all desk-work time between scheduled breaks."
The Protocol
1. List all active agents.
2. For each agent, write its explicit scope: "This agent has authority over [specific situations/decisions/resources]."
3. Check for overlaps: are there any situations where two agents both claim authority?
4. For each overlap, redraw boundaries so each situation belongs to exactly one agent. Techniques: temporal partitioning (agent A owns mornings, agent B owns afternoons), situational partitioning (agent A owns desk work, agent B owns physical spaces), or resource partitioning (agent A owns time allocation, agent B owns energy allocation).
5. When a new agent is added, verify its proposed scope doesn't overlap with existing agents before installation.
6. Treat any experienced "conflict between agents" as a scope-overlap diagnosis: fix the boundaries rather than trying harder to choose between conflicting agents.