Every agent you've ever built has three parts
You already use agents. You use them every morning when your alarm goes off and you check whether it is a weekday before getting out of bed. You use them when a Slack notification appears and you glance at the channel name before deciding whether to open it. You use them when someone asks "do you have a minute?" and you assess your current workload before answering.
Each of these follows the same structure, whether you have made it explicit or not. There is a trigger that activates it — a situational cue in the environment. There is a condition that validates it — a check that determines whether the response is appropriate right now. And there is an action it takes — the concrete behavior that follows.
This three-part anatomy is not a productivity hack. It is the fundamental structure of every cognitive agent, from the simplest habit loop in your morning routine to the most sophisticated production rule in a cognitive architecture. Once you see it, you cannot unsee it — and once you can decompose any behavior into these three components, you gain the ability to design, debug, and replace the agents that run your life.
The research: implementation intentions
Peter Gollwitzer introduced the concept of implementation intentions in 1999, and the evidence behind them is among the most robust in behavioral psychology. An implementation intention takes the form: "When situation X arises, I will perform response Y." It is not a goal ("I want to exercise more"). It is a specification of the exact trigger-action link that turns the goal into behavior.
A meta-analysis by Gollwitzer and Sheeran (2006) examined 94 studies involving 8,461 participants and found that forming implementation intentions produced a medium-to-large effect size of d = 0.65 on goal completion rates. That is not a marginal improvement. It means people who specify their triggers and actions are reliably and substantially more likely to follow through than people who merely hold the intention.
But Gollwitzer's original formulation is a two-part structure: when X, then Y. The three-part model — trigger, condition, action — adds the validation layer that makes agents robust. Consider the difference:
- Two-part: "When I feel stressed, I will take a walk."
- Three-part: "When I feel stressed, if I have at least 15 minutes before my next commitment, I will take a walk."
The condition is what prevents an agent from firing in the wrong context. Without it, your stress-walk agent activates five minutes before a client presentation, and now you have a different problem. The condition is the gate that turns a reflex into a judgment.
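The difference between the two structures can be sketched in code. This is a toy example: the 15-minute threshold comes from the rule above, but the function names and the `minutes_free` parameter are illustrative.

```python
def two_part_agent(stressed: bool) -> str:
    # "When I feel stressed, I will take a walk." No validation gate.
    return "take a walk" if stressed else "do nothing"

def three_part_agent(stressed: bool, minutes_free: int) -> str:
    # Trigger: feeling stressed.
    if not stressed:
        return "do nothing"
    # Condition: at least 15 minutes before the next commitment.
    if minutes_free < 15:
        return "do nothing"  # gate closed: wrong context for this action
    # Action: the concrete behavior.
    return "take a walk"
```

Five minutes before the client presentation, `two_part_agent(True)` still says to take a walk; `three_part_agent(True, 5)` does not. The condition is the only difference.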
The cognitive architecture: production systems
This three-part structure is not just a behavioral design tool. It is how cognitive scientists model the fundamental unit of procedural knowledge in the human mind.
John Anderson's ACT-R (Adaptive Control of Thought-Rational), developed at Carnegie Mellon University beginning in the 1980s, is one of the most influential cognitive architectures in psychology. At its core, ACT-R models all procedural knowledge — everything you know how to do — as production rules: IF-THEN condition-action pairs stored in procedural memory. At each cognitive cycle, a pattern matcher scans the current state of the system's buffers and selects a production rule whose conditions match. That rule fires, executing its action, which modifies the system's state and sets up the next cycle.
This is not a metaphor for how thinking works. It is a computational model that has successfully predicted human behavior across hundreds of experiments — from arithmetic to language learning to driving. The production rule is the atomic unit of cognitive skill. And every production rule has the same anatomy: a condition that must be satisfied and an action that executes when it is.
What ACT-R adds to the behavioral picture is the concept of conflict resolution — when multiple production rules match the current situation, the architecture must select one. This is why your agents need well-specified conditions. Vague conditions mean multiple agents match the same trigger, and your cognitive system has to waste resources resolving the conflict. Sharp conditions mean the right agent fires cleanly.
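A toy version of that match-select-fire cycle, assuming a dictionary for the system's state. Real ACT-R resolves conflicts with learned utility values; this sketch uses condition specificity instead, which is the classic production-system heuristic and enough to show why sharp conditions win.

```python
# Each rule is a condition-action pair, as in a production system.
state = {"stressed": True, "minutes_free": 30}

rules = [
    {   # vague condition: matches too many situations
        "name": "walk-if-stressed",
        "condition": lambda s: s["stressed"],
        "specificity": 1,
        "action": lambda s: "take a walk",
    },
    {   # sharper condition: includes the validation check
        "name": "walk-if-stressed-and-free",
        "condition": lambda s: s["stressed"] and s["minutes_free"] >= 15,
        "specificity": 2,
        "action": lambda s: "take a 15-minute walk",
    },
]

def cycle(state, rules):
    # Pattern matcher: collect every rule whose condition holds.
    matched = [r for r in rules if r["condition"](state)]
    if not matched:
        return None
    # Conflict resolution: prefer the rule with the sharpest condition.
    winner = max(matched, key=lambda r: r["specificity"])
    return winner["action"](state)
```

Both rules match the state above, so the architecture has to resolve the conflict before anything fires; the sharper rule wins.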
The behavior design parallel: Fogg's model
BJ Fogg, founder of the Behavior Design Lab at Stanford, arrived at a convergent framework from a different direction. The Fogg Behavior Model (B = MAP) states that behavior occurs when three elements converge simultaneously: Motivation, Ability, and a Prompt.
The prompt is the trigger — the cue in the environment that initiates the behavior. Without it, nothing happens regardless of how motivated or capable you are. Fogg identifies three types of prompts: sparks (which boost motivation), facilitators (which increase ability), and signals (which simply remind). But the key insight is the same: without a specific, identifiable cue, behavior does not reliably occur.
Where Fogg's model maps onto the three-component agent structure is in the convergence requirement. Motivation and ability together function as the condition — the validation check. A prompt (trigger) fires, but the behavior only occurs if motivation and ability are both above the activation threshold at that moment. This is why the same trigger produces different outcomes on different days. Your phone buzzes (trigger), but whether you check it depends on whether you are in a meeting (ability constraint) and whether you are expecting an important message (motivation). The condition is doing the filtering.
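One common way to formalize the convergence requirement is multiplicatively. This is a sketch, not Fogg's own quantitative model: his action line is a qualitative trade-off curve, and the scales and threshold here are illustrative.

```python
def behavior_occurs(prompt: bool, motivation: float, ability: float,
                    threshold: float = 1.0) -> bool:
    # No prompt, no behavior, regardless of motivation or ability.
    if not prompt:
        return False
    # Motivation and ability together act as the condition: their
    # combination must clear the activation threshold right now.
    return motivation * ability >= threshold
```

The same prompt with the same motivation can land on either side of the threshold depending on ability in the moment, which is exactly the different-outcomes-on-different-days effect described above.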
The practical implication: when an agent you designed is not firing, diagnose which component failed. The trigger might not be salient enough — you never encounter the cue. The condition might be too restrictive — valid situations get filtered out. Or the action might be too costly — it demands more effort than you have available. Each failure has a different fix.
The software parallel: event-driven architectures
If you work in software, you already think in trigger-condition-action structures. Event-driven architecture (EDA) — one of the dominant patterns in modern distributed systems — models every system behavior as a response to events. A producer emits an event (trigger), a consumer evaluates whether the event matches its subscription and filtering criteria (condition), and then executes a handler (action).
The pattern is even more explicit in rule engines and business process automation, where rules are literally written as Event-Condition-Action (ECA) triples. "When a support ticket is submitted (event), if it is marked urgent and contains the keyword 'outage' (condition), then escalate to Tier 2 and notify the incident response team (action)."
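That prose rule, written out as code. This is a sketch: the ticket fields and the `actions` list are stand-ins for a real rule engine's escalation and notification hooks.

```python
def handle_ticket(ticket: dict, actions: list) -> None:
    # Event: a support ticket was submitted (this handler was invoked).
    # Condition: marked urgent AND mentions an outage.
    if (ticket.get("priority") == "urgent"
            and "outage" in ticket.get("body", "").lower()):
        # Action: escalate and notify.
        actions.append(("escalate", "tier-2"))
        actions.append(("notify", "incident-response"))
```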
Agentic AI systems follow the same anatomy. An AI agent operating under a tool-use pattern receives an input (trigger), applies reasoning to determine whether action is warranted and which tool to invoke (condition), and then executes the tool call (action). The Model Context Protocol (MCP) and similar frameworks standardize this: the agent does not act on every input, and it does not choose tools randomly. It evaluates conditions against its current context before committing to an action. The three-part structure scales from a single habit to a multi-agent orchestration system because it captures something fundamental about how responsive behavior works at any level of complexity.
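A minimal sketch of that gate, with illustrative tool names and a deliberately crude `decide()` heuristic standing in for the model's actual reasoning:

```python
# Hypothetical tool registry; real agents would call external services.
TOOLS = {
    "search_docs": lambda query: f"results for {query!r}",
    "file_ticket": lambda summary: f"ticket filed: {summary}",
}

def decide(user_input: str):
    # Condition layer: is any action warranted, and which tool fits?
    if "error" in user_input.lower():
        return "file_ticket"
    if user_input.endswith("?"):
        return "search_docs"
    return None  # no tool call; respond directly

def agent_step(user_input: str) -> str:
    tool_name = decide(user_input)       # trigger passes through the condition
    if tool_name is None:
        return "no tool needed"
    return TOOLS[tool_name](user_input)  # action: the committed tool call
```

The shape is the same as the habit and the ECA rule: input arrives, a condition gates it, and only then does an action execute.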
Why all three components must be explicit
Most of your current agents — the ones running your daily behavior right now — have at least one implicit component. That is where they break.
Implicit triggers mean the agent activates inconsistently. You have a rule about reviewing your task list, but you never specified when. So it fires when you happen to remember, which is to say, it mostly does not fire. Gollwitzer's research shows that specifying the situational cue — the exact when and where — is what delegates the activation from conscious memory to environmental detection. The trigger has to be concrete and external: "when I sit down at my desk after lunch," not "when I have time."
Implicit conditions mean the agent fires indiscriminately. You check your phone every time it buzzes because you never specified the condition under which checking is actually warranted. The result is an agent that responds to every notification identically — a reflex, not a judgment. Adding an explicit condition ("if I am not in a conversation with another person") turns an involuntary reaction into a deliberate policy.
Implicit actions mean the agent's response is vague. "Deal with email" is not an action. "Move the message to the Review Friday folder" is. "Be more focused" is not an action. "Close all browser tabs except the one I'm working in" is. The action must be concrete enough that you would know whether you did it. ACT-R's production rules work precisely because the action component modifies the system's state in a specific, observable way. Your personal agents need the same specificity.
Decomposing your default agents
In the previous lesson, you learned that every deliberate agent replaces an unconscious default. Now you have the tool to reverse-engineer those defaults. Take any recurring behavior — especially one that frustrates you — and decompose it:
- What triggers it? Identify the situational cue. It is almost always environmental: a notification sound, a time of day, another person's facial expression, a physical sensation. If you cannot name the trigger, you cannot redesign the agent.
- What condition validates it? This is the hardest to identify because default agents often have no condition at all — they fire on every instance of the trigger. When you feel criticized (trigger), you get defensive (action) with no intervening validation check. The absence of a condition is the diagnosis.
- What action does it take? Name the specific behavior. Not the emotional state it produces, not the category it belongs to — the observable action. "I interrupt the other person and explain why they're wrong" is an action. "I get upset" is a feeling, not a behavior.
Once decomposed, you can redesign any component independently. Keep the trigger but add a condition. Keep the condition but change the action. Replace the trigger entirely. This is agent engineering at the most personal level — and it is the same operation whether you are refactoring a habit, a team process, or a software system.
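One way to make the decomposition concrete is to treat an agent as a plain data triple, so that redesigning one component is a single field replacement that leaves the others intact. A sketch, assuming the criticism-defensiveness default described above; the replacement condition and action are illustrative.

```python
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class Agent:
    trigger: str                       # the situational cue
    condition: Callable[[dict], bool]  # the validation gate
    action: str                        # the observable behavior

# Default agent: fires on every instance of the trigger, no condition.
default = Agent(
    trigger="I feel criticized",
    condition=lambda ctx: True,  # the absence of a condition, made explicit
    action="interrupt and explain why they're wrong",
)

# Redesign: keep the trigger, add a condition, change the action.
redesigned = replace(
    default,
    condition=lambda ctx: ctx.get("feedback_was_requested", False),
    action="ask one clarifying question before responding",
)
```

The trigger is untouched; only the gate and the behavior changed. That is the component-by-component refactoring the paragraph above describes.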
The bridge to decision fatigue
Here is why this anatomy matters beyond a single agent: every agent you build with explicit trigger-condition-action structure is a decision you no longer have to make in real time. The next lesson examines this consequence directly — how well-designed agents conserve the cognitive resources you need for novel problems.
But the conservation only works if the three components are specified tightly enough that the agent can run without your conscious intervention. A vague trigger requires you to remember. A missing condition requires you to evaluate on the fly. An ambiguous action requires you to deliberate. Each implicit component returns the decision to your active cognitive workspace, consuming the very resources the agent was supposed to protect.
The goal is not to automate everything. It is to automate the recurring, predictable patterns so that your deliberate attention is available for the situations that actually require it. And that starts with learning to see — and specify — the three components that make any agent work.