Update decision agents after every activation based on outcome — treat agents as living heuristics, not permanent laws
After each decision agent activation, update the agent's criteria based on whether it produced a good outcome, treating the agent as a living heuristic that improves with each use rather than as a permanent law.
Why This Is a Rule
A decision agent that never updates is a fossilized heuristic: it applies yesterday's pattern to today's context without incorporating what each application could have taught it. Treating agents as permanent laws ("My rule is to always...") ignores that every activation produces outcome data that should refine the rule. The best decision agents are living heuristics: they improve with each use because each use provides a natural experiment.
Machine learning systems improve through exactly this mechanism: each prediction produces an outcome that feeds back to update the model's parameters. Personal decision agents should work the same way. After each activation, ask: "Did the agent's recommendation produce a good outcome?" If yes, the current criteria are validated for this context. If no, the criteria need adjustment — perhaps a condition was too broad, or a criterion was weighted wrong, or an edge case wasn't covered.
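The feedback loop above can be sketched in a few lines. This is a minimal illustration (all names hypothetical, not from the source): a confidence score for one criterion, nudged toward 1.0 after a good outcome and toward 0.0 after a bad one, the same exponential-moving-average update an online learner might use.

```python
def update_confidence(confidence: float, good_outcome: bool, rate: float = 0.2) -> float:
    """Move the criterion's confidence toward 1.0 on a good outcome,
    toward 0.0 on a bad one (exponential moving average)."""
    target = 1.0 if good_outcome else 0.0
    return confidence + rate * (target - confidence)

conf = 0.5
for outcome in [True, True, False, True]:  # four activations, one bad
    conf = update_confidence(conf, outcome)
# confidence drifts upward while the criterion keeps producing good outcomes,
# and a single bad outcome pulls it back without erasing the rule
```

The learning rate is a judgment call: a high rate overreacts to one-off outcomes, a low rate fossilizes the heuristic, which mirrors the trade-off the rule describes.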
The framing matters: "living heuristic" creates permission to update, while "permanent law" creates resistance to change. If you treat "never accept a meeting without a clear agenda" as a permanent law, you'll miss the relationship-building meeting that had no agenda but produced a crucial alliance. If you treat it as a heuristic, you note the exception and add a condition: "unless the meeting is with a potential key collaborator."
When This Fires
- After every decision agent activation, regardless of whether the outcome was good or bad
- During weekly reviews when processing accumulated agent activations
- When an agent produces an unexpectedly bad outcome — immediate update opportunity
- When an agent produces an unexpectedly good outcome — opportunity to reinforce and broaden
Common Failure Mode
Only updating after failures: "The agent worked fine, no need to review." Successes are equally informative — they confirm which conditions the agent handles well, which expands your confidence in its applicability. And occasionally, a success reveals that the agent produced a good outcome for the wrong reasons — which means it will fail when circumstances shift.
The Protocol
1. After each decision agent activation, note the outcome: did the decision produce good results?
2. If good outcome → reinforce: the criteria worked for this context. Note which context features mattered.
3. If bad outcome → diagnose: which criterion was wrong? Was it the trigger scope, the conditions, or the decision logic?
4. Update the agent: add the new condition, adjust the criterion's weight, or narrow/broaden the scope based on the outcome.
5. Document the update in the agent's documentation (Document every agent with five components: Name, Trigger, Conditions, Actions, Success Criteria — undocumented agents degrade silently) so you can trace the agent's evolution over time.

Each update makes the agent a slightly better model of your decision domain.
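The protocol above can be sketched as a small record type. This is an illustrative structure under stated assumptions (the class, fields, and example values are hypothetical): the five documented components, plus an update log so the agent's evolution stays traceable.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionAgent:
    # The five documented components.
    name: str
    trigger: str
    conditions: list[str]
    actions: list[str]
    success_criteria: str
    update_log: list[str] = field(default_factory=list)

    def record_outcome(self, good: bool, note: str) -> None:
        """Steps 1-5: note the outcome, then reinforce or revise."""
        if good:
            self.update_log.append(f"reinforced: {note}")  # step 2
        else:
            self.conditions.append(note)                   # steps 3-4: adjust criteria
            self.update_log.append(f"revised: {note}")     # step 5: document the change

agent = DecisionAgent(
    name="meeting-filter",
    trigger="incoming meeting invite",
    conditions=["has a clear agenda"],
    actions=["decline politely"],
    success_criteria="no regretted declines this month",
)
agent.record_outcome(good=False, note="unless from a potential key collaborator")
# agent.conditions now carries the new exception, and the revision is logged
```

Logging every reinforcement as well as every revision is deliberate: it preserves the success data that the common failure mode above throws away.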