Most of your decisions are reruns
You face thousands of decisions per week. The vast majority are not new. They are the same decision wearing a different costume: Should I attend this meeting? Should I buy this tool or build it? Should I accept this invitation? Should I invest in this opportunity or save the resources? Should I respond now or batch it for later?
Each time one of these situations arrives, you treat it as novel. You weigh the options. You consult your feelings. You deliberate. And two hours later, you reach roughly the same conclusion you reached last time — because the underlying criteria have not changed. Your priorities are the same. Your constraints are the same. The situation is structurally identical to the last six times you faced it. You just forgot, or could not access, the reasoning you used before.
This is not thoughtful decision-making. It is cognitive waste. You are paying the full cost of deliberation for decisions whose outcomes are already determined by criteria you already hold. You just have not written those criteria down.
A decision agent is the fix. It is a pre-designed system — trigger, criteria, action — that activates automatically when a recurring decision type appears. You build it once, during a period of clear thinking, and it executes every time the situation recurs. The decision was made when you designed the agent. The situation merely activates it.
The cognitive cost of re-deciding
Herbert Simon won the Nobel Prize in Economics in 1978 for demonstrating that human beings are not utility maximizers. They are bounded rational agents — constrained by limited information, limited computational capacity, and limited time. His concept of satisficing — choosing the first option that meets a threshold of acceptability rather than exhaustively searching for the optimal one — was not a description of lazy thinking. It was a description of rational behavior under real-world constraints (Simon, 1956).
Barry Schwartz extended this work in his 2002 study with colleagues at Swarthmore College, measuring the psychological consequences of maximizing versus satisficing across seven independent samples. Maximizers — people who compulsively seek the best possible option — reported lower happiness, lower life satisfaction, lower optimism, and higher depression than satisficers (Schwartz et al., 2002). In a follow-up study of graduating college students, maximizers landed jobs with starting salaries roughly 20% higher than satisficers. They were also significantly less satisfied with those jobs (Iyengar, Wells, & Schwartz, 2006). The exhaustive search for the optimal answer produced a better outcome on paper and a worse experience of that outcome.
The lesson for recurring decisions is direct: if you re-maximize every time you face a familiar decision type, you pay the full psychological cost of maximization — the comparison, the regret, the second-guessing — on a decision whose parameters have not materially changed. You are not gaining new information. You are just burning cognitive resources.
Decision agents convert you from a maximizer into a satisficer — but a principled one. You maximized once, when you designed the agent. Every subsequent activation is satisficing: does this situation meet the criteria? Yes or no. Move on.
The anatomy of a decision agent
A decision agent has four components:
The trigger. This is the situation class that activates the agent. Not "should I attend the meeting with marketing on Thursday" — that is an instance. The trigger is "a meeting invitation arrives." You need to define the category broadly enough to capture the recurring pattern but narrowly enough that the criteria actually apply.
The criteria. These are the pre-set conditions that determine the outcome. They take the form of a checklist, a decision matrix, or a set of weighted factors. For a meeting-acceptance agent, criteria might include: Does the agenda exist? Am I a decision-maker or just an audience member? Can the outcome be achieved asynchronously? Three no's and you decline. For a buy-versus-build agent: Is this a core competency? Is the timeline under 30 days? Is the maintenance cost under 10% of build cost annually? Criteria must be specific enough that applying them produces a clear answer, not another round of deliberation.
The action. What you do when the criteria produce a verdict. This is not just "accept" or "decline" — it includes the execution details. Decline the meeting and send a one-line explanation. Accept the freelance gig and send the standard rate sheet. Buy the tool and cancel the evaluation after 30 days if adoption is below threshold. The action should be concrete enough that you can execute it without additional decision-making.
The override condition. This is critical. Every decision agent needs an explicit escape hatch — a defined set of circumstances under which the agent is suspended and you return to full deliberation. If the meeting invitation comes from the CEO, your normal meeting-acceptance agent might not apply. If the buy-versus-build decision involves a technology you have never encountered before, the familiar criteria might be miscalibrated. The override condition prevents the most dangerous failure mode of decision agents: applying routine logic to non-routine situations.
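The four components map cleanly onto a small data structure. Here is a minimal Python sketch — the field names, the example criteria, and the two-of-three threshold are all invented for illustration, not a prescription:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DecisionAgent:
    """One recurring decision type: criteria, action, and an override."""
    name: str
    criteria: list[Callable[[dict], bool]]  # each returns True if satisfied
    threshold: int                          # minimum satisfied criteria to accept
    override: Callable[[dict], bool]        # True -> suspend agent, deliberate

    def decide(self, situation: dict) -> str:
        # The escape hatch comes first: non-routine situations exit the agent.
        if self.override(situation):
            return "deliberate"
        passed = sum(c(situation) for c in self.criteria)
        return "accept" if passed >= self.threshold else "decline"

# A hypothetical meeting-acceptance agent built from the criteria above.
meetings = DecisionAgent(
    name="meeting-acceptance",
    criteria=[
        lambda s: s.get("has_agenda", False),
        lambda s: s.get("i_am_decision_maker", False),
        lambda s: not s.get("achievable_async", True),  # "no" here favors attending
    ],
    threshold=2,
    override=lambda s: s.get("from_ceo", False),
)

verdict = meetings.decide({"has_agenda": True, "i_am_decision_maker": True,
                           "achievable_async": True, "from_ceo": False})
# verdict == "accept": two of three criteria are satisfied
```

The point of the sketch is the shape, not the code: the trigger is whatever causes you to call `decide`, the criteria are checkable yes/no questions, and the override is evaluated before anything else.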
Fast and frugal: why simple beats sophisticated
Your instinct will be to build elaborate decision agents — multi-variable weighted matrices with conditional branches and exception handling. Resist this instinct. The research says simple wins.
Gerd Gigerenzer and his colleagues at the Max Planck Institute spent decades studying what they call fast-and-frugal heuristics — simple decision rules that use minimal information and minimal computation yet match or outperform complex models under real-world conditions (Gigerenzer & Gaissmaier, 2011). Their most striking demonstration: in emergency rooms, a three-question decision tree for triaging heart attack patients — Is the main electrocardiogram reading abnormal? Is there chest pain? Are any of five other specific factors present? — outperformed a 19-variable logistic regression model that had been the clinical standard. The simple tree was faster, easier to use, and more accurate.
The mechanism behind this result is overfitting. Complex models fit perfectly to the conditions they were designed for, but those conditions never recur exactly. Simple models ignore noise and capture only the variables that genuinely matter, making them more robust across variable conditions. Your decision agents face the same challenge: the next instance of "should I accept this freelance project" will not be identical to the one you designed the agent for. A three-criteria checklist is more robust across variations than a ten-factor weighted model because it captures the signal and ignores the noise.
Design your decision agents with the minimum number of criteria that consistently produce good outcomes. Three to five is usually enough. If you need more than seven, you are probably dealing with a decision type that is not truly recurring — or you are conflating multiple decision types into one agent.
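What makes a fast-and-frugal tree different from a weighted model is that each question can end the decision by itself — there is no arithmetic, no weighing, no summing. A sketch of that structure, loosely following the shape of the triage tree described above (the question names are paraphrases, and this is obviously not clinical guidance):

```python
def triage(ecg_abnormal: bool, chest_pain: bool, other_risk_factor: bool) -> str:
    """Fast-and-frugal tree: three sequential questions, each with an exit.
    No cue is weighted against another; the first decisive cue wins."""
    if ecg_abnormal:
        return "coronary care"   # first cue alone can decide
    if not chest_pain:
        return "regular bed"     # second cue exits in the other direction
    if other_risk_factor:
        return "coronary care"
    return "regular bed"
```

Your own agents can borrow the same shape: order the criteria by importance and let the strongest one decide outright whenever it fires.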
The Ulysses contract: binding your future self
The philosopher Jon Elster formalized a pattern in his 1979 work Ulysses and the Sirens that directly supports the decision agent architecture. Ulysses knew that his future self — hearing the Sirens' song — would make a decision his present self considered irrational. So his present self imposed a binding constraint: tie me to the mast and do not untie me regardless of what I say. The decision about how to handle the Sirens was made in advance, during a period of clear judgment, and made binding precisely because the future decision-making context would be compromised.
A decision agent is a Ulysses contract. When you design it — calmly, with full information, away from the pressure of the immediate situation — you are your most rational self. When the decision situation actually arrives, you are often stressed, time-pressed, emotionally activated, or simply tired. The agent protects you from the degraded judgment of your in-the-moment self by substituting the clear judgment of your design-time self.
This is not hypothetical. Behavioral economics research on commitment devices consistently shows that people who pre-commit to decision criteria outperform those who decide in the moment, specifically because in-the-moment decision-making is systematically distorted by present bias, loss aversion, and emotional arousal. Retirement savings plans that auto-enroll participants — a pre-committed decision to save — dramatically outperform plans that require active enrollment, not because people do not want to save but because the moment of decision is precisely the moment when short-term impulses are strongest.
Your decision agents work the same way. You are not removing your capacity to decide. You are protecting that capacity from the conditions under which it performs worst.
Decision agents in practice
Here are five recurring decision types that benefit from pre-designed agents:
Accept versus decline. Invitations to meetings, events, projects, and collaborations. Criteria: Does it advance a stated priority? Is the time cost proportional to the expected value? Can I contribute something no one else present can? Ray Dalio built this pattern into Bridgewater's operating system: when you recognize a situation as "another one of those," you refer to your principles as a shortcut to good decision-making rather than re-deriving the answer from scratch (Dalio, 2017).
Buy versus build. Tools, systems, processes. Criteria: Is this a core competency where custom design creates competitive advantage? Is the available option within 80% of what a custom solution would deliver? Is the maintenance burden of building acceptable over the next two years? Organizations that use structured decision frameworks for buy-versus-build decisions achieve measurably better alignment with business objectives than those that decide ad hoc.
Invest versus save. Time, money, attention. Criteria: What is the expected return relative to the resource spent? Is the downside survivable if the investment fails? Does the opportunity have a time constraint that prevents deferral?
Respond versus defer. Messages, requests, tasks. Criteria: Does the requester need a response to unblock their work? Will the context degrade if I wait? Is there a batching opportunity where responding later is actually more efficient?
Continue versus quit. Projects, commitments, habits. This is the hardest decision type to automate because of the sunk cost fallacy — the irrational tendency to continue investing in something because of what you have already invested rather than because of what you expect to gain. A good continue-versus-quit agent explicitly excludes past investment from its criteria and evaluates only future expected value. Would you start this project today, knowing what you now know? If no, stop.
The AI parallel: decision support systems
Everything you are building as a personal decision agent has a direct parallel in computational systems. Recommendation engines, decision support systems, and multi-armed bandit algorithms all solve the same fundamental problem: how to make good recurring decisions efficiently.
The multi-armed bandit problem — named after a gambler facing a row of slot machines with unknown payout rates — is the computational version of a dilemma you face constantly: exploit the option you know works, or explore an untested one that might work better. Thompson sampling, first described in 1933 and now widely deployed in online advertising, clinical trials, and product recommendations, solves it by maintaining a probabilistic model of each option's value and choosing actions that balance learning (exploration) with performance (exploitation). The algorithm does not re-derive the decision from scratch each time. It maintains a running model — a decision agent — that updates incrementally with each new observation.
The structural insight transfers directly. Your personal decision agents should not be static. They should update. After each activation, you have new data: did the criteria produce a good outcome? If the meeting you declined turned out to contain information you needed, your meeting-acceptance agent needs a criteria adjustment. If the tool you bought instead of building turned out to miss a critical requirement, your buy-versus-build threshold needs recalibration. The agent is not a permanent law. It is a living heuristic — one that improves with each use, just like a bandit algorithm improves its model with each pull.
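For the curious, the core of Thompson sampling fits in a dozen lines. This is a standard Beta-Bernoulli sketch (the class name and option labels are invented): each option keeps a running success/failure count, and each decision samples a plausible success rate per option and picks the best draw:

```python
import random

class ThompsonAgent:
    """Beta-Bernoulli Thompson sampling: a running model per option,
    updated after every observed outcome."""

    def __init__(self, options: list[str]):
        # [alpha, beta] = (successes + 1, failures + 1); starts uninformed.
        self.stats = {o: [1, 1] for o in options}

    def choose(self) -> str:
        # Sample one plausible success rate per option; exploration happens
        # automatically because uncertain options produce wider draws.
        draws = {o: random.betavariate(a, b) for o, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, option: str, success: bool) -> None:
        self.stats[option][0 if success else 1] += 1
```

The parallel to a personal decision agent is the `update` step: the model is never final, and every outcome nudges the criteria for the next activation.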
The failure you must anticipate
The most dangerous failure mode of decision agents is not making a bad call on a routine decision. It is applying a routine agent to a non-routine situation and not noticing.
Gigerenzer's research shows that fast-and-frugal heuristics outperform complex models under uncertainty — but only when the heuristic matches the structure of the environment. When the structure changes, the heuristic fails. A three-criteria checklist for accepting freelance work is excellent when the work is similar to work you have done before. It is dangerous when the inquiry represents a fundamentally different kind of opportunity — one where your existing criteria are miscalibrated because they were never designed for this category.
This is why override conditions are not optional. Every decision agent must include an explicit answer to the question: under what conditions do I suspend this agent and return to full deliberation? The conditions should be specific: if the financial magnitude exceeds a threshold, if the commitment duration exceeds a threshold, if the domain is unfamiliar, if your emotional state is extreme in either direction. When an override condition triggers, you do not ignore the agent. You acknowledge that the situation has left the agent's jurisdiction and requires the kind of careful, slow, System 2 reasoning that the agent was designed to replace for routine cases.
From ad hoc to infrastructure
Most people treat every decision as a fresh problem. They pay the full cognitive cost every time: gathering information, weighing options, managing anxiety, second-guessing the outcome. The result is what psychologists call decision fatigue — the progressive degradation of decision quality over the course of a day as your cognitive resources deplete.
Decision agents transform this. They convert your most common decision types from expensive, effortful, one-off deliberations into cheap, automatic, pre-committed responses. You still think. But you think once — at design time — and execute many times. The cognitive savings compound: fewer depleted decisions means more cognitive resources available for the genuinely novel situations that actually require deliberation.
This is the shift from ad hoc decision-making to decision infrastructure. You are not building a system that decides for you. You are building a system that decides the way you would decide if you had unlimited time, full information, and no emotional pressure — and then activating that system precisely when you have none of those things.
The prerequisite, as always, is externalization. A decision agent that lives only in your head is not an agent — it is a vague intention, subject to the same distortions and forgetting that made the recurring decision costly in the first place. Write the trigger. Write the criteria. Write the action. Write the override. Put it where you will see it when the trigger fires.
You have been making these decisions all along. Now make them once, make them well, and let the agent handle the rest.