You already know when to quit. You just haven't written it down.
Every experienced decision-maker has lived this story. You start a project with energy and conviction. Months in, the evidence turns. Costs are higher than projected. Timelines are slipping. The market signal you were counting on hasn't materialized. But you keep going — because you've already invested so much, because quitting feels like failure, and because surely, surely, the next quarter will be different.
It won't be. You know this. But you don't stop, because stopping requires a decision under emotional duress — and humans are catastrophically bad at making abandonment decisions in the moment.
Kill criteria solve this problem by moving the quit decision to the only time you can make it rationally: before you start.
A kill criterion is a specific, pre-committed condition under which you will abandon a course of action — defined when you're calm, clear-eyed, and not yet invested. It is the epistemic equivalent of a circuit breaker: a mechanism that trips automatically when conditions exceed safe thresholds, regardless of how you feel when it fires.
Why you can't decide to quit in the moment
The research on why humans fail at abandonment decisions is extensive and damning.
Escalation of commitment. In 1976, Barry Staw published "Knee-Deep in the Big Muddy," a landmark study in which 240 business school students made resource allocation decisions about a failing investment. Staw found that participants who were personally responsible for the initial decision committed significantly more resources to the failing course of action than those who inherited someone else's decision. The mechanism was self-justification: admitting the project should be killed meant admitting the original decision was wrong, and the ego will burn extraordinary amounts of money to avoid that admission (Staw, 1976). The effect has been replicated in at least eight subsequent experiments. It is one of the most robust findings in organizational behavior.
The sunk cost fallacy. Arkes and Blumer's 1985 study, "The Psychology of Sunk Cost," demonstrated the effect with uncomfortable clarity. In their radar-blank airplane scenario, 85% of participants chose to continue funding a doomed military project when told about prior investment — compared to only 10% who would fund the same project described without sunk cost information. In a field experiment, theater subscribers who paid full price for season tickets attended significantly more performances than those who received a discount, despite both groups having identical access. The prior expenditure — rationally irrelevant to the future value of attendance — drove behavior because abandoning the investment felt like waste (Arkes & Blumer, 1985).
Loss aversion. Kahneman and Tversky's prospect theory (1979) explains the deeper mechanism. Losses are psychologically weighted roughly twice as heavily as equivalent gains. When you contemplate killing a project, your brain frames it as accepting a certain, visible loss — the money spent, the time invested, the public commitment made. Continuing, by contrast, preserves the possibility of gain, however unlikely. Your emotional system will reliably choose uncertain hope over certain loss. This is not a character flaw. It is the architecture of human cognition. And it means that in-the-moment quit decisions are systematically biased toward continuation.
The combined weight of these three forces — self-justification, sunk cost sensitivity, and loss aversion — means that by the time you feel like quitting, you're already months or years past the point where quitting was rational. The decision has to be made before the emotional load makes it impossible.
The pre-mortem: imagining failure before it happens
Gary Klein's pre-mortem technique, published in the Harvard Business Review in 2007, provides the cognitive framework for generating kill criteria. The method is based on research by Mitchell, Russo, and Pennington (1989), who found that prospective hindsight — imagining that an event has already occurred — increases the ability to correctly identify reasons for future outcomes by 30%.
Here's how a pre-mortem works: before launching a project, you gather the team and say, "Imagine it's six months from now and this project has failed completely. Write down why." The temporal shift — from "what might go wrong" to "what did go wrong" — unlocks a different mode of reasoning. People generate failure scenarios that are more specific, more plausible, and more actionable than those produced by traditional risk assessment.
Kill criteria are what you extract from a pre-mortem. Each plausible failure scenario implies a measurable indicator:
- "We failed because we couldn't acquire customers cheaply enough" becomes a kill criterion: If customer acquisition cost exceeds $X by date Y, we stop.
- "We failed because the core technology didn't perform at scale" becomes: If latency exceeds Z milliseconds at 10,000 concurrent users by Q3, we stop.
- "We failed because we couldn't hire the senior engineer we needed" becomes: If the role is unfilled after 90 days of active search, we reassess the project's viability.
The pre-mortem gives you the scenarios. Kill criteria give you the tripwires. The combination transforms "we'll know it when we see it" into "we defined it before we started."
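One way to make these tripwires concrete is to encode each criterion as data rather than prose: a metric name, a threshold, a direction, and an evaluation date. A minimal sketch, assuming illustrative names and numbers (the $150 CAC limit and dates below are invented, not from the examples above):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class KillCriterion:
    """A pre-committed tripwire: metric, threshold, direction, evaluation date."""
    metric: str
    threshold: float
    direction: str          # "above" or "below": which side of the threshold trips it
    evaluate_by: date

    def fires(self, observed: float, today: date) -> bool:
        """True once the evaluation date has arrived and the metric is on the wrong side."""
        if today < self.evaluate_by:
            return False    # too early to judge; no premature abandonment
        if self.direction == "above":
            return observed > self.threshold
        return observed < self.threshold

# "If customer acquisition cost exceeds $150 by June 30, we stop."
cac_limit = KillCriterion("customer_acquisition_cost", 150.0, "above", date(2025, 6, 30))
print(cac_limit.fires(observed=180.0, today=date(2025, 7, 1)))   # True: criterion fires
print(cac_limit.fires(observed=180.0, today=date(2025, 5, 1)))   # False: not yet due
```

The point of the data structure is that nothing in it is renegotiable at read time: the threshold and date were frozen when the object was created, before any results came in.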
The stop-loss: borrowed from finance, applicable everywhere
Professional traders understood this principle long before organizational psychologists named it. A stop-loss order is a predetermined price at which you automatically sell a position — set when you enter the trade, executed by the system when the threshold is hit, with no human in the loop at the moment of decision. The person entering the trade is calmer and more rational than the person watching their position decline in real time. The stop-loss removes the decision from the moment when loss aversion, hope, and ego would distort it.
The principle transfers directly to non-financial decisions:
- Career: "If I haven't received a promotion or a meaningful scope increase within 18 months, I start interviewing." Set when you accept the job, not when you're frustrated at month 16.
- Relationships: "If we haven't resolved this recurring conflict after three honest conversations and one round of counseling, we need to discuss separation." Set when both parties are calm, not during the fourth argument about the same issue.
- Strategy: "If this content channel hasn't generated 500 organic visitors per month within six months of consistent publishing, we redirect effort to a different channel." Set during strategy planning, not when month five numbers come in at 120.
The stop-loss separates the analysis of what constitutes failure from the experience of failure. Kill criteria ensure the analysis governs the outcome.
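The mechanism is simple enough to sketch: the stop price is fixed at entry, and the check at execution time is a mechanical comparison with no judgment in the loop. A toy illustration (the ticker, prices, and percentages are invented):

```python
class StopLossOrder:
    """A sell trigger fixed when the position is opened, not when it is losing."""

    def __init__(self, ticker: str, entry_price: float, max_loss_pct: float):
        self.ticker = ticker
        # The stop price is computed once, at entry, while the trader is calm.
        self.stop_price = entry_price * (1.0 - max_loss_pct / 100.0)

    def should_sell(self, current_price: float) -> bool:
        # No human in the loop: a bare comparison against the pre-set threshold.
        return current_price <= self.stop_price

order = StopLossOrder("ACME", entry_price=100.0, max_loss_pct=10.0)  # stop at 90.0
print(order.should_sell(95.0))  # False: above the stop, position stays open
print(order.should_sell(88.0))  # True: threshold crossed, the sale executes
```

Notice that `should_sell` takes no argument describing how the holder feels about the position. That omission is the entire design.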
What makes a kill criterion actually work
Not all kill criteria are useful. Vague criteria ("if things aren't going well") never trigger because they require a subjective judgment at the very moment when judgment is compromised. Extreme criteria ("if we lose every single customer") are functionally decorative — they describe scenarios so catastrophic that they'll never be the first sign of failure.
Effective kill criteria share four properties:
Specific. They name a measurable quantity — a number, a date, a binary condition. "Revenue below $50K per month" is a kill criterion. "Revenue is disappointing" is not.
Time-bound. They specify when evaluation happens. Without a deadline, you can always argue that success is just around the corner. "By March 31" eliminates that escape route.
Pre-committed. They are written down and visible to at least one other person before the project begins. A kill criterion that exists only in your head is a kill criterion you'll renegotiate with yourself. External accountability — a co-founder, an advisor, a written document — makes the criterion binding.
In the uncomfortable middle. Too easy to trigger and you'll kill viable projects prematurely. Too hard to trigger and the criteria are theater. The right threshold is the one that makes you slightly uncomfortable when you set it — because you can imagine it actually firing, and that possibility creates precisely the tension that makes the criterion useful.
The renegotiation trap
The most common failure mode with kill criteria is renegotiation at the moment of truth. The criteria fire. The evidence says stop. And then you say: "But the market shifted," or "We just need one more quarter," or "The criteria didn't account for this new variable."
Sometimes these adjustments are legitimate. Conditions genuinely change. But you need a protocol for distinguishing legitimate reassessment from emotional self-justification — because they feel identical from the inside.
One approach: the two-person rule. Before overriding a kill criterion, you must convince at least one disinterested party — someone who doesn't share your emotional investment — that the override is justified by new evidence, not by the desire to continue. If you can't articulate the new evidence clearly enough to persuade someone who has no stake in the outcome, you're rationalizing.
Another approach: the pre-registered override. When you set the original criteria, also specify the only conditions under which they can be revised. "These criteria can be updated if a regulatory change fundamentally alters the competitive landscape." This narrows the renegotiation window before the emotional pressure exists.
The AI parallel: early stopping and circuit breakers
Machine learning engineers face an identical problem when training neural networks. As a model trains, its performance on the training set continues to improve — it keeps "investing" in the patterns it's learning. But at some point, it starts memorizing noise rather than learning signal. Performance on unseen data degrades. The model is overfitting: doubling down on a course of action that is no longer producing genuine returns.
The solution is early stopping — a predetermined criterion that halts training when validation loss stops improving for a specified number of epochs (the "patience" parameter). The model doesn't decide to stop. The criterion decides. And the model reverts to its best-performing checkpoint — the point before it started escalating its commitment to noise.
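Patience-based early stopping fits in a few lines. In this sketch, `train_step` and `validate` are placeholders standing in for a real training pass and a real validation pass; the simulated loss curve below is invented to show the mechanism:

```python
def train_with_early_stopping(train_step, validate, max_epochs: int, patience: int):
    """Halt training when validation loss fails to improve for `patience`
    consecutive epochs, then report the best checkpoint to revert to."""
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step(epoch)
        val_loss = validate(epoch)
        if val_loss < best_loss:
            best_loss = val_loss              # new best: this is the checkpoint to keep
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                         # the criterion decides, not the model
    return best_epoch, best_loss              # revert to the best checkpoint

# Simulated validation losses: improving through epoch 3, then overfitting.
losses = [1.0, 0.8, 0.6, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 1.0]
epoch, loss = train_with_early_stopping(
    train_step=lambda e: None,      # no-op stand-in for a real training pass
    validate=lambda e: losses[e],
    max_epochs=len(losses),
    patience=3,
)
print(epoch, loss)  # 3 0.5 -- training halts after epoch 6, reverts to epoch 3
```

The `patience` parameter is the tolerance for ambiguity, and the checkpoint reversion is the refusal to honor sunk training time: everything after the best epoch is treated as waste, regardless of how many compute-hours it cost.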
The architecture maps directly to human kill criteria: define the metric, set the threshold, automate the enforcement, remove the agent from the decision loop at execution time. AI safety researchers have extended the same principle to circuit breakers — mechanisms that trigger automatic shutdown when system outputs exceed predetermined safety thresholds. The 2025 International AI Safety Report identifies kill switches as critical safety primitives precisely because they don't rely on the system's own judgment about whether to continue.
The lesson for human decision-making is the same: the entity whose behavior needs to be constrained cannot be trusted to constrain itself in the moment. The criteria must be external, predetermined, and enforced by something other than the agent's real-time emotional state.
Kill criteria as epistemic infrastructure
Kill criteria are not pessimism. They are not a lack of commitment. They are the opposite: they are a commitment so rigorous that it includes the conditions of its own termination.
Setting kill criteria requires you to do the hardest epistemic work in advance: specify what failure looks like before you're motivated to deny it. This is the same discipline that drives scientific hypothesis testing (specify falsification conditions before running the experiment) and good engineering (define failure modes before you ship).
The connection to the previous lesson — the regret minimization framework — is direct. Regret minimization asks: "Which choice will I least regret in five years?" Kill criteria ask: "What evidence would make continuing the most regrettable choice in five years?" They are two faces of the same discipline: making future-oriented decisions using your current rational capacity rather than your future emotional state.
The connection to the next lesson — decision speed as a variable — is also direct. Kill criteria accelerate abandonment decisions. Without them, you must build the case for quitting in real time, under emotional load, against the full weight of sunk cost psychology. With kill criteria, the case was built before the clock started. When they fire, you don't deliberate. You execute.
The protocol
- Before any significant commitment, run a pre-mortem. Imagine complete failure. Generate specific scenarios.
- Convert each scenario into a measurable criterion. Name the metric, the threshold, and the evaluation date.
- Write the criteria down and share them with at least one person who will hold you accountable.
- Schedule the evaluation. Put it on a calendar. When the date arrives, evaluate against the criteria — not against your current emotional state.
- If the criteria fire, execute. Stop. Redirect resources. Grieve the loss if you need to. But do not renegotiate the criteria unless you can convince a disinterested party that genuinely new evidence — not new hope — justifies an override.
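The steps above can be sketched end to end as a review function: criteria evaluated as data, with the only escape path gated on a disinterested party's sign-off. All descriptions and flags here are hypothetical, and in practice the booleans would come from measured metrics rather than literals:

```python
# Hypothetical kill-criteria review sheet: (description, fired?).
criteria = [
    ("Monthly revenue below $50K on March 31", True),
    ("Senior engineer role unfilled after 90 days of active search", False),
]

def review(criteria, override_approved_by_disinterested_party: bool = False) -> str:
    """Return the action the protocol dictates: continue, stop, or reassess."""
    fired = [desc for desc, hit in criteria if hit]
    if not fired:
        return "continue"
    if override_approved_by_disinterested_party:
        # Only externally validated new evidence reopens the decision.
        return "reassess"
    return "stop"  # the criteria fired: execute, don't renegotiate

print(review(criteria))                                               # stop
print(review(criteria, override_approved_by_disinterested_party=True))  # reassess
print(review([("any criterion", False)]))                             # continue
```

The default argument encodes the two-person rule: absent a disinterested sign-off, a fired criterion can only mean stop.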
The question is not whether you're tough enough to keep going. The question is whether you're disciplined enough to define, in writing, the conditions under which you'll stop — and then actually stop when they're met.
Most people aren't. That's why most failing projects don't end with a decision. They end with exhaustion.
Kill criteria let you end with clarity instead.