You already know the best answer. You just won't stop looking.
You have spent hours comparing nearly identical options. You have read every review, opened seventeen browser tabs, asked three friends, and still felt uncertain. You picked something, and within a day you were already wondering whether the other option would have been better.
This is not thoroughness. This is a failure mode with a name, a research base, and a fix.
Herbert Simon identified the core problem in 1956: human beings do not have the cognitive capacity, the time, or the information to find the optimal solution to most problems. Classical economics assumed they did — that rational agents evaluate every alternative, compute expected utilities, and select the maximum. Simon called this assumption fiction. He proposed that real humans use a different strategy: they search until they find an option that meets their minimum acceptable threshold, and then they stop. He coined a word for this by combining "satisfy" and "suffice": satisficing.
The opposite strategy — exhaustively searching for the best possible option — is maximizing. And for most decisions you face, maximizing costs more than it returns.
The architecture of bounded rationality
Simon's insight was not that people are lazy or irrational. It was that rationality operates within constraints, and those constraints make optimization impossible for most real-world decisions. He called this bounded rationality, and it earned him the Nobel Prize in Economics in 1978.
The bounds are structural, not motivational:
Cognitive limits. Your working memory holds roughly 3 to 5 items at a time (Cowan, 2001). You cannot simultaneously hold, compare, and rank twelve apartment listings across nine dimensions. You think you can. You are wrong. What actually happens is that you compare a few salient features, get overwhelmed, and default to whichever option triggers the strongest emotional response — which is exactly the decision process you were trying to avoid by "being thorough."
Information limits. You never have complete information about your alternatives. You do not know what the job is actually like until you work there. You do not know whether the apartment has noisy neighbors until you move in. The information you gather during an extended search has diminishing marginal value — the tenth review tells you less than the second one did.
Time limits. Every hour spent evaluating options is an hour not spent executing on a chosen option. The search itself has a cost, and that cost is invisible to maximizers because they account for the quality of the decision but not the price of the search.
Simon's framework redefines what "rational" means. A rational decision is not one that finds the global optimum. A rational decision is one where the search cost is proportional to the improvement it yields. When the cost of further search exceeds the expected improvement in outcome, the rational move is to stop — even if a better option theoretically exists.
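Simon's stopping condition can be made concrete with a toy model (mine, not from the source): suppose each option you inspect scores uniformly between 0 and 1, and each additional look costs a fixed amount. The expected gain from one more look, given a current best score b, is (1 − b)² / 2, so the rational rule is to keep looking only while that quantity exceeds the search cost.

```python
import random

def expected_improvement(best: float) -> float:
    # For a new option scored Uniform(0, 1), the expected gain over the
    # current best b is E[max(X - b, 0)] = (1 - b)^2 / 2.
    return (1 - best) ** 2 / 2

def search(search_cost: float, seed: int = 0) -> tuple[float, int]:
    """Sample options until the expected gain from one more look
    no longer covers the cost of looking."""
    rng = random.Random(seed)
    best, looked = 0.0, 0
    while expected_improvement(best) > search_cost:
        best = max(best, rng.random())
        looked += 1
    return best, looked

best, looked = search(search_cost=0.01)
print(f"stopped after {looked} options with score {best:.2f}")
```

Note what the rule does not do: it never asks whether a better option exists (one almost certainly does). It asks whether finding it is worth the price.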
The maximizer's paradox: doing better, feeling worse
Barry Schwartz and his colleagues turned Simon's theoretical framework into empirical psychology. In a landmark 2002 study published in the Journal of Personality and Social Psychology, Schwartz, Ward, Monterosso, Lyubomirsky, White, and Lehman developed the Maximization Scale and administered it to over 1,700 participants across seven samples.
The findings were consistent and stark. Compared to satisficers, maximizers reported:
- Lower happiness, optimism, and life satisfaction
- Higher depression and regret
- More social comparison (constantly measuring their choices against others')
- Less satisfaction with consumer decisions — even when they made objectively better ones
The most striking evidence came from a study on job-seeking college graduates by Iyengar, Wells, and Schwartz (2006). Maximizers secured starting salaries that were 20% higher on average than satisficers' salaries. By any objective measure, they made better decisions. But they reported lower satisfaction with their job search, lower satisfaction with the jobs they accepted, and more negative emotion throughout the entire process.
This is the maximizer's paradox: the strategy that produces better outcomes on paper produces worse outcomes in experience. The mechanism is straightforward. Maximizers do not compare their choice to their threshold. They compare it to every option they rejected, every option they imagined, and every option someone else chose. The comparison set is infinite, so satisfaction is impossible.
Schwartz put it bluntly: "The secret to happiness is low expectations." That sounds cynical until you realize what it actually means — define what good enough looks like before you search, and stop when you find it. That is not settling. That is engineering your decision process to produce both good outcomes and the ability to enjoy them.
Choice overload: when more options make you worse off
The cost of maximizing scales with the number of available options. This is not a theoretical prediction — it has been demonstrated in controlled experiments.
In 2000, Sheena Iyengar and Mark Lepper set up jam-tasting displays in a grocery store. One display offered 6 varieties. The other offered 24. The larger display attracted more browsers — 60% of passersby stopped, versus 40% at the smaller display. But when it came to purchasing, the pattern reversed: 30% of people who stopped at the 6-option display bought jam, versus only 3% at the 24-option display. More options drew more attention but produced one-tenth the purchases.
The same pattern has been replicated across domains — retirement plan enrollment, chocolate selection, essay assignments. When you face too many options, three things happen: you defer the decision (analysis paralysis), you make a worse decision because cognitive overload degrades your comparison quality, or you make a fine decision but experience less satisfaction because the unchosen alternatives haunt you.
This is why the satisficer's strategy is not a compromise — it is a structural advantage. By defining a threshold and stopping at the first option that clears it, you avoid choice overload entirely. You never enter the zone where the number of options degrades your decision quality and your emotional state simultaneously.
The mathematics of knowing when to stop
If satisficing sounds imprecise, mathematics provides a rigorous version: optimal stopping theory, best known through the secretary problem.
The setup: you are hiring from a pool of n candidates. You interview them one at a time, in random order. After each interview, you must hire or reject — no callbacks. You want to hire the best candidate. What strategy maximizes your probability of selecting the single best option?
The answer, proved by Lindley in 1961 and refined by others, is the 37% rule: reject the first n/e candidates (approximately 37% of the total pool), then hire the next candidate who is better than every candidate you have seen so far. This strategy selects the single best candidate about 37% of the time — a probability of 1/e — regardless of whether there are 10 or 10 million candidates.
The 37% rule is not a heuristic. It is the mathematically optimal solution to a well-defined class of decision problems. And its core logic maps directly onto satisficing: use the exploration phase to calibrate your threshold (what does "good" look like?), then commit to the first option that exceeds it.
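The claim is easy to check empirically. The Monte Carlo sketch below (mine, not Lindley's proof) runs the look-then-leap rule over random candidate pools and counts how often it lands on the single best candidate; the success rate hovers near 1/e ≈ 0.37.

```python
import random

def secretary(candidates: list[float], explore_frac: float = 0.37) -> int:
    """Look-then-leap: observe the first explore_frac of candidates
    without committing, then take the first one who beats them all."""
    cutoff = int(len(candidates) * explore_frac)
    threshold = max(candidates[:cutoff]) if cutoff else float("-inf")
    for i in range(cutoff, len(candidates)):
        if candidates[i] > threshold:
            return i
    return len(candidates) - 1  # no one beat the threshold; take the last

def success_rate(n: int = 100, trials: int = 20_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        pool = [rng.random() for _ in range(n)]
        wins += pool[secretary(pool)] == max(pool)
    return wins / trials

print(f"picked the single best candidate {success_rate():.1%} of the time")
```

The simulation also makes the satisficing structure visible in code: the exploration phase exists only to set `threshold`, and the commitment phase is a pure threshold test.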
Brian Christian and Tom Griffiths, in Algorithms to Live By (2016), extended this framework to practical decisions: apartment hunting, dating, hiring, parking. Their conclusion: in most real-world scenarios, the optimal amount of exploration is far less than people think. You need enough data to set a calibrated threshold — and then you need to stop exploring and start committing.
Fast and frugal: when less information produces better decisions
Gerd Gigerenzer's research program at the Max Planck Institute pushed Simon's insight even further. His work on fast-and-frugal heuristics demonstrated that in many real-world environments, simple decision rules do not just save time — they actually outperform complex optimization.
Gigerenzer's key concept is ecological rationality: a heuristic is rational not because it follows the rules of logic or probability, but because it is adapted to the structure of the environment where it is deployed. In environments with high uncertainty, small sample sizes, and noisy data — which describes most of the decisions you face — simple rules that ignore most of the available information often produce more accurate predictions than sophisticated models that try to use all of it.
This is the "less is more" effect: under certain conditions, processing less information yields better decisions. The mechanism is overfitting — complex models fit the noise in past data rather than the signal, and then fail on new cases. A satisficing heuristic that uses only the three most important criteria is less likely to overfit than a maximizing strategy that weighs fifteen factors.
This means satisficing is not just more efficient than maximizing. In many environments, it is more accurate.
The AI parallel: satisficing at machine scale
If you work with AI systems, you have already encountered satisficing under different names.
Early stopping in training. When training a neural network, you monitor performance on a validation set. At some point, training accuracy keeps improving but validation accuracy plateaus or declines — the model is memorizing training data rather than learning generalizable patterns. Early stopping halts training at the "good enough" point where the model performs well on new data. Continuing to optimize past this point makes the model objectively worse. This is satisficing formalized as a training protocol.
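Stripped of the training loop, early stopping is just a satisficing rule with patience. The sketch below (a common patience-based formulation, not any specific framework's API) scans a sequence of validation losses and reports the epoch to stop at.

```python
def early_stop(val_losses: list[float], patience: int = 3) -> int:
    """Return the epoch of the best validation loss, stopping the scan
    once the loss has failed to improve for `patience` straight epochs."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break  # stop training; keep the checkpoint from best_epoch
    return best_epoch

# Validation loss falls, then rises as the model starts memorizing noise.
losses = [0.90, 0.70, 0.55, 0.48, 0.46, 0.47, 0.50, 0.55, 0.61]
print(early_stop(losses))  # 4 -- the epoch where validation loss bottomed out
```

The `patience` parameter is the same calibration question Simon posed: how much evidence of "no further improvement" do you need before you stop paying for more search?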
Compute-quality tradeoffs in inference. Modern AI systems face explicit tradeoffs between compute cost and output quality. Epoch AI's research on training-inference compute allocation suggests that developers can often trade roughly an order of magnitude of training compute for additional inference compute, or vice versa, while holding performance constant — but past some point, additional compute yields negligible improvement. Deciding where to stop is a satisficing decision at infrastructure scale.
Good-enough inference in production. When you deploy an AI model to serve millions of requests, you do not use the largest possible model for every query. You use the smallest model that clears a quality threshold for the task. A simple question gets a fast, small model. A complex question gets routed to a larger one. This is satisficing as a system architecture — matching decision weight to decision importance, which is exactly what this lesson teaches you to do with your own cognitive resources.
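A minimal router sketch makes the pattern concrete. Everything here is illustrative: the tier names, costs, thresholds, and the complexity heuristic are assumptions for the example, not any production system's logic.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    relative_cost: float  # hypothetical cost per call

# Cheapest tier first; names and numbers are made up for illustration.
TIERS = [Model("small", 1.0), Model("medium", 5.0), Model("large", 25.0)]

def complexity(query: str) -> int:
    # Stand-in heuristic: longer, more question-dense queries score higher.
    return len(query.split()) + 5 * query.count("?")

def route(query: str) -> Model:
    """Send the query to the cheapest model that clears the quality
    bar for its complexity -- satisficing on model size per request."""
    score = complexity(query)
    if score < 15:
        return TIERS[0]
    if score < 40:
        return TIERS[1]
    return TIERS[2]

print(route("What is 2 + 2?").name)  # small
```

The design choice worth noticing: the router never asks which model would give the best answer. It asks which is the cheapest model that is good enough, which is the entire thesis of this lesson in three lines of control flow.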
The overfitting parallel. Gigerenzer's "less is more" finding maps directly to the bias-variance tradeoff in machine learning. A model that maximizes fit to training data (low bias, high variance) performs poorly on new data. A model that satisfices — accepting some training error in exchange for generalization — performs better in deployment. Your decision process works the same way. Optimizing for the specific details of the options in front of you right now (maximizing) makes you more vulnerable to factors you could not have predicted. Setting a robust threshold and committing (satisficing) makes your decision more resilient to uncertainty.
When to satisfice and when to maximize
This lesson is not arguing that you should never compare options carefully. It is arguing that most people maximize far too often, on decisions that do not warrant it, at costs they do not track.
Here is a practical framework for calibrating your strategy:
Satisfice when: the decision is reversible, the options are roughly comparable, the search cost is high relative to the difference between options, or you are choosing among options above your quality threshold. This covers the vast majority of your daily decisions — tools, restaurants, purchases, scheduling, routing.
Invest more search when: the decision is irreversible, the variance between options is high, the search cost is low relative to the stakes, and you have a reliable way to evaluate quality. Even here, apply the 37% rule: explore for a fixed budget, calibrate your threshold, then commit to the next option that exceeds it.
Never do: open-ended maximizing with no stopping rule. If you have not defined what "good enough" looks like before you start searching, you will never stop — because every new option resets your reference point, and there is always another option.
The decision that costs the most is the one you keep remaking
The deepest cost of maximizing is not the time spent searching. It is the time spent re-evaluating after you have already chosen. Maximizers do not just search longer — they revisit their decisions, compare them to alternatives that no longer exist, and experience regret that compounds over time.
Satisficing eliminates this entirely. When your criterion is "meets my threshold," and the option met your threshold, the decision is closed. You do not need to monitor whether a better option appeared later, because your satisfaction does not depend on having chosen the best — it depends on having chosen well enough.
Simon understood this at the deepest level. Bounded rationality is not a concession to human weakness. It is a recognition that in a world of infinite options and finite cognitive resources, the ability to stop searching is not a limitation — it is the skill that makes action possible.
Define your threshold. Search until something clears it. Choose. Move on. The search cost you save is not wasted — it is reinvested in execution, which is where results actually come from.
Sources
- Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129-138.
- Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology, 83(5), 1178-1197.
- Iyengar, S. S., Wells, R. E., & Schwartz, B. (2006). Doing better but feeling worse: Looking for the "best" job undermines satisfaction. Psychological Science, 17(2), 143-150.
- Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79(6), 995-1006.
- Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650-669.
- Christian, B., & Griffiths, T. (2016). Algorithms to Live By: The Computer Science of Human Decisions. Henry Holt and Company.
- Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87-114.