The farmer who destroyed his own field
In 1767, the French economist Anne-Robert-Jacques Turgot described a thought experiment that would become one of the most enduring principles in economics. Imagine a field of fixed size. A single farmer working that field can produce a certain yield. Add a second farmer and the yield increases substantially — two people can divide labor, specialize, and cover more ground. Add a third, and the yield increases again, though perhaps not quite as dramatically. But keep adding farmers. By the time there are twenty people working the same plot, they are tripping over each other, duplicating effort, and competing for the same rows. Each additional farmer produces less additional grain than the one before. Eventually, adding another farmer produces no additional grain at all — and beyond that point, the field may actually produce less.
Turgot had identified the law of diminishing returns: when you increase one input while holding everything else constant, each additional unit of that input eventually produces less additional output than the previous unit. David Ricardo, Thomas Malthus, and Edward West formalized this into a cornerstone of classical economics in 1815, applying it to land rent and agricultural production. But the principle extends far beyond farming. It applies to every system where you are pushing a single lever to improve a fixed process.
It applies, in particular, to optimization itself.
The curve always bends
The previous lesson established that small improvements compound. A 1% gain each week becomes transformative over a year. That is true — but it tells only half the story. The other half is that the ease of finding each successive 1% gain decreases over time.
Here is why. When you begin optimizing any system, the low-hanging fruit is abundant. The first pass through a process reveals obvious waste, clear bottlenecks, and simple fixes that produce dramatic improvements. The second pass finds subtler inefficiencies. The third pass requires careful measurement just to identify what to change. By the fifth or sixth pass, you are making microscopic adjustments to a system that is already performing well — and each adjustment requires more expertise, more time, and more risk of breaking something that works.
This is not a failure of the optimizer. It is a mathematical inevitability. The improvement curve for any bounded system is concave: steep at first, then gradually flattening, asymptotically approaching a ceiling it can never reach. The first 20% of effort captures roughly 80% of available gains. The remaining 80% of effort fights over the last 20%.
Vilfredo Pareto first documented this pattern empirically in 1896, observing that 80% of Italy's land was owned by 20% of the population. The distribution he identified — later formalized as the Pareto Principle — appears with striking consistency across domains. Microsoft discovered that fixing the top 20% of reported bugs eliminated 80% of system crashes. Sales organizations find that 20% of clients generate 80% of revenue. And optimizers find, repeatedly, that 20% of their effort produces 80% of their improvement.
The implication is not that the remaining 20% of improvement is worthless. It is that the cost of obtaining it is dramatically higher per unit of gain. A developer who spends two hours refactoring a critical hot path may cut response time by 40%. The next two hours might yield 5%. The next two hours after that, 0.3%. The function being optimized has not changed — but each unit of input now buys radically less output.
Herbert Simon and the economics of "good enough"
In 1947, the economist and cognitive scientist Herbert Simon asked a question that challenged the foundations of rational choice theory: if optimization always has diminishing returns, why do economists assume that rational actors always maximize?
Simon's answer was that they don't. Real humans — with limited time, limited information, and limited cognitive capacity — do not search for the best possible option. They search until they find an option that meets a threshold of acceptability, and then they stop. Simon coined the term "satisficing" — a portmanteau of "satisfy" and "suffice" — to describe this strategy.
The insight was profound: satisficing is not laziness or irrationality. It is an efficient response to the economics of search. When the cost of continuing to search for a better option exceeds the expected value of finding one, the rational move is to stop. A study of 628 used car dealers confirmed this empirically — 97% relied on a form of satisficing, setting initial prices in the middle of comparable ranges rather than running exhaustive market analyses for each vehicle.
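Satisficing is simple enough to sketch as a search procedure. The example below is a hypothetical illustration, not drawn from the dealer study: inspect options in arrival order, stop at the first one that clears an acceptability threshold, and only fall back to exhaustive comparison if nothing qualifies.

```python
import random

def satisfice(options, threshold, evaluate):
    """Stop at the first option whose value meets the threshold."""
    for inspected, option in enumerate(options, start=1):
        if evaluate(option) >= threshold:
            return option, inspected
    # No option met the threshold: fall back to picking the best of all of them.
    return max(options, key=evaluate), len(options)

random.seed(0)
offers = [random.uniform(0, 100) for _ in range(1000)]
choice, inspected = satisfice(offers, threshold=90, evaluate=lambda v: v)
print(f"accepted {choice:.1f} after inspecting {inspected} of {len(offers)} options")
```

The maximizer pays for all 1000 evaluations every time; the satisficer typically pays for a handful, and gives up only the sliver of value between "acceptable" and "best possible".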
Simon received the Nobel Prize in Economics in 1978 for this work on bounded rationality and decision-making. His core insight maps directly onto optimization: there is always a point where the marginal cost of further improvement exceeds the marginal benefit. The person who recognizes that point and stops is not settling. They are optimizing their optimization — allocating their finite resources to where those resources produce the most value.
The perfectionist, by contrast, is someone who cannot recognize that point. They continue optimizing past it, spending ten units of effort to capture one unit of improvement, because they are optimizing for the metric rather than for the value the metric was meant to represent.
Over-optimization makes things worse
Diminishing returns describe a curve that flattens. But there is something worse than a flat curve: a curve that turns downward. In many domains, optimization pushed past its useful range does not merely stop helping — it actively causes harm.
Machine learning provides the clearest illustration. When you train a model on data, the model's performance on that training data improves with each iteration. But at some point, the model begins "memorizing" the specific noise in the training set rather than learning the underlying patterns. This is called overfitting. The model's performance on its training data continues to improve, but its performance on new, unseen data — the only performance that matters — degrades. Research published in the Journal of Machine Learning Research found that the degradation from overfitting the model selection criterion can be "surprisingly large, and the effects are often of comparable magnitude to differences in performance between learning algorithms." The optimizer has not just stopped gaining — they have made their system objectively worse by optimizing too aggressively.
This pattern appears everywhere once you know to look for it. Charles Goodhart, a British economist, formalized it in 1975: "When a measure becomes a target, it ceases to be a good measure." Goodhart's Law describes what happens when optimization becomes divorced from purpose. The British colonial government in India offered bounties for dead cobras to reduce the cobra population. Citizens responded rationally to the incentive: they bred cobras, killed them, and collected the bounty. When the government discovered the fraud and cancelled the program, the breeders released their now-worthless snakes. The cobra population ended up larger than before the optimization began.
In software development, teams that optimize for story points completed per sprint learn to inflate estimates. Call centers that optimize for calls answered per hour train operators to end calls prematurely, destroying customer satisfaction. Hospitals that optimize for reduced length of stay discharge patients too early, increasing emergency readmissions. In each case, the organization optimized itself past the point of diminishing returns and into the territory of active harm — not because the people involved were incompetent, but because the optimization itself was never designed to stop.
The psychology of not stopping
If diminishing returns are mathematically inevitable, and if over-optimization can make things worse, why do people keep optimizing past the point of usefulness?
The answer is partly psychological. Perfectionism research reveals that the drive to keep polishing is often not about quality at all — it is about anxiety. Brené Brown's research on vulnerability identifies perfectionism as a defense mechanism: "the belief that if we look perfect, act perfect, and deliver perfect results, we'll avoid criticism or rejection." The perfectionist is not optimizing their work. They are optimizing their shield against judgment. And because no shield is ever perfect, the optimization never terminates.
Neuroscience research supports this: perfectionism shifts the brain from creative and adaptive thinking into threat-monitoring mode, activating stress responses that impair working memory, problem-solving, and decision speed. The person deep in diminishing returns is not just producing less improvement per unit of effort — they are actually degrading their own cognitive performance in the process. They are simultaneously getting less out and putting more in.
Anders Ericsson's research on expert performance reveals a related trap. Most people practicing a skill reach a "satisfactory level that is stable and autonomous" — a plateau where performance is generated automatically with minimal effort. The plateau feels like a ceiling, and many people interpret it as the limit of their ability. Ericsson showed that deliberate practice — focused, effortful, targeted work on specific weaknesses — can push past plateaus. But here is the critical nuance: even deliberate practice follows diminishing returns. A concert pianist's 10,000th hour of practice does not produce the same magnitude of improvement as their 100th hour. The returns are real but smaller, and the cost in effort and time remains constant or increases.
The question is never whether more improvement is possible. More improvement is almost always possible. The question is whether the cost of that improvement is justified by its value. And answering that question requires stepping outside the optimization loop to evaluate it — something the deeply engaged optimizer often cannot do.
Reading the curve in your own systems
Recognizing diminishing returns in abstract examples is easy. Recognizing them in your own work — in real time, while you are emotionally invested in the outcome — is one of the hardest epistemic skills you can develop.
Here are the signals that you have entered the zone of diminishing returns:
The improvements become invisible to anyone but you. When your last three rounds of editing changed nothing that a reader would notice, the curve has flattened. When your last performance tweak shaved off milliseconds that no user can perceive, the curve has flattened. The gains are real in the technical sense. They are meaningless in the practical sense.
The cost of measurement exceeds the cost of the problem. When you spend more time measuring whether an improvement worked than you spent implementing it, you have likely passed the inflection point. Measurement is valuable — but when measurement itself becomes the bottleneck, it is a signal that the thing being measured is too small to matter.
You are optimizing the metric, not the outcome. This is Goodhart's Law in practice. When you catch yourself improving a number for the satisfaction of seeing the number improve, rather than because the underlying system needs to be better, you have crossed from optimization into compulsion.
Each iteration takes longer than the last. In the early stages of optimization, improvements come quickly. In the late stages, each round requires more analysis, more deliberation, more testing, and more recovery from unintended side effects. If your second round of optimization took twice as long as your first, and your third is taking twice as long as your second, you are on an exponential cost curve chasing a logarithmic return curve.
You cannot articulate what "done" looks like. The clearest sign of runaway optimization is the absence of a stopping criterion. If you cannot state, in advance, the specific threshold at which you will stop optimizing and move on, you will not stop. The optimization will continue until you run out of time, energy, or patience — not because you reached an intentional endpoint.
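The exponential-cost, logarithmic-return squeeze described in the fourth signal is easy to see with toy numbers. Assuming, purely for illustration, that each round costs twice as much and yields half as much as the one before, the price of a single percentage point of improvement quadruples every round:

```python
cost_hours, gain_pct = 1.0, 40.0   # illustrative starting values, not measured data
for round_no in range(1, 7):
    print(f"round {round_no}: cost {cost_hours:5.1f}h, "
          f"gain {gain_pct:5.2f}%, cost per point {cost_hours / gain_pct:6.3f}h")
    cost_hours *= 2   # each round takes twice the effort of the last
    gain_pct /= 2     # and yields half the improvement
```

Six rounds in, a point of improvement costs more than a thousand times what it cost in round one. Nothing about the work changed; only its position on the curve did.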
The stopping-rule discipline
The antidote to diminishing returns is not to avoid optimization. Optimization is how agents improve, and the previous lesson showed that small improvements compound into transformative change. The antidote is to build explicit stopping rules into every optimization effort before it begins.
A stopping rule is a pre-commitment: "I will optimize this until X, and then I will stop and redirect my effort." The value of the stopping rule is that it is set before you are emotionally invested in the outcome — before the sunk cost fallacy, the perfectionist anxiety, and the optimizer's momentum make it psychologically difficult to stop.
Effective stopping rules take several forms. Threshold-based: "I will optimize until response time is under 200ms." Time-boxed: "I will spend two hours on this, then ship whatever I have." Iteration-limited: "I will do three rounds of revision, then publish." Ratio-based: "I will stop when my last improvement took more than 3x the effort of the improvement before it."
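These four rule shapes can be written down as simple predicates over the state of an optimization run. The sketch below is a hypothetical harness, not a library API; the ratio rule is adapted to equal-effort rounds (stop when the latest gain falls below a third of the previous gain), and the demo numbers are invented.

```python
import time

def optimize(step, stop_rules, max_iters=1000):
    """Run step() (which returns the new metric value) until any stopping rule fires."""
    history, start = [], time.monotonic()
    for i in range(max_iters):
        history.append(step())
        state = {"iters": i + 1, "elapsed": time.monotonic() - start, "history": history}
        fired = [name for name, rule in stop_rules.items() if rule(state)]
        if fired:
            return history[-1], fired
    return history[-1], ["max_iters"]

def last_gain_collapsed(state):
    # Ratio rule for equal-effort rounds on a decreasing metric:
    # stop when the latest improvement is under a third of the previous one.
    h = state["history"]
    return len(h) >= 3 and (h[-2] - h[-1]) * 3 < (h[-3] - h[-2])

rules = {
    "threshold": lambda s: s["history"][-1] < 200.0,   # response time under 200ms
    "time_box":  lambda s: s["elapsed"] > 2 * 3600,    # two hours, then ship
    "iteration": lambda s: s["iters"] >= 3,            # three rounds, then publish
    "ratio":     last_gain_collapsed,
}

# Demo: each tuning round halves the gap between current latency and a 150ms floor.
latency = [800.0]
def tune():
    latency.append((latency[-1] + 150.0) / 2)
    return latency[-1]

final, why = optimize(tune, rules)
print(f"stopped at {final:.1f}ms because: {why}")
```

The point of the harness is that the rules are declared before the loop starts: the decision to stop is made by the pre-commitment, not by the person who is three rounds deep and emotionally invested.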
The specific rule matters less than the discipline of having one. The person who enters an optimization effort with a stopping rule makes better decisions about when to continue and when to redirect, because they have a reference point that exists outside the gravitational pull of the work itself.
From compounding to diminishing — and what comes next
The previous lesson taught you that small improvements compound. This lesson teaches the counterbalance: that each successive improvement is harder to find and smaller when found. Both are true simultaneously. Compounding operates over time — the accumulated weight of many improvements creates transformative change. Diminishing returns operate within a single optimization cycle — each round produces less than the last.
The master optimizer holds both truths at once. They pursue improvement relentlessly, but they switch targets when the current curve flattens. Instead of pushing the same lever past its useful range, they identify the next lever — the next bottleneck, the next system, the next domain where early-stage improvements are still available. They compound across systems rather than over-optimizing within one.
This is the bridge to the next lesson: knowing when to stop optimizing. Understanding diminishing returns intellectually is necessary but not sufficient. The next step is developing the practical judgment to recognize the inflection point in real time and the discipline to redirect effort when you reach it — because "good enough" is not a failure of ambition. It is the recognition that your ambition is better served elsewhere.