You have been optimizing in the wrong direction
In L-0574, you learned about integration optimization — improving how an agent connects and coordinates with other systems. That lesson assumed your agent's steps were necessary and focused on making the connections between them more efficient. But there is a prior question that most people never ask: should all of those steps exist in the first place?
The default human response to a slow or inefficient process is to make each step faster. We add tools, automate transitions, parallelize tasks, and invest effort in shaving seconds off every component. This is additive optimization — solving problems by adding improvements on top of existing structures. It is the obvious move. It is often the wrong move.
The less obvious and frequently more powerful move is subtractive optimization: making a process better by removing parts of it. Not replacing steps with faster steps. Removing steps entirely. The fastest step is the one that does not exist. The cheapest operation is the one you never perform. The most reliable component is the one that was never built.
This lesson is about the discipline of asking, before you improve anything, whether that thing should exist at all.
The addition bias: why your brain defaults to more
In 2021, Gabrielle Adams, Benjamin Converse, Andrew Hales, and Leidy Klotz published a landmark study in Nature demonstrating that humans systematically overlook subtractive solutions. Across eight experiments, participants consistently defaulted to adding elements when asked to improve or change something — even when removing elements was simpler, cheaper, and more effective.
In one experiment, participants were given a Lego structure and asked to modify it so that it could support a brick placed on top. The optimal solution was to remove a single block, creating a stable, flat surface. Yet the majority of participants added blocks instead, building elaborate supports on top of an already unstable structure. The subtractive solution was faster, required no additional resources, and produced a more stable result. Most people never considered it.
The researchers found three conditions that made the addition bias worse. First, when the task did not explicitly cue participants to consider subtraction, subtractive solutions dropped significantly — 41 percent of uncued participants found them, versus 61 percent of those who were cued. Second, when participants had only one opportunity to solve the problem rather than several, they were even less likely to discover the subtractive path. Third, when cognitive load was higher — when participants were mentally busy — the bias toward addition intensified.
This is not a quirk of laboratory puzzles. It is a systematic feature of human cognition. When you look at a slow process, your brain generates additive solutions: add a tool, add a step, add a check, add a meeting, add a notification, add a dashboard. The subtractive solution — remove a step, eliminate a meeting, delete a notification, abandon a check that catches nothing — rarely surfaces spontaneously. You must train yourself to search for it.
Klotz, in his 2021 book Subtract, argues that this bias explains phenomena far beyond individual problem-solving. Institutional bloat, regulatory accumulation, feature creep in software, credential inflation in hiring, and the relentless expansion of organizational processes all reflect the same underlying pattern: people add by default and subtract only when forced. The organizations, processes, and agents that achieve enduring efficiency are the ones that have built subtraction into their operating logic — that ask, as a matter of routine, what can we stop doing.
Occam's razor: the oldest argument for less
The intellectual case for subtraction predates modern psychology by seven centuries. William of Ockham, a fourteenth-century Franciscan friar and logician, articulated a principle that became one of the foundational heuristics of Western thought: entities should not be multiplied beyond necessity. If two explanations account for the same evidence, prefer the one with fewer assumptions. If two designs achieve the same function, prefer the one with fewer components.
Occam's razor is not a claim that simpler explanations are always correct. It is a claim about where the burden of proof lies. Every additional element in a system — every step in a process, every assumption in a theory, every feature in a product — must justify its existence. The default state is absence. Presence requires a reason.
The probabilistic justification is straightforward. By the conjunction rule of probability theory, a conjunction of claims (A and B and C) can be no more probable than any individual claim within it — and is strictly less probable whenever the added claims carry any uncertainty. Every element you add to a system introduces a new potential failure point, a new maintenance burden, and a new interaction effect with every other element. The system does not just grow linearly with each addition. It grows combinatorially, because each new element can interact with every existing element. The complexity cost of addition is always higher than it appears, and the simplicity benefit of subtraction is always greater than it appears.
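The combinatorial claim can be made concrete with a few lines of arithmetic. The sketch below (plain Python, illustrative numbers only) counts the distinct pairwise interactions in a system of n elements: the elements grow linearly, but the pairs that must be understood and maintained grow quadratically.

```python
# Why systems grow combinatorially, not linearly, as elements are added.
# Each of n elements can interact with every other element, so the number
# of pairwise interactions to understand and maintain is n*(n-1)/2.

def pairwise_interactions(n: int) -> int:
    """Number of distinct element pairs in a system of n elements."""
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n:>3} elements -> {pairwise_interactions(n):>4} potential interactions")
    # 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```

Note the asymmetry: doubling the number of elements roughly quadruples the interactions, which is why removing one element buys back more simplicity than adding one element appears to cost.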
For your cognitive agents and personal processes, Occam's razor translates to a concrete practice: before you add anything, verify that every existing element has earned its place. Before you improve a step, confirm that the step needs to exist.
Via negativa: Taleb's case for subtractive knowledge
Nassim Nicholas Taleb formalized the subtraction principle for decision-making under uncertainty in Antifragile (2012), drawing on the theological concept of via negativa — the approach of defining something by what it is not, rather than by what it is.
Taleb's argument is that subtractive knowledge is more robust than additive knowledge. We can be far more confident about what harms us than about what helps us. We know with high certainty that smoking causes cancer. We know with much less certainty which supplement improves longevity. We know with high certainty that unnecessary complexity creates fragility. We know with much less certainty which new feature will produce value.
This asymmetry has a direct implication for optimization: removing known sources of harm, waste, and fragility is a more reliable strategy than adding speculative sources of improvement. Taleb calls this "subtractive medicine" when applied to health — stop doing harmful things before you start doing beneficial things — but the principle extends to any domain.
For agents and processes, via negativa means: before you add a new tool to your workflow, remove the tools that are not producing value. Before you add a new step to your morning routine, remove the steps that add friction without adding function. Before you add a new meeting to the calendar, remove the meetings that produce no decisions and no information. Subtraction operates on known inefficiencies. Addition operates on hoped-for improvements. The first is a bet on evidence. The second is a bet on prediction. In a complex, uncertain world, evidence beats prediction.
Taleb's framework also reveals why subtraction feels wrong. Adding something feels like progress. Removing something feels like loss. The asymmetry is psychological, not logical. A process with ten steps feels more thorough than a process with seven steps, even when the seven-step version produces identical outcomes. We confuse effort with value, complexity with rigor, and activity with progress. Via negativa is the discipline of resisting that confusion.
Lean manufacturing: waste elimination as a system
The most comprehensive operational framework for subtraction is the Toyota Production System, developed by Taiichi Ohno at Toyota beginning in the 1950s. Ohno did not frame his work as optimization. He framed it as waste elimination — muda — and he identified seven categories of waste that could be systematically removed from any production process.
The seven wastes are: transportation (unnecessary movement of materials or information), inventory (excess stock not aligned with demand), motion (inefficient human or machine movement), waiting (delays in workflow), overproduction (producing more than needed or earlier than needed), over-processing (adding more work or features than required), and defects (errors requiring rework).
What makes Ohno's framework powerful is that it redefines what optimization means. In the traditional view, optimization means making processes faster or outputs better. In the lean view, optimization means eliminating activities that consume resources without creating value. The unit of analysis is not the step — it is the waste embedded in the step. A step that adds value stays. A step that adds no value goes. A step that adds some value but also contains waste gets redesigned to preserve the value and remove the waste.
Over-processing is the waste category most relevant to cognitive and agentic work. It means doing more work than the outcome requires. Formatting a document that no one reads. Running an analysis to three decimal places when one decimal place drives the same decision. Reviewing a checklist item that has never failed. Writing a report section that no stakeholder uses. Each of these activities feels productive — you are doing something — but the something adds cost without adding value. Ohno's contribution was to make that distinction visible and systematic.
The lean insight, applied to your own processes: audit every step by asking not "how can this be done better?" but "does this need to be done at all?" The first question assumes the step is necessary and seeks improvement. The second question challenges the assumption of necessity itself.
Design as subtraction: Rams and Saint-Exupery
The subtraction principle appears with equal force in design, where it has been articulated by two figures whose formulations have become canonical.
Antoine de Saint-Exupery, the French aviator and author, wrote in Terre des Hommes (1939): "Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away, when a body has been stripped down to its nakedness." He was writing about the evolution of aircraft design — how early planes were festooned with struts, wires, and structural redundancies that were gradually stripped away as engineers learned which elements were load-bearing and which were merely inherited from earlier, less understood designs. The mature aircraft was not the one with the most features. It was the one from which nothing further could be removed without loss of function.
Dieter Rams, the industrial designer who led Braun's design department from 1961 to 1995, condensed the same principle into three words: "Less, but better." His ten principles of good design, formulated in the 1970s and 1980s, culminate in the tenth: "Good design is as little design as possible." Rams did not mean that design should be minimal for aesthetic reasons. He meant that every element in a product should earn its presence through function. A button that serves no purpose, a surface that communicates nothing, a feature that adds complexity without adding capability — these are not neutral. They are actively harmful, because they consume the user's attention, increase manufacturing cost, and create maintenance burden without returning value.
Rams' designs at Braun — the SK4 record player, the T3 pocket radio, the 606 Universal Shelving System — became enduring precisely because they contained nothing unnecessary. Jonathan Ive, who led Apple's product design for two decades, cited Rams as his primary influence. The iPhone's design philosophy — remove buttons, remove ports, remove visible screws, remove every element that does not justify its presence — is Rams' subtraction principle applied to consumer electronics at global scale.
For your agents and processes, the design principle is the same: every element must justify its presence. Not "is this element harmful?" — a bar that almost everything clears — but "does this element produce value that exceeds its cost in complexity, maintenance, and attention?" That is a bar that many steps, features, habits, and procedures cannot clear.
Pruning in neural networks: subtraction as machine intelligence
The subtraction principle has a precise technical implementation in machine learning: neural network pruning. A neural network is trained with millions or billions of parameters — weighted connections between nodes. After training, many of these parameters contribute negligibly to the network's performance. Pruning removes them.
The results are consistently striking. Research demonstrates that neural networks can be pruned by 50 to 90 percent of their parameters with minimal loss in accuracy — and sometimes with an actual improvement in performance. The pruned network is faster (fewer computations per inference), smaller (less memory), cheaper to run (less hardware), and in some cases more accurate (reduced overfitting from parameter redundancy).
The lottery ticket hypothesis, proposed by Jonathan Frankle and Michael Carbin in 2019, makes an even stronger claim: within a large, randomly initialized network, there exist small subnetworks that, if trained in isolation, would achieve performance comparable to the full network. The large network contains the solution — and a vast amount of unnecessary structure surrounding it. Pruning finds the solution by removing the surrounding noise.
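To make the mechanism concrete, here is a toy sketch of magnitude pruning — the simplest pruning criterion, which removes the weights closest to zero. It treats a "network" as a flat list of numbers; real pruning operates on trained weight tensors and is typically followed by fine-tuning, so take this as an illustration of the idea, not a recipe.

```python
# Toy magnitude pruning: zero out the smallest-magnitude fraction of
# weights and count how many survive. Illustration only.

def prune_smallest(weights, fraction):
    """Return a copy with the smallest `fraction` of weights (by |w|) set to 0."""
    k = int(len(weights) * fraction)
    cutoff = sorted(abs(w) for w in weights)[k] if k > 0 else 0.0
    return [0.0 if abs(w) < cutoff else w for w in weights]

# Hypothetical weights: a few large (load-bearing) values amid near-zero noise.
weights = [0.01, -0.8, 0.02, 1.5, -0.03, 0.9, 0.005, -1.1]
pruned = prune_smallest(weights, 0.5)
survivors = [w for w in pruned if w != 0.0]
print(f"{len(survivors)} of {len(weights)} weights survive")  # 4 of 8 weights survive
```

Half the parameters are gone, yet every weight that carried meaningful signal remains — the miniature version of pruning a network by 50 to 90 percent with minimal loss in accuracy.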
This is a direct analogy to process optimization. Your current workflow, like a large neural network, was built by accumulation — adding steps, tools, checks, and habits over time in response to various needs, some of which no longer exist. Within that accumulated structure, there is a leaner process that produces the same outcomes with fewer steps. The discipline is to find that leaner process not by redesigning from scratch, but by systematically removing elements and measuring whether the output changes.
The pruning analogy also reveals the correct method for subtraction: iterative removal with verification. You do not remove everything at once. You remove one element, measure the impact, and proceed based on evidence. If the output degrades, you restore the element — it was load-bearing. If the output holds, the element was dead weight. This iterative approach is more reliable than either keeping everything (which guarantees accumulated waste) or removing everything and rebuilding (which risks discarding hidden value).
The cognitive case: fewer steps mean fewer failure points
The case for removing unnecessary steps extends beyond efficiency into reliability. Every step in a process is a potential failure point. A step can be executed incorrectly. A step can be skipped accidentally. A step can interact badly with an adjacent step under conditions you did not anticipate. A step can consume attention that was needed elsewhere.
The reliability of a sequential process is the product of the reliabilities of its individual steps. If each step has a 95 percent success rate, a ten-step process has a reliability of 0.95 to the tenth power — about 60 percent. A seven-step process, removing three unnecessary steps, has a reliability of about 70 percent. You gained ten percentage points of reliability not by making any step more reliable, but by having fewer steps that could fail.
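The arithmetic in the paragraph above, written out as a two-line function:

```python
# Reliability of a sequential process is the product of its step
# reliabilities; with identical steps this is simply p ** n.

def process_reliability(p: float, n_steps: int) -> float:
    """Probability that all n_steps succeed, each independently with probability p."""
    return p ** n_steps

print(round(process_reliability(0.95, 10), 2))  # 0.6  (ten-step process)
print(round(process_reliability(0.95, 7), 2))   # 0.7  (seven-step process)
```

The model assumes steps fail independently; in practice failures often correlate, which tends to make long processes even more fragile than the product rule suggests.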
For cognitive agents — your personal processes, your decision protocols, your daily routines — this reliability argument is especially potent. Every step in a cognitive process consumes working memory, requires a context switch, and creates an opportunity for distraction or error. A morning routine with fourteen steps is not just slow. It is fragile. Each step is a point where your attention can be captured by something else, where your energy can be diverted, where the process can stall. Removing steps does not just save time. It saves cognitive resources and reduces the probability that the process breaks down.
This is why the most sustainable personal systems are often the simplest. Not because their users lack sophistication, but because their designers understood that every additional step is a cost — in time, in attention, in reliability, and in the willpower required to execute the step on days when motivation is low.
Applying subtraction to your agents
Here is a concrete protocol for subtractive optimization, adapted from the principles above.
Step 1: Inventory every step. Choose a process — a workflow, a routine, a decision protocol, an agent pipeline. List every step, including the ones that seem trivial or obvious. The trivial steps are often the ones most ripe for removal, because no one has questioned them.
Step 2: Classify each step. For each step, assign it to one of three categories. Value-creating: this step directly produces an output that the process needs. Value-enabling: this step does not produce output itself but is necessary for a value-creating step to function (setup, initialization, handoff). Non-value-adding: this step produces no output and enables no other step — it exists because of habit, legacy, or unexamined assumption.
Step 3: Challenge the enabling steps. Value-enabling steps are where hidden waste accumulates. A "necessary" data-formatting step might exist only because two tools do not share a common format — a problem that could be solved once rather than repeated at every execution. A "required" approval step might exist because of a risk that no longer applies. Ask of each enabling step: is this step compensating for a problem that could be solved at its source?
Step 4: Remove and measure. Take the non-value-adding steps and the suspect enabling steps. Remove them — not permanently, but experimentally. Run the process without them for a defined period. Measure whether the output quality changes. If it does not, the steps were waste. Formalize their removal. If quality degrades, restore the step and reclassify it as value-enabling or value-creating.
Step 5: Repeat on a cycle. Waste accumulates continuously. Steps that were value-creating when added may become non-value-adding as conditions change. A quarterly subtraction review — a deliberate pass through your key processes asking "what can I stop doing?" — prevents the gradual accumulation of unnecessary complexity that Klotz's research shows humans are biased to produce.
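Steps 2 and 4 of the protocol can be sketched as a small audit loop. This is a hedged illustration, not a tested framework: `run_process`, `score`, and the step names are hypothetical placeholders for whatever lets you execute your process with a chosen subset of steps and score its output.

```python
# Remove-and-measure, one step at a time: drop a step, rescore the
# output, and keep the removal only if quality does not degrade.
from typing import Callable, List

def audit_steps(steps: List[str],
                run_process: Callable[[List[str]], float],
                tolerance: float = 0.0) -> List[str]:
    """Return the lean subset of steps that preserves output quality."""
    baseline = run_process(steps)
    kept = list(steps)
    for step in steps:
        trial = [s for s in kept if s != step]
        if run_process(trial) >= baseline - tolerance:
            kept = trial  # dead weight: formalize the removal
        # otherwise the step was load-bearing; it stays in `kept`
    return kept

# Toy scorer for a made-up workflow: only "draft" and "review"
# actually contribute value to the output.
def score(steps: List[str]) -> float:
    return sum(1.0 for s in steps if s in ("draft", "review"))

lean = audit_steps(["format", "draft", "archive", "review", "notify"], score)
print(lean)  # ['draft', 'review']
```

Removing one step at a time mirrors the pruning discipline: a degraded score restores the step automatically (it never leaves `kept`), while an unchanged score formalizes the removal.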
The hardest subtraction: removing what you built
The deepest obstacle to subtraction is not cognitive. It is emotional. The steps in your process are there because you put them there. Each one represents a past decision, a past insight, a past investment of effort. Removing a step feels like admitting the decision was wrong. It feels like wasting the effort you invested in building and refining that step. It feels like loss.
This is the sunk cost fallacy applied to process design. The effort you invested in building a step is gone regardless of whether the step remains. The only relevant question is whether the step produces value now and going forward. If it does not, keeping it does not honor your past investment. It merely ensures that the cost of that investment continues to compound — in execution time, in cognitive load, in maintenance burden, in failure probability — with every iteration.
The discipline of subtraction requires a specific emotional skill: the willingness to look at something you built, recognize that it no longer serves its purpose, and remove it without interpreting the removal as failure. This is not easy. It is necessary. The agents and processes that remain effective over time are not the ones that only grow. They are the ones that periodically shed what is no longer needed, making room for what is.
From subtraction to structured optimization
You now have the most powerful single technique in the optimization toolkit: the discipline of removing what does not earn its place. Before you make anything faster, ask whether it should exist. Before you improve a step, ask whether the step produces value. Before you add a tool, a check, a meeting, or a feature, ask what you can remove instead.
In L-0576, you will learn how to structure this technique — along with measurement, iteration, and other optimization methods — into focused optimization sprints: time-boxed periods dedicated to systematically improving a specific agent or process. Subtraction will be one of your primary instruments. But the sprint framework will give you the structure to apply it consistently rather than sporadically, and to compound the gains across multiple optimization cycles.
Sources:
- Adams, G. S., Converse, B. A., Hales, A. H., & Klotz, L. E. (2021). "People systematically overlook subtractive changes." Nature, 592(7853), 258-261.
- Klotz, L. (2021). Subtract: The Untapped Science of Less. Flatiron Books.
- Taleb, N. N. (2012). Antifragile: Things That Gain from Disorder. Random House. Chapters on via negativa and subtractive knowledge.
- Ohno, T. (1988). Toyota Production System: Beyond Large-Scale Production. Productivity Press. Original formulation of the seven wastes (muda).
- Rams, D. (1976). "Ten Principles for Good Design." Collected in Lovell, S. (2011). Dieter Rams: As Little Design as Possible. Phaidon.
- Saint-Exupery, A. de (1939). Terre des Hommes (Wind, Sand and Stars). Gallimard. Chapter 3.
- Frankle, J., & Carbin, M. (2019). "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks." ICLR 2019.