The illusion of novelty
You face decisions every day that feel new. A hiring choice you have not encountered before. A strategic pivot you have never considered. A conflict with a colleague that seems to require fresh thinking. The surface details are always different — different names, different numbers, different stakes — and this surface variation creates the persistent illusion that each decision requires original analysis.
It does not. Most decisions you face are variations of types you have already encountered. The hiring choice is a selection-under-uncertainty decision. The strategic pivot is a reversibility-versus-commitment decision. The interpersonal conflict is a short-term-discomfort-versus-long-term-relationship decision. You have made each of these types before, possibly hundreds of times, in different costumes.
The most consequential skill in your decision-making infrastructure is not the ability to reason through novel problems from first principles. It is the ability to recognize that most problems are not novel at all — that they belong to a type you have seen before, and that the type already has a known process for resolution. This is the difference between the expert and the novice. The expert is not smarter. The expert is faster at classification.
What the experts actually do: Klein's Recognition-Primed Decision model
In the late 1980s, research psychologist Gary Klein set out to study how people make decisions under extreme time pressure. The classical model of decision-making said that rational agents list their options, evaluate each against weighted criteria, and select the optimal one. Klein suspected this was not what actually happened in the field. He was right.
Klein, along with Roberta Calderwood and Anne Clinton-Cirocco, studied fireground commanders — people who make life-and-death decisions in minutes while buildings burn around them. What they found contradicted classical theory entirely. These experienced commanders did not generate multiple options and compare them. They generated one option — usually the first that came to mind — and it was typically good enough. They then mentally simulated that option to check for problems, and if it held up, they executed it immediately (Klein, 1998).
Klein formalized this as the Recognition-Primed Decision (RPD) model. The model describes three levels of operation. At the simplest level, the decision-maker recognizes the situation as a familiar type, and the type immediately suggests an appropriate course of action. At the second level, the decision-maker recognizes the type but needs to evaluate the suggested action through mental simulation before committing. At the third level, the situation is ambiguous, requiring the decision-maker to reassess what type of situation they are facing before a response becomes clear (Klein, 1998).
The critical insight is at the foundation: the first cognitive operation is not "what should I do?" It is "what type of situation is this?" The experienced commander looks at a burning building and does not see a unique event. They see a type — a basement fire with potential structural compromise, a ventilation-limited fire in a multi-story building, a chemical storage fire near residential areas. The type carries with it a package: typical causes, expected developments, relevant dangers, plausible goals, and standard responses. The commander does not derive this package from first principles. They retrieve it from experience.
Klein's subsequent research across emergency medical technicians, military officers, intensive care nurses, and nuclear plant operators confirmed the same pattern. Across domains, experienced decision-makers spend most of their cognitive effort on situation recognition — on classifying the type — rather than on option comparison. In approximately 80% of the decisions Klein studied, the first option generated by recognition was workable (Klein, 1998). The experts were not choosing between alternatives. They were matching situations to types, and the types provided the answers.
Pattern recognition is the mechanism of expertise
Klein's findings align with a deeper principle from cognitive science: expertise is, at its core, pattern recognition. The foundational research comes from chess.
In 1946, Adriaan de Groot studied chess players of different skill levels to understand what made masters better. He initially expected masters to search more deeply — to think more moves ahead than weaker players. Instead, he found that masters and strong amateurs searched to roughly similar depths. The difference was in perception. Masters looked at a board position and immediately saw meaningful structures — attacking configurations, defensive weaknesses, strategic patterns — where weaker players saw individual pieces (de Groot, 1965).
Herbert Simon and William Chase extended this work in 1973 with their chunking theory. They demonstrated that chess masters could reconstruct a game position almost perfectly after seeing it for only five seconds, while novices recalled only a few pieces. But when the pieces were arranged randomly — not drawn from real games — the masters' advantage vanished. The masters were not memorizing individual piece locations. They were recognizing familiar patterns, stored in long-term memory as chunks. Simon estimated that a master stores roughly 50,000 such chunks, comparable to the vocabulary of a college-educated adult (Chase & Simon, 1973).
A chunk is not just a stored image. It is a package of meaning. When a chess master recognizes a Sicilian Defense structure, the recognition activates associated knowledge: typical plans for both sides, common tactical motifs, key squares, and known pitfalls. The pattern carries the decision. The master does not derive the plan through exhaustive search. The recognized pattern suggests the plan, and the master then verifies it through calculation.
This is exactly what happens when you recognize a decision type in your own life. The recognition activates associated knowledge: what variables matter, what tradeoffs exist, what has worked before, what to watch out for. The type carries the framework. The only question is whether you have learned to see the types or whether you are still staring at the individual pieces.
A taxonomy of recurring decision types
If most decisions are variations of recurring types, what are those types? Different frameworks slice the space differently, and the right taxonomy depends on your context. But certain categories appear across domains with remarkable consistency.
Selection decisions involve choosing one option from a defined set. Hiring, purchasing, tool selection, which restaurant to eat at. The variables that matter: your criteria, the weight of each criterion, the reliability of your information about each option, and the cost of being wrong. Once you recognize a decision as a selection type, you can apply a consistent evaluation protocol instead of reasoning ad hoc each time.
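That evaluation protocol can be sketched in code. This is a minimal, hypothetical illustration — the criteria names, weights, and 0-10 scores are invented for the example, not drawn from any particular method in the text:

```python
# Hypothetical sketch of a reusable protocol for selection-type decisions:
# weighted criteria applied consistently instead of ad hoc reasoning.

def score_option(scores, weights):
    """Weighted sum of criterion scores (0-10 scale) for one option."""
    return sum(scores[c] * w for c, w in weights.items())

def select(options, weights):
    """Return (name, scores) pairs ranked by weighted score, best first."""
    return sorted(options.items(),
                  key=lambda kv: score_option(kv[1], weights),
                  reverse=True)

# Illustrative criteria for a hiring-style selection decision.
weights = {"skill_fit": 0.40, "reliability": 0.35, "cost": 0.25}
options = {
    "candidate_a": {"skill_fit": 8, "reliability": 6, "cost": 7},
    "candidate_b": {"skill_fit": 6, "reliability": 9, "cost": 8},
}
ranked = select(options, weights)
```

The value is not the arithmetic, which is trivial, but the reuse: once a decision is recognized as a selection type, the same protocol applies to contractors, tools, and restaurants alike.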
Allocation decisions involve distributing a limited resource across competing demands. How to spend your time this week, how to distribute budget across projects, how to divide attention between priorities. The variables: total resource available, expected return per unit allocated, diminishing returns, and the cost of under-investing in any single demand. Allocation decisions recur constantly and most people re-derive the logic from scratch every time.
Timing decisions involve when to act. When to ship, when to hire, when to have a difficult conversation, when to sell an investment. The variables: what information will arrive if you wait, the cost of acting too early versus too late, and the decay rate of the opportunity.

Commitment decisions involve whether to enter a binding state. Signing a contract, entering a relationship, choosing a technology stack, making a public statement. Here Jeff Bezos's Type 1 / Type 2 framework is directly relevant. Type 1 decisions are one-way doors — consequential and irreversible. Type 2 decisions are two-way doors — reversible and low-cost to undo. Bezos argues that organizations consistently over-apply the heavy deliberation process appropriate for Type 1 decisions to Type 2 decisions, creating unnecessary slowness (Bezos, 2015). Recognizing which type you face lets you calibrate your rigor accordingly.
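The calibration Bezos describes reduces to a small decision rule. A sketch, with the two structural properties as boolean inputs (the labels and phrasing are illustrative, not Bezos's own wording):

```python
# Hypothetical sketch of Type 1 / Type 2 calibration: the deliberation
# process follows from two structural properties of the decision.

def decision_process(reversible: bool, high_consequence: bool) -> str:
    # Type 1: one-way door -- consequential AND irreversible.
    if not reversible and high_consequence:
        return "type_1: deliberate slowly, gather more information"
    # Type 2: two-way door -- cheap to undo, so decide fast and monitor.
    return "type_2: decide quickly, watch the outcome, reverse if wrong"

# A long lock-in contract is a one-way door; a tool trial is not.
```

The over-application error Bezos names is simply running the first branch on inputs that belong to the second.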
Continuation decisions involve whether to persist or stop. Should you keep investing in a project, continue a strategy, maintain a habit, stay in a role? These are among the most psychologically difficult because sunk cost bias and identity attachment corrupt the analysis. But as a type, they have a consistent structure: compare the forward-looking expected value of continuing versus the forward-looking expected value of the best alternative, ignoring what you have already invested.
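The structure of a continuation decision is easiest to see when sunk cost is forced out of the calculation by construction. A sketch, with invented dollar figures:

```python
# Hypothetical sketch of the continuation-decision structure: compare
# forward-looking expected values only. Sunk cost never enters -- note
# that money already spent appears nowhere in the function signature.

def should_continue(ev_continue: float, ev_best_alternative: float) -> bool:
    """True if the forward-looking value of continuing beats switching."""
    return ev_continue > ev_best_alternative

# A project with $200k already sunk but only $30k of expected future
# value loses to a $50k alternative -- the $200k is irrelevant.
decision = should_continue(ev_continue=30_000, ev_best_alternative=50_000)
```

The bias-resistance is structural: if the function cannot receive the sunk amount, the sunk amount cannot corrupt the comparison.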
Response decisions involve reacting to an event. Someone criticizes your work, a system fails, a competitor makes a move, an unexpected opportunity appears. These are Klein's primary domain — situations where the first cognitive task is classifying the event to determine which response category it falls into.
This is not an exhaustive taxonomy. Your own recurring decision types may include negotiation decisions, delegation decisions, escalation decisions, or types specific to your domain. The point is not to memorize a universal list. It is to build your own inventory of the types you actually encounter, so that each new instance triggers recognition rather than re-derivation.
The Cynefin framework: classifying by complexity, not content
Dave Snowden's Cynefin framework offers a complementary classification system that cuts across content categories. Instead of asking "what is this decision about?" Cynefin asks "what is the relationship between cause and effect in this situation?" The answer determines your entire decision process (Snowden & Boone, 2007).
In clear domains, the relationship between cause and effect is obvious. Best practices exist. The right process: sense the situation, categorize it, respond with the established practice. You do not deliberate. You classify and execute.
In complicated domains, cause and effect are discoverable through analysis. Expert knowledge is required. The right process: sense, analyze, respond. You need to think, but the answer is knowable.
In complex domains, cause and effect are only visible in retrospect. No amount of analysis can predict the outcome. The right process: probe, sense, respond. You run experiments and learn from what happens.
In chaotic domains, there is no discernible relationship between cause and effect. The right process: act, sense, respond. You stabilize first, then figure out what happened.
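The four domains above can be captured as a literal lookup table — which is the framework's point: once the classification is made, the process is fully determined. A minimal sketch (the dictionary layout is my own, not Snowden's notation):

```python
# Hypothetical sketch of Cynefin as a lookup from domain to process.
# Classification comes first; the process follows mechanically.

CYNEFIN = {
    "clear":       ["sense", "categorize", "respond"],
    "complicated": ["sense", "analyze", "respond"],
    "complex":     ["probe", "sense", "respond"],
    "chaotic":     ["act", "sense", "respond"],
}

def process_for(domain: str) -> list[str]:
    """Return the ordered process steps for a classified domain."""
    return CYNEFIN[domain]
```

Misclassification, in this framing, is calling `process_for` with the wrong key: the steps returned are internally coherent but wrong for the situation.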
Cynefin is powerful because it prevents a common error: applying the wrong process to the wrong type of problem. Analyzing a complex problem as if it were merely complicated leads to false confidence in predictions that cannot be made. Probing a clear problem as if it were complex wastes resources on experiments whose answers are already known. The type of problem dictates the type of process, and misclassification is as costly as no classification at all.
The AI parallel: classification as infrastructure
If this pattern — classify the input, then route to the appropriate handler — sounds familiar, it should. It is the foundational architecture of virtually every production AI system.
When a large language model receives an input, the system does not treat every query as novel. Modern AI architectures route inputs through classification layers that determine what kind of request has arrived and which specialized processing pipeline should handle it. A customer service AI classifies the incoming message — billing question, technical issue, cancellation request, general inquiry — and routes it to a handler optimized for that type. The classification happens before the response generation, because the type determines the process (Anthropic, 2024).
This pattern extends to multi-agent AI systems, where a routing agent analyzes each input and dispatches it to the specialized agent best equipped to handle that category. The routing itself takes four distinct forms: rule-based routing that matches explicit patterns, machine-learning routing that uses trained classification models, embedding-based routing that measures semantic similarity, and LLM-based routing that interprets intent through language understanding. Each is a different mechanism for the same operation: recognize the type, select the handler.
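The simplest of the four mechanisms, rule-based routing, fits in a few lines. This is an illustrative sketch, not any vendor's implementation; the handler names and keyword lists are invented for the example:

```python
# Hypothetical sketch of rule-based routing: match explicit patterns,
# dispatch to a typed handler, fall back when recognition fails.

ROUTES = {
    "billing":      ("invoice", "charge", "refund"),
    "technical":    ("error", "crash", "bug"),
    "cancellation": ("cancel", "unsubscribe"),
}

def route(message: str) -> str:
    """Classify a message by keyword match; return the handler name."""
    text = message.lower()
    for handler, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return handler
    return "general"  # recognition failed: route to the generalist

# Classification happens before any response is generated -- the type
# determines the process, exactly as in Klein's commanders.
```

The other three mechanisms (ML classifiers, embedding similarity, LLM intent interpretation) replace the keyword match with progressively richer recognition, but the operation is identical: recognize the type, select the handler.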
The parallel to human expertise is structural, not metaphorical. A chess master's 50,000 chunks are a trained classification model. Klein's fireground commanders are routing incoming situations to specialized response handlers. Your own accumulated experience is a library of recognized patterns, each carrying an associated protocol. The question is whether you are using that library deliberately or leaving the classification to chance.
When you fail to classify, you do what a naive AI system does: treat every input as a novel generation problem, applying maximum compute to a task that could be resolved through pattern matching. You burn cognitive resources that should be conserved for genuinely novel situations — the ones that actually require first-principles reasoning.
Building your classification reflex
Recognizing decision types is a skill, not a personality trait. It can be trained. The method is simple, though it requires consistency.
Step one: log your decisions. For one week, write down every non-trivial decision you make. What was it about? What variables mattered? What made it hard? You are not trying to make better decisions yet. You are collecting data on what types of decisions you make.
Step two: cluster by structure. At the end of the week, look for structural similarities. Ignore surface content. Two decisions that feel completely different — "should I hire this contractor?" and "should I use this software tool?" — may be the same type: selection-under-uncertainty with high switching costs. Group by the skeleton, not the skin.
Step three: name the types. Give each cluster a short label that captures its structural essence. "Resource allocation under competing demands." "Reversible experiment versus irreversible commitment." "Continue-or-stop with sunk costs." These are your personal decision types — the categories that actually recur in your life.
Step four: pre-load the pattern. Once you have named a type, the next time you encounter an instance, the name will surface faster. "Ah — this is a timing decision. I know what variables matter here." The classification shortens the path to the framework, which shortens the path to the decision, which conserves the cognitive resources that, as L-0441 established, are finite and expensive.
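The four steps above can be sketched as a data structure. The entries, type labels, and framework strings are illustrative placeholders — your own log and taxonomy will differ:

```python
# Hypothetical sketch of the four-step method: log decisions, cluster
# them under structural type labels, then retrieve the pre-loaded
# framework the next time an instance of the type appears.

from collections import defaultdict

# Step 1: the week's decision log (illustrative entries).
log = [
    {"decision": "hire this contractor?", "type": "selection_under_uncertainty"},
    {"decision": "adopt this tool?",      "type": "selection_under_uncertainty"},
    {"decision": "ship now or wait?",     "type": "timing"},
]

# Steps 2-3: cluster by structure and name the types.
clusters = defaultdict(list)
for entry in log:
    clusters[entry["type"]].append(entry["decision"])

# Step 4: the named type retrieves its framework directly.
frameworks = {
    "selection_under_uncertainty": "weighted criteria + switching costs",
    "timing": "cost of early vs. late + opportunity decay",
}

def framework_for(decision_type: str) -> str:
    """Recognized types get their framework; unrecognized ones get flagged."""
    return frameworks.get(decision_type, "novel: first-principles reasoning")
```

Note the fallback: anything that fails to match a known type is explicitly flagged for first-principles reasoning, which is where that effort actually belongs.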
This is not theoretical. It is the same process by which chess masters acquire their 50,000 chunks, by which Klein's fireground commanders build their situation libraries, and by which every classification system — human or artificial — converts raw experience into reusable structure. The only difference is that you are doing it deliberately rather than waiting for a decade of incidental learning to do it for you.
Novelty is rarer than you think
The implication of this lesson is uncomfortable for people who pride themselves on approaching each situation with fresh eyes. Fresh eyes have their place — but that place is narrower than most people believe. Genuinely novel decisions exist. They require first-principles reasoning, creative exploration, and tolerance for uncertainty. But they are rare. Five percent of your decisions, maybe ten.
The other ninety to ninety-five percent are variations of types you have already encountered. They do not benefit from fresh eyes. They benefit from fast, accurate classification and the application of pre-designed frameworks. Treating a recurring type as novel is not intellectual rigor. It is cognitive waste — the equivalent of re-deriving the multiplication table every time you need to calculate a tip.
Klein's experienced fireground commanders are not less thoughtful than novices. They are more efficient. They have learned to spend their cognitive resources where those resources add value: on the genuinely ambiguous situations where recognition fails, where the type is unclear, where first-principles reasoning is actually required. For everything else — for the 80% of decisions where the first recognized option is workable — they classify, verify, and execute.
Your goal is the same. Build a library of decision types from your own experience. Learn to see the skeleton beneath the skin. When a decision arrives, your first question should not be "what should I do?" It should be "what type of decision is this?" The type carries the framework. The framework carries the answer. And the cognitive resources you save on the recurring types become available for the decisions that actually need them.
Sources
- Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press.
- Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1), 55-81.
- de Groot, A. D. (1965). Thought and Choice in Chess. Mouton.
- Snowden, D. J., & Boone, M. E. (2007). A leader's framework for decision making. Harvard Business Review, 85(11), 68-76.
- Bezos, J. (2015). Letter to shareholders. Amazon.com, Inc.
- Gobet, F., & Simon, H. A. (1998). Expert chess memory: Revisiting the chunking hypothesis. Memory, 6(3), 225-255.
- Anthropic. (2024). Building effective agents. Anthropic Research.