Not all errors are the same error
The previous lesson established that you cannot correct what you cannot detect. But detection alone is not enough. Once you know something went wrong, you face a second question that most people skip entirely: what kind of wrong?
This matters because the correction for one type of error is useless — or actively harmful — when applied to another. Telling a surgeon to "be more careful" after a diagnostic error is like telling a pilot to fly slower after a navigation mistake. The intervention does not match the failure. And when interventions do not match failures, the errors recur, confidence erodes, and people conclude that improvement is impossible when the real problem is that they never identified what needed improving in the first place.
The distinction between error types is not academic. It is the difference between fixing a problem and performing a ritual that feels like fixing a problem.
Reason's taxonomy: slips, lapses, and mistakes
James Reason's Human Error (1990) is the foundational text on error classification, and its central contribution is a taxonomy that sorts errors by their cognitive origin rather than their surface appearance. Reason identified three primary categories: slips, lapses, and mistakes.
Slips are errors of execution. You intended to do the right thing and had the right plan, but your action diverged from your intention. You meant to type "save" and typed "send." You reached for the brake and hit the accelerator. The knowledge was correct. The plan was correct. The execution misfired. Slips are failures of attention — your automatic motor system did something other than what your conscious intention specified.
Lapses are errors of memory. You formed the right intention, but the intention was lost before execution. You walked into the kitchen and forgot why. You meant to attach the file before sending the email. You planned to check the backup before deploying. The plan existed. It simply fell out of working memory before it could be carried out.
Mistakes are errors of judgment or knowledge. Unlike slips and lapses, where the plan was right but execution failed, mistakes involve a plan that was wrong from the start. You executed flawlessly — and the execution produced the wrong outcome because you were operating from incorrect knowledge, flawed reasoning, or a misunderstanding of the situation. The surgery was performed perfectly, on the wrong diagnosis. The code compiled and ran without errors, but the algorithm solved the wrong problem.
Reason's insight was that these three types of error arise from fundamentally different cognitive mechanisms and therefore require fundamentally different countermeasures. Slips require environmental design — forcing functions, interlocks, confirmation dialogs. Lapses require external memory — checklists, reminders, written procedures. Mistakes require better models — updated knowledge, improved reasoning processes, or external calibration from others who see the situation differently (Reason, 1990).
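A forcing function, the canonical countermeasure for slips, can be sketched in a few lines. The example below is hypothetical (the function name and messages are ours), but it follows the same pattern many real tools use, such as requiring you to retype a name before a destructive action:

```python
def delete_dataset(name: str, confirmation: str) -> str:
    """Delete only when the caller retypes the exact dataset name.

    This is a forcing function: a slip (clicking delete on the wrong
    row) cannot complete, because the automatic action no longer
    matches the deliberate input the interlock requires.
    """
    if confirmation != name:
        return f"refused: retype {name!r} to confirm deletion"
    return f"deleted {name}"
```

Note that the interlock does nothing for mistakes: if you genuinely believe the wrong dataset should be deleted, you will retype its name correctly and proceed. It only blocks errors of execution, which is exactly Reason's point about matching countermeasure to error type.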
Rasmussen's three levels: skill, rule, and knowledge
Jens Rasmussen, a Danish engineer who spent decades studying errors in industrial control rooms, proposed a complementary framework in his 1983 paper "Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models." Where Reason classified errors by type, Rasmussen classified the performance level at which errors occur.
Skill-based performance is automatic. You type on a keyboard, shift gears in a car, or walk down stairs without conscious deliberation. Errors at this level are Reason's slips and lapses — the automatic system misfires. Skill-based errors have the highest frequency but the lowest severity, because the task is routine and the correction is usually immediate. You notice the typo. You feel the wrong gear. The feedback loop is tight.
Rule-based performance relies on stored procedures. You follow a recipe, execute a deployment checklist, or apply a diagnostic protocol. Errors at this level occur when you apply the wrong rule to the situation — you follow the procedure for Problem A when you are actually facing Problem B. The execution of the rule is flawless. The selection of the rule is wrong. These errors are more dangerous than skill-based errors because the performer is confident they are doing the right thing. The checklist was followed perfectly; it was the wrong checklist.
Knowledge-based performance is required when no stored rule applies. You face a novel situation and must reason from first principles. Errors at this level are the most severe and the hardest to detect. You are constructing a mental model of an unfamiliar problem, and if that model is wrong, every action you derive from it will be wrong — even as each action feels logical and well-reasoned. Knowledge-based errors are where catastrophic failures live, because the performer is doing their best thinking and their best thinking is built on a flawed foundation (Rasmussen, 1983).
The critical pattern across both Reason and Rasmussen: as you move from execution to judgment to knowledge, errors become less frequent, harder to detect, and more consequential. You make a hundred typos for every strategic miscalculation — but the typos cost you seconds and the miscalculation costs you years.
The three-category model for personal epistemology
Synthesizing Reason and Rasmussen into a framework you can use daily, every error you encounter falls into one of three categories:
Execution errors. You knew what to do. You had the right plan. Something went wrong in the doing. You forgot a step, made a mechanical mistake, or lost focus at the critical moment. The correction is procedural: build checklists, automate the step, redesign the environment to make the error physically difficult. Execution errors are the easiest to fix because the knowledge is already present — you just need to ensure the knowledge reaches the action reliably.
Knowledge errors. You did not have the information you needed. You made a decision based on an incomplete or inaccurate model of the situation. The correction is epistemic: seek new information, consult domain experts, read the documentation you skipped, or update the mental model that led you astray. Knowledge errors cannot be fixed by trying harder or being more careful — they require new inputs that your system did not previously contain.
Judgment errors. You had the right information but assessed it incorrectly. You saw the warning signs and dismissed them. You weighed two factors and got the weighting backwards. You predicted an outcome and the prediction was systematically wrong. The correction is calibrational: introduce external perspectives, track your predictions against outcomes over time, build pre-mortem rituals that force you to consider failure scenarios you would otherwise dismiss. Judgment errors are the hardest to fix because they feel indistinguishable from correct reasoning in the moment.
Philip Tetlock's research on forecasting accuracy, published in Superforecasting (2015), demonstrated this distinction empirically. Tetlock found that the best forecasters were not those with the most information (knowledge) or the most discipline (execution), but those who systematically calibrated their judgments — tracking their predictions, measuring their accuracy, and adjusting their reasoning processes based on the results. Judgment errors require a fundamentally different correction mechanism than knowledge errors or execution errors.
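The calibration loop Tetlock describes can be made concrete with the Brier score, the accuracy measure used in his forecasting tournaments. This is a minimal sketch (the function and variable names are ours):

```python
def brier_score(forecasts):
    """Mean squared gap between stated probability and outcome (0 or 1).

    0.0 is perfect; 0.25 is what always saying "50%" earns.
    Tracking this over time is the calibration mechanism that targets
    judgment errors: it forces stated confidence to meet reality.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Confident-and-wrong scores far worse than an honest "I don't know":
overconfident = [(0.9, 0), (0.9, 0), (0.9, 1)]  # three 90% calls, one correct
hedged = [(0.5, 0), (0.5, 0), (0.5, 1)]         # three 50% calls
```

Here `brier_score(overconfident)` is roughly 0.54 against 0.25 for the hedged forecaster: the record, not the feeling of certainty, reveals the judgment error.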
The software engineering parallel
Software engineering has independently discovered the same taxonomy, expressed in different language.
Syntax errors are execution errors. The programmer knows what they want the code to do. The language rules are understood. But a semicolon is missing, a variable is misspelled, or a parenthesis is unbalanced. The compiler catches these immediately. They are the cheapest errors in software — detected automatically, fixed in seconds, zero ambiguity about what went wrong. Syntax errors map directly to Reason's slips: the intention was correct, the mechanical expression was flawed.
Logic errors are judgment errors. The code compiles. It runs without crashing. It produces the wrong output because the algorithm encodes a flawed assessment of the problem. The sorting function runs in the wrong direction. The boundary condition is off by one. The business rule implements a misunderstanding of the requirement. Logic errors are expensive because the system gives no signal that anything is wrong — the code runs confidently and incorrectly, just as a person executing a flawed judgment feels confident and rational while producing bad outcomes.
Runtime errors sit between execution and knowledge errors. They emerge from conditions the programmer did not anticipate — null references, division by zero, network timeouts. These are failures of the programmer's mental model: they did not know (or did not consider) that the input could take a particular form. The fix is not to "be more careful" when coding; it is to expand the model of possible states the system can encounter.
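The contrast between a logic error and a runtime error fits in a few lines. The example below is illustrative, not drawn from any particular codebase:

```python
def percentile_rank_buggy(scores, value):
    # Logic error: the comparison runs in the wrong direction.
    # The code executes without complaint and returns a confident,
    # wrong answer -- the system gives no signal that anything failed.
    return 100 * sum(1 for s in scores if s >= value) / len(scores)

def percentile_rank(scores, value):
    if not scores:
        # Runtime error anticipated: an empty list was a state the
        # original mental model did not contain (division by zero).
        # The fix expands the model, not the carefulness of the typing.
        raise ValueError("scores must be non-empty")
    # Judgment corrected: a percentile rank counts scores BELOW the value.
    return 100 * sum(1 for s in scores if s < value) / len(scores)
```

The buggy version reports that the top score sits at the 25th percentile. Nothing crashes; the error surfaces only when someone compares the output against an expectation.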
The fact that software engineering arrived at the same structural taxonomy through completely different pressures — compiler design, debugging practice, production incident analysis — is evidence that this taxonomy reflects something real about the nature of errors, not just a convenient classification scheme.
The AI parallel: bias, variance, and irreducible error
Machine learning formalizes error decomposition with mathematical precision. A model's expected prediction error decomposes into three components: bias (squared), variance, and irreducible error.
Bias is the equivalent of a knowledge error. A high-bias model is too simple to capture the true pattern in the data. It underfits — it has the wrong structural assumptions about reality. No amount of additional training data will fix a biased model, just as no amount of effort will fix a knowledge error. The model needs a different architecture. The person needs different information.
Variance is the equivalent of a judgment error. A high-variance model is overly sensitive to the specific training data it has seen. It overfits — it reads signal in noise, finding patterns that do not generalize. The model had access to the right information but drew the wrong conclusions, weighting idiosyncratic details over genuine structure. The fix is regularization: constraining the model's flexibility, forcing it to ignore noise. For humans, the equivalent is calibration — constraining your own tendency to over-interpret limited evidence.
Irreducible error is the noise floor — the randomness inherent in any real system that no model, no matter how perfect, can predict. This maps to the category of errors that are genuinely not your fault: the market moved randomly, the flight was canceled by weather, the patient had an atypical presentation that no diagnostic framework would have caught. Distinguishing irreducible error from bias and variance is essential because treating random noise as a fixable error leads to overfitting in machines and superstitious behavior in humans (Hastie, Tibshirani, & Friedman, 2009).
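The decomposition can be demonstrated with a toy simulation. A constant model stands in for high bias, a one-nearest-neighbour model for high variance, and injected noise for the irreducible component; all names and parameters here are our own choices:

```python
import random

def make_data(n, seed):
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(n)]
    # The true pattern is x**2; the gauss term is the irreducible error
    # that no model, however perfect, can predict.
    ys = [x ** 2 + rng.gauss(0, 0.05) for x in xs]
    return xs, ys

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = make_data(50, seed=0)
test_x, test_y = make_data(50, seed=1)

# High bias: a constant cannot represent the curve at all (underfit).
mean_y = sum(train_y) / len(train_y)
def constant(x):
    return mean_y

# High variance: 1-nearest-neighbour memorises the training noise (overfit).
def nearest(x):
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]
```

Each failure mode has a distinct signature: the biased model is bad on training and test data alike, while the high-variance model is perfect on the data it memorised and degrades on data it has not seen. That gap between training and test error is the machine analogue of reading signal into noise.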
The machine learning decomposition adds one crucial insight to the human error taxonomy: some errors are not correctable. Part of distinguishing error types is recognizing which errors demand correction and which demand acceptance.
The diagnostic protocol
When something goes wrong, run this sequence before attempting any correction:
Step 1: Name the outcome. What specifically happened that should not have happened, or did not happen that should have? Be precise. "The project failed" is not a diagnosis. "The project delivered two weeks late because the API integration took three times longer than estimated" is.
Step 2: Identify the decision point. Where in the sequence of events did the error originate? Not where it became visible — where it was generated. The deployment failure started at the planning meeting where the timeline was set, not at the moment the deadline was missed.
Step 3: Classify the error. At that decision point, what was the nature of the failure?
- Did you know what to do and fail in the doing? Execution error.
- Did you lack information that would have changed your action? Knowledge error.
- Did you have the information but assess it incorrectly? Judgment error.
Step 4: Select the type-matched correction. Execution errors get procedural fixes (checklists, automation, environmental redesign). Knowledge errors get epistemic fixes (new information sources, expert consultation, updated models). Judgment errors get calibrational fixes (prediction tracking, pre-mortems, external review, decision journals).
Step 5: Verify the classification. Ask: if I apply this correction, would it have prevented this specific error? If the answer is no, you have likely misclassified. A checklist does not fix a knowledge gap. New information does not fix a reasoning flaw. Go back to Step 3.
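The classification step reduces to two questions, which can be sketched as a small decision function (the names and correction strings are ours, condensed from Steps 3 and 4):

```python
def classify_error(knew_what_to_do: bool, had_the_information: bool):
    """Map the two Step 3 questions to a category and correction family."""
    if knew_what_to_do:
        # Right plan, failed doing: execution error.
        return ("execution", "procedural: checklists, automation, environment design")
    if not had_the_information:
        # The plan was built on missing inputs: knowledge error.
        return ("knowledge", "epistemic: new sources, expert consultation, updated models")
    # Right inputs, wrong assessment: judgment error.
    return ("judgment", "calibrational: prediction tracking, pre-mortems, external review")
```

The value of writing it down this way is that it makes the verification in Step 5 mechanical: if the correction you are about to apply does not match the family the function returns, you have misclassified.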
This protocol takes two minutes. It replaces the default response — "I need to try harder" — with a targeted intervention that addresses the actual failure mechanism.
Why most error correction fails
Most people have a single error response: increased effort. Something went wrong, so they resolve to try harder, pay more attention, care more, work longer hours. This is treating every error as an execution error, and it has predictable consequences.
Knowledge errors persist because effort does not create information. You can try as hard as you want to navigate a city without a map, and trying harder will not produce the map. You can pay maximum attention to a set of financial projections, and attention will not reveal the market variable you did not know existed. Knowledge errors demand new inputs, not more energy applied to existing inputs.
Judgment errors persist because effort does not improve calibration. You can work sixty hours a week on a strategy built on a flawed assessment of the competitive landscape, and the additional hours will not fix the assessment. You can deliberate intensely about a decision where your core assumption is wrong, and deliberation will deepen your commitment to the wrong answer. Daniel Kahneman's research on cognitive bias demonstrates that effort and attention often increase confidence in judgments without increasing their accuracy — you become more certain, not more correct (Kahneman, 2011).
The failure to distinguish error types is itself a systematic error — a meta-error that corrupts every correction attempt downstream. Until you can reliably classify what went wrong, your fixes will be random interventions that occasionally work by coincidence and usually address the wrong problem.
From classification to correction speed
Distinguishing error types is not the end of the error correction process. It is the beginning. Once you know what kind of error you are facing, the next question is how quickly you can surface it.
Execution errors are cheap when caught early and expensive when caught late — the typo that becomes a production outage, the missed step that becomes a cascading failure. Knowledge errors compound over time — every decision made from a flawed model makes the eventual correction more costly. Judgment errors are invisible until the predicted outcome fails to materialize, which means the delay between error and detection can be months or years.
This is why the next lesson — fail fast, fail cheap — is the natural complement to error classification. Once you know the type, you can design systems that surface each type at the moment when correction is cheapest. Execution errors need immediate mechanical feedback. Knowledge errors need early reality checks before you commit resources. Judgment errors need structured prediction tracking that forces comparison between expectation and outcome.
Classification tells you what to fix. Speed tells you when to fix it. Together, they form the foundation of every error correction system that actually works.
Sources:
- Reason, J. (1990). Human Error. Cambridge University Press.
- Rasmussen, J. (1983). "Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models." IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3), 257-266.
- Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown Publishers.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning (2nd ed.). Springer.
- Meadows, D. H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.
- Carver, C. S., & Scheier, M. F. (1998). On the Self-Regulation of Behavior. Cambridge University Press.