Question
What is calibration error?
Quick Answer
Calibration error is the gap between how confident you are and how often you are actually right. Your brain does not fail randomly. It fails in a specific, measurable, predictable direction: too much confidence. Across decades of research, in every population tested, the dominant calibration error is overconfidence, believing you know more than you do and that your estimates are more precise than they are.
Calibration error is a concept in personal epistemology. You are well calibrated when your stated confidence matches your actual accuracy: claims you make with 90% confidence should turn out to be true about 90% of the time. Calibration error is the gap between the two, and across decades of research, in every population tested, the dominant direction of that gap is overconfidence: believing you know more than you do, that your estimates are more precise than they are, and that your performance exceeds what it actually achieves.
Example: A product manager asks her team of eight engineers to estimate how long a migration will take. She asks each person to give a range they are 90% confident will contain the actual duration — meaning they believe there is only a 10% chance the real answer falls outside their range. Every engineer gives a range. The migration takes fourteen weeks. Six of the eight ranges do not include fourteen weeks. Their 90% confidence intervals captured the true value only 25% of the time. This is not an unusual result. In software estimation research, when professionals provide 90% confidence intervals, the true value falls inside those intervals only 60-70% of the time (Jorgensen, 2004). The engineers were not lying or lazy. They were exhibiting the single most robust finding in the psychology of judgment: humans are systematically, predictably, measurably overconfident.
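The arithmetic in the example can be checked directly: coverage is just the fraction of stated intervals that contain the true value. A minimal sketch, where the eight ranges are hypothetical numbers invented for illustration (the source gives only the outcome, six of eight misses):

```python
def coverage(intervals, truth):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(1 for low, high in intervals if low <= truth <= high)
    return hits / len(intervals)

# Eight hypothetical "90% confident" ranges for the migration, in weeks.
ranges = [(4, 8), (5, 9), (3, 10), (6, 12), (8, 16), (5, 11), (7, 15), (4, 9)]
actual = 14  # the migration took fourteen weeks

print(coverage(ranges, actual))  # 2 of 8 ranges contain 14 -> 0.25
```

Well-calibrated 90% intervals would show coverage near 0.90 over many such estimates; a result like 0.25, or the 60-70% found in estimation research, is the signature of overconfidence.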
This concept is part of Phase 8 (Perceptual Calibration) in the How to Think curriculum, the phase that builds the epistemic infrastructure for matching your confidence to your accuracy.
Learn more in these lessons