Monitor your error detection system itself — track what you catch vs. what slips through to detect detection failures
Build error detection infrastructure that monitors both your primary outputs and the detection system's own performance: track which errors the system catches versus which surface through other channels, so that failures of detection are themselves detectable.
Why This Is a Rule
An error detection system that you trust but never verify is a single point of failure. If the detection system itself has blind spots — types of errors it systematically misses — you'll have confident but false assurance that everything is fine. This is the "who watches the watchmen" problem applied to personal and organizational systems.
The solution is meta-detection: tracking the detection system's own performance. Every time an error is found through a channel other than your primary detection system (a colleague spots it, a customer reports it, you notice it by accident), that's a miss — an error your detection system should have caught but didn't. The ratio of caught-by-system to missed-by-system reveals the detection system's actual coverage.
In software reliability engineering, this is called "escaped defect analysis" — studying the bugs that escaped testing to improve testing. The same logic applies to any detection system: code review processes, financial audits, health monitoring, behavioral agent reviews. If 40% of errors are caught by your detection system and 60% are discovered by other means, your detection system covers 40% of your error surface — far less than the 90%+ you probably assumed.
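The coverage arithmetic from the example above is simple enough to state directly. This is an illustrative sketch; the counts (40 caught, 60 escaped) are the hypothetical figures from the paragraph, not real measurements:

```python
# Hypothetical counts from the example: 40 errors caught by the
# detection system, 60 discovered through other channels.
caught, escaped = 40, 60

# Detection coverage = fraction of all known errors the system caught.
coverage = caught / (caught + escaped)
print(f"detection coverage: {coverage:.0%}")  # → detection coverage: 40%
```

The denominator only includes errors you know about; if some errors escape both the system and every other channel, true coverage is lower still, which is the failure mode described below.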
When This Fires
- When building any error detection or quality assurance system
- When you discover an error through an unexpected channel — log it as a detection miss
- During detection system audits when evaluating coverage
- When you have high confidence in a detection system but haven't verified it empirically
Common Failure Mode
Assuming detection works because you haven't seen errors: "Our review process must be catching everything because nothing is slipping through." This mistakes absence of evidence for evidence of absence. If your detection system misses errors and no other channel catches them either, you'll never see the errors — they silently degrade quality while you maintain false confidence.
The Protocol
(1) For each detection system (code review, financial review, behavioral agent audit, health monitoring), track two categories:
- Caught: errors detected by the system as designed.
- Escaped: errors discovered through other channels (customer reports, colleague observations, random discovery, downstream failures).
(2) Calculate detection coverage: caught ÷ (caught + escaped).
(3) If coverage < 80%, the detection system has significant blind spots. Analyze the escaped errors: what type were they? What channel revealed them? Why didn't the primary system catch them?
(4) Improve the detection system to cover the escaped error types.
(5) Re-measure after improvement. Detection coverage should increase with each cycle of meta-detection.
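The protocol above can be sketched as a small tracker. This is a minimal illustration, not a prescribed implementation; the class and method names (`DetectionTracker`, `record_escape`, and so on) and the sample data are assumptions invented for this example:

```python
from collections import Counter

class DetectionTracker:
    """Tracks caught vs. escaped errors for one detection system
    (illustrative sketch; names here are not from the source rule)."""

    def __init__(self, name):
        self.name = name
        self.caught = 0
        self.escaped = []  # (error_type, channel) pairs kept for analysis

    def record_caught(self):
        # Step 1: an error detected by the system as designed.
        self.caught += 1

    def record_escape(self, error_type, channel):
        # Step 1: an error found through any other channel is a miss.
        self.escaped.append((error_type, channel))

    def coverage(self):
        # Step 2: caught / (caught + escaped); None if nothing recorded yet.
        total = self.caught + len(self.escaped)
        return self.caught / total if total else None

    def blind_spots(self):
        # Step 3: the most common escaped error types show where to improve.
        return Counter(t for t, _ in self.escaped).most_common()

tracker = DetectionTracker("code review")
for _ in range(4):
    tracker.record_caught()
tracker.record_escape("off-by-one", "customer report")
tracker.record_escape("off-by-one", "colleague")
tracker.record_escape("race condition", "downstream failure")

print(f"coverage: {tracker.coverage():.0%}")  # 4 / (4 + 3), below the 80% bar
if tracker.coverage() < 0.80:
    print("blind spots:", tracker.blind_spots())
```

Steps 4 and 5 are the human part of the loop: fix the detection system to catch the types `blind_spots()` surfaces, then keep recording and re-check `coverage()` on the next cycle.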