After one cycle, review the corrector itself — did the correction actually prevent the target error, or does the corrector need correcting?
After deploying a self-correcting mechanism for one cycle period, add a meta-correction review asking whether the correction actually prevented the target error, adjusting the corrector itself if it failed.
Why This Is a Rule
A self-correcting mechanism — a checklist, an automated check, a behavioral agent designed to prevent a specific error — is itself a system that can fail. If you deploy a correction and never verify that it actually corrects, you have false security: you believe the error is prevented while the correction mechanism silently fails and the error continues.
This is the detector-monitoring rule (Monitor your error detection system itself — track what you catch vs. what slips through, to detect detection failures) applied to correction: monitor the corrector. After one cycle period (defined by the correction's intended frequency), ask the meta-question: "Did the target error occur despite the correction?" If it did, the correction mechanism itself needs diagnosis and repair.
The one-cycle timing is deliberate: too early (checking before the correction has had a chance to encounter the error condition) produces false confidence; too late (checking after several cycles of uncorrected errors) allows damage to accumulate. One complete cycle is the minimum meaningful test: the error condition should have been encountered at least once, and the correction should have had the opportunity to fire.
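The one-cycle timing can be made concrete as a small record per deployed correction. This is a minimal sketch with hypothetical names (`Correction`, `due_for_review` are illustrative, not from the source): a correction carries its target error and the typical recurrence period of that error's condition, and the meta-review is only meaningful once a full cycle has elapsed.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Correction:
    """A deployed self-correcting mechanism awaiting meta-review."""
    target_error: str       # the specific error this correction is meant to prevent
    deployed_on: date
    cycle: timedelta        # how often the error's condition typically recurs

    def due_for_review(self, today: date) -> bool:
        # Reviewing before one full cycle produces false confidence;
        # the error condition may simply not have been encountered yet.
        return today - self.deployed_on >= self.cycle

# Example: a weekly-recurring error condition, corrected on Jan 1.
c = Correction("stale-cache deploy", date(2024, 1, 1), timedelta(weeks=1))
```

Checking too early (`c.due_for_review(date(2024, 1, 5))`) correctly reports the review is premature; after a full week it becomes due.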
When This Fires
- After deploying any new correction mechanism (checklist, agent, automated check, process change)
- After one full cycle of the correction's intended operation
- When you assume a correction is working but haven't verified it empirically
- Complements first-cycle validation (Validate new feedback mechanisms in the first cycle: does the data reveal unknowns, is the effort sustainable, can you specify one adjustment?) — this rule applies the same check to correction mechanisms specifically
Common Failure Mode
Deploying corrections and assuming they work: "I added a checklist step for that error — problem solved." Did you verify? Did the error's target conditions occur during the cycle? Did the checklist step fire? Did it actually prevent the error? Without meta-correction review, "problem solved" is an assumption, not a verified fact.
The Protocol
(1) When deploying any self-correcting mechanism, note: what specific error is it designed to prevent, and how often does that error's condition typically occur?
(2) After one full cycle (the period in which the error condition should appear at least once), conduct the meta-review:
- Did the target conditions occur? If no → the cycle wasn't a valid test; extend observation.
- If yes → did the correction fire? Check whether the mechanism activated when it should have.
- Did the correction prevent the error? Check whether the error actually didn't occur (success) or occurred despite the correction (correction failure).
(3) If the correction prevented the error → validated. Continue operating.
(4) If the correction failed → diagnose the correction mechanism itself: was the detection faulty, the correction action wrong, the timing off? Fix the corrector before declaring the error "handled."
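The protocol's decision path can be sketched as a small function. This is an illustrative sketch, not a prescribed implementation — the names (`meta_review`, `Verdict`) are assumptions introduced here; the three booleans correspond to the three questions the review asks.

```python
from enum import Enum

class Verdict(Enum):
    INVALID_TEST = "condition never occurred — extend observation"
    VALIDATED = "correction fired and prevented the error"
    DETECTION_FAILURE = "correction never fired — fix detection"
    CORRECTION_FAILURE = "correction fired but error occurred — fix action or timing"

def meta_review(condition_occurred: bool,
                correction_fired: bool,
                error_occurred: bool) -> Verdict:
    """Apply the meta-correction review after one full cycle."""
    if not condition_occurred:
        return Verdict.INVALID_TEST        # step 2: the cycle wasn't a valid test
    if not correction_fired:
        return Verdict.DETECTION_FAILURE   # step 4: the detection was faulty
    if error_occurred:
        return Verdict.CORRECTION_FAILURE  # step 4: wrong action or wrong timing
    return Verdict.VALIDATED               # step 3: validated, continue operating
```

Note that "problem solved" maps to exactly one of the four outcomes only after the review runs; before that, the verdict is simply unknown.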