Periodically perform automated steps manually — maintain intervention skill and detect automation drift before it accumulates
Periodically perform automated steps manually as calibration exercises to maintain intervention skill and detect when automation has drifted or is producing incorrect outputs.
Why This Is a Rule
Lisanne Bainbridge identified the fundamental irony of automation in 1983: the more reliable the automation, the less practice the human operator gets, and the less capable they become of intervening when the automation fails. Applied to personal workflows, the person who automates their data pipeline stops understanding how the pipeline works, stops noticing when outputs look wrong, and loses the ability to run the process manually when the automation breaks.
Periodic manual execution serves two functions simultaneously. First, skill maintenance: performing the automated step by hand keeps the procedural knowledge alive. You remember what the inputs look like, what the processing involves, what correct output looks like, and how to diagnose problems. This knowledge atrophies rapidly without practice. Second, drift detection: manually executing a step and comparing your output to the automation's output reveals discrepancies that passive monitoring might miss. The automation might be producing subtly wrong output that passes all automated checks but fails the human "does this look right?" test.
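The drift-detection half of this can be made concrete with a simple output comparison. The sketch below is a minimal illustration, not part of the rule itself: the file paths and the sample line contents are hypothetical, and it uses Python's standard `difflib` to diff a manually produced output against the automation's output. An empty diff means the two runs agree line-for-line; any diff line is a candidate for either automation drift or your own skill drift.

```python
import difflib
from pathlib import Path

def drift_report(manual_path: str, automated_path: str) -> str:
    """Diff a manually produced output file against the automation's output file.

    Returns a unified diff; an empty string means the runs agree line-for-line.
    (Paths here are hypothetical placeholders for your own outputs.)
    """
    manual = Path(manual_path).read_text().splitlines()
    automated = Path(automated_path).read_text().splitlines()
    return "\n".join(difflib.unified_diff(
        manual, automated,
        fromfile="manual run", tofile="automated run", lineterm=""))

# Same comparison with in-memory sample data (contents are made up):
diff = "\n".join(difflib.unified_diff(
    ["total: 1200", "rows: 34"],   # what you produced by hand
    ["total: 1187", "rows: 34"],   # what the automation produced
    fromfile="manual run", tofile="automated run", lineterm=""))
print(diff or "outputs match")
```

Note that a textual diff only catches discrepancies the automation's own checks might also catch; the human "does this look right?" judgment still has to be applied to whatever the diff surfaces.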
This is the same principle behind airline pilots periodically hand-flying rather than always using autopilot, or surgeons maintaining skills on procedures that robots now assist with. The automation handles the routine; the human handles the exception — but only if the human has maintained the skill to do so.
When This Fires
- When you have automated steps that you haven't performed manually in months
- When automation fails and you realize you've forgotten how to do the step by hand
- When automated output "seems off" but you can't tell because you've lost calibration for what "right" looks like
- Complements Sovereignty check before automation: that rule asks "if the automation produces wrong output, will you notice?" and permits automating only when the answer is yes; this rule is the ongoing skill-maintenance practice that keeps the answer yes.
Common Failure Mode
Complete delegation: "The automation handles it, I don't need to know how it works anymore." This is fine until the automation breaks, produces subtly wrong output, or needs modification. At that point, you've lost both the ability to intervene and the knowledge to evaluate what's wrong. The failure compounds because you may not even recognize the automation has drifted.
The Protocol
1. For each automated step, set a manual calibration cadence: monthly for critical automation, quarterly for routine automation.
2. During calibration, perform the step entirely by hand, from inputs to outputs.
3. Compare your manual output to the automation's most recent output. Discrepancies indicate either automation drift (fix the automation) or your own skill drift (note what you'd forgotten).
4. During manual execution, note anything confusing, unclear, or surprising; these are knowledge gaps that would block you during an actual automation failure.
5. Update your automation documentation with anything you discovered during the manual run. If you needed information that wasn't in the docs, add it.
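Step 1 of the protocol (set a cadence per step and check it) is the part most easily forgotten, so it is worth automating the reminder itself. A minimal sketch, assuming a hypothetical registry of automated steps with a criticality flag and a last-manual-run date; the step names and dates are invented for illustration, and the cadences follow the protocol (roughly 30 days for critical automation, 90 for routine):

```python
from datetime import date, timedelta

# Hypothetical registry of automated steps. "critical" decides the cadence:
# monthly (30 days) for critical automation, quarterly (90 days) for routine.
AUTOMATIONS = {
    "backup-sync":     {"critical": True,  "last_manual_run": date(2024, 1, 5)},
    "report-pipeline": {"critical": False, "last_manual_run": date(2024, 2, 20)},
}

def calibrations_due(today: date) -> list[str]:
    """Return the names of steps whose manual calibration run is overdue."""
    due = []
    for name, info in AUTOMATIONS.items():
        cadence = timedelta(days=30 if info["critical"] else 90)
        if today - info["last_manual_run"] >= cadence:
            due.append(name)
    return due

# backup-sync is critical and last run by hand 56 days ago, so it is due;
# report-pipeline was run by hand 10 days ago and is within its 90-day window.
print(calibrations_due(date(2024, 3, 1)))  # → ['backup-sync']
```

After each calibration run, updating `last_manual_run` (and, per step 5, the step's documentation) closes the loop.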