Sovereignty check before automation — if the automation produces wrong output, will you notice? Automate only when the answer is yes
Before automating any step, apply the sovereignty check: ask "If this automation produces wrong output, will I notice?" Automate only when the answer is yes, whether through review, a downstream dependency, or a verification checkpoint.
Why This Is a Rule
Automation without oversight is abdication. The distinction is whether you maintain the ability to detect and correct errors in the automated output. Automation that you review, that feeds into a step where errors become visible, or that has verification checkpoints is genuine delegation — you've freed up execution time while retaining quality control. Automation that runs silently with no error-detection mechanism is a black box that could be producing garbage indefinitely without your knowledge.
The sovereignty check is a single question: "If this automation produces wrong output, will I notice?" Three mechanisms can produce a "yes": review (you inspect the output before it matters), a downstream dependency (a subsequent step fails or produces visible anomalies if the input was wrong), or a verification checkpoint (the automation includes self-checks that alert on anomalies). If none of these three exists, wrong output propagates undetected — and the longer it propagates, the more damage accumulates and the harder remediation becomes.
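The verification-checkpoint mechanism can be sketched as a small wrapper around any automated step. This is a minimal illustration, not a prescribed implementation; `with_checkpoint`, `check`, and `alert` are hypothetical names introduced here.

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def with_checkpoint(step: Callable[[], T],
                    check: Callable[[T], bool],
                    alert: Callable[[str], None]) -> T:
    """Run an automated step, then verify its output before it propagates.

    Hypothetical sketch: `step` produces the output, `check` is the
    self-check, and `alert` is whatever surfaces the anomaly to a human.
    """
    result = step()
    if not check(result):
        # Wrong output is flagged instead of flowing silently downstream.
        alert(f"verification checkpoint failed: {result!r}")
    return result
```

The point of the wrapper is that the answer to "will I notice?" becomes yes by construction: every run either passes the self-check or raises a visible alert.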
This is particularly critical for AI-assisted automation, where outputs can be confidently wrong in ways that don't trigger obvious error signals. An automated email filter that miscategorizes messages produces no error — the messages just silently go to the wrong folder. An automated data pipeline that corrupts values produces no error — the downstream reports just silently become wrong. The sovereignty check ensures you've thought about how errors would surface before they start accumulating.
When This Fires
- Before implementing any automation, especially for steps classified as "automate now" (Four-category automation triage: automate now / automate later / assist / keep manual — classify by judgment requirement and frequency)
- When existing automation has been running for a while and you are no longer sure it is still producing correct output
- When adding AI-generated content to a workflow without human review gates
- When trust in an automated step has become assumption rather than verified fact
Common Failure Mode
"Set it and forget it" automation: building the automation, verifying it works once, and never checking again. The automation works correctly for weeks, then a dependency changes (an API updates, a file format shifts, a naming convention evolves) and the output silently degrades. By the time anyone notices, months of corrupted output exist.
The Protocol
1. Before automating any step, ask: "If this automation produces wrong output, how would I know?"
2. Identify the detection mechanism: Do I review the output? Does a downstream step fail visibly? Does the automation itself include verification?
3. If no detection mechanism exists → do not automate until you've built one. This might be a periodic manual spot-check, an automated validation step, or a downstream dependency that breaks visibly on bad input.
4. For each automated step, set a review cadence: daily for high-frequency automation, weekly for medium, monthly for low. The review confirms the automation is still producing correct output, not just running without errors.
5. Apply extra scrutiny to "invisible" automation: anything that runs in the background without producing output you regularly inspect. These are the highest-risk automation targets because errors accumulate silently.
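The review-cadence step of the protocol can be sketched as a tiny scheduling helper. The cadence values mirror the daily/weekly/monthly guidance above; the function and dictionary names are hypothetical.

```python
from datetime import date, timedelta

# Cadences from the protocol: daily for high-frequency automation,
# weekly for medium, monthly for low.
CADENCE_DAYS = {"high": 1, "medium": 7, "low": 30}

def next_review(last_review: date, frequency: str) -> date:
    """Date by which the automation's output should next be verified."""
    return last_review + timedelta(days=CADENCE_DAYS[frequency])

def review_due(last_review: date, frequency: str, today: date) -> bool:
    """True when a manual check of actual output is overdue."""
    return today >= next_review(last_review, frequency)
```

A registry of automated steps checked against `review_due` each morning keeps trust a verified fact rather than an assumption.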