Practice core cognitive tasks weekly without AI — prevent skill atrophy in capabilities you've delegated, like testing a backup generator
Maintain a weekly practice of core cognitive tasks without AI assistance to prevent skill atrophy in capabilities the AI handles routinely; treat this as backup-generator testing for your cognitive infrastructure.
Why This Is a Rule
This is the manual calibration principle (periodically perform automated steps manually to maintain intervention skill and detect automation drift before it accumulates) applied specifically to AI-delegated cognitive tasks. When AI routinely handles writing, analysis, summarization, or research, the human skills for those tasks atrophy through disuse. Bainbridge's irony of automation applies directly: the more you delegate to AI, the less capable you become of doing the work yourself, and the less capable you are of evaluating whether the AI is doing it well.
Weekly practice without AI maintains two critical capabilities: execution skill (you can still write a coherent analysis, summarize a document, or research a topic without AI assistance) and evaluation calibration (you know what good output looks like from the inside, so you can judge AI output quality rather than accepting whatever it produces). Both capabilities decay without regular use; complex cognitive skills fade through disuse, which is why the practice is scheduled weekly rather than left to chance.
The "backup generator" metaphor is precise: you don't run the backup generator because the main power is failing — you run it weekly to ensure it works when the main power does fail. AI won't always be available (service outages, privacy constraints, cost limits), and your cognitive capabilities need to be functional when it isn't.
When This Fires
- Weekly, as a scheduled cognitive maintenance practice
- When you notice you can't perform tasks you used to handle easily before AI delegation
- When AI-generated output feels "good enough" but you can't articulate what makes it good or bad
- Complements the manual calibration rule (periodically perform automated steps manually) and the human directional sovereignty rule (keep directional decisions human; delegate mechanical execution to AI) with an AI-specific practice
Common Failure Mode
Full cognitive delegation: letting AI handle all writing, summarization, and analysis. After 6 months, you can't write a coherent paragraph without AI assistance, can't tell whether an AI summary captured the key points, and can't evaluate AI analysis quality because you've forgotten what the analysis process feels like from the inside.
The Protocol
1. Identify the cognitive tasks you most frequently delegate to AI: writing, summarization, analysis, research, brainstorming.
2. Each week, select one of these tasks and complete it entirely without AI assistance. Rotate through the list so each task gets practiced roughly monthly.
3. The practice doesn't need to be full-scale: write one paragraph, summarize one article, analyze one data point. The goal is maintaining the neural pathway, not producing production-quality output.
4. Compare your AI-free output to what the AI would produce: where did you struggle? What did you do differently? This comparison maintains evaluation calibration.
5. If any cognitive task feels dramatically harder than it did 6 months ago, increase its practice frequency from weekly to daily until competence is restored.
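The rotation in step 2 and the atrophy check in step 5 can be sketched as a small script. This is a minimal illustration, not part of the rule itself: the task list, the 1-5 difficulty ratings, and the function names are assumptions made for the example.

```python
from datetime import date

# Illustrative task list (assumption): the tasks you most often delegate to AI.
TASKS = ["writing", "summarization", "analysis", "research", "brainstorming"]


def task_for_week(d: date, tasks: list[str] = TASKS) -> str:
    """Pick this week's AI-free practice task by rotating on the ISO week
    number, so each of five tasks recurs roughly monthly (protocol step 2)."""
    week = d.isocalendar()[1]
    return tasks[week % len(tasks)]


def needs_daily_practice(log: dict[str, list[int]], threshold: int = 2) -> list[str]:
    """Flag tasks whose self-rated difficulty (1 = easy, 5 = hard) rose by at
    least `threshold` between the first and most recent logged sessions,
    signaling atrophy that warrants daily practice (protocol step 5)."""
    flagged = []
    for task, ratings in log.items():
        if len(ratings) >= 2 and ratings[-1] - ratings[0] >= threshold:
            flagged.append(task)
    return flagged
```

A weekly reminder could call `task_for_week(date.today())`, and each session appends a difficulty rating to a per-task log that `needs_daily_practice` scans. The point of the deterministic rotation is that it removes the temptation to keep picking the task you're still good at.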