Keep directional decisions human (what to produce, what argument to make), delegate mechanical execution to AI — augmentation, not replacement
When using AI to reduce production costs, keep directional decisions human (what to produce, what argument to make, what judgment to render) and delegate mechanical execution (format conversion, draft expansion, repurposing) to AI, preserving sovereignty through augmentation rather than replacement.
Why This Is a Rule
AI tools can dramatically reduce production costs — a draft that took 3 hours can be produced in 30 minutes with AI assistance. But the cost reduction creates a sovereignty risk: if the AI is making the directional decisions (what argument to make, what evidence to prioritize, what conclusion to draw), the output reflects the AI's judgment, not yours. You become a publisher of AI-generated opinions rather than an augmented producer of your own thinking.
The directional vs. mechanical distinction provides the boundary. Directional decisions are where your judgment, values, and expertise create unique value: choosing the topic, framing the argument, selecting which evidence matters, and rendering professional judgments. These must remain human because they are what makes your output yours. Mechanical execution is where AI saves time without compromising sovereignty: expanding outlines into drafts, converting formats (Rewrite for each format — don't copy-paste across mediums, because each requires different cognitive packaging for its audience), formatting, repurposing across platforms (Cascade derivative formats over days/weeks after the pillar ships — extend content life, prevent fatigue, create multiple touchpoints), and processing data into presentation-ready form. These tasks don't require your judgment; they require execution skills that AI handles efficiently.
This applies the AI-as-transformation principle (Insert AI as a transformation step between workflow stages — provide output + instruction, never delegate decisions) to the full production workflow: AI transforms your directional inputs into mechanical outputs, but it never makes the directional choices that determine what gets produced and what it argues.
When This Fires
- When integrating AI tools into your production workflow
- When AI-generated content feels "off" despite being technically competent — likely the AI made directional choices
- When deciding which production steps to automate and which to keep manual (Four-category automation triage: automate now / automate later / assist / keep manual — classify by judgment requirement and frequency)
- Complements the sovereignty check (Sovereignty check before automation — if the automation produces wrong output, will you notice? Automate only when the answer is yes) and the AI-as-transformation-step principle (Insert AI as a transformation step between workflow stages — provide output + instruction, never delegate decisions) by adding the production-specific boundary
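The four-category triage can be sketched as a small routing function. The mapping of (judgment requirement, frequency) to category below is an illustrative assumption, not prescribed by the note:

```python
def triage(needs_judgment: bool, high_frequency: bool) -> str:
    """Classify a production step into one of four automation categories.

    Assumed mapping (illustrative): judgment-heavy steps are never fully
    automated; mechanical steps are automated in order of frequency.
    """
    if needs_judgment:
        # Judgment required: AI may assist, but a human stays in the loop.
        return "assist" if high_frequency else "keep manual"
    # No judgment required: automate, frequent steps first.
    return "automate now" if high_frequency else "automate later"
```

For example, reformatting an article for each platform (mechanical, frequent) triages to "automate now", while rendering a professional judgment (directional, occasional) triages to "keep manual".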
Common Failure Mode
Directional delegation: "AI, write an article about productivity." The AI produces a competent article with generic arguments that don't reflect your specific perspective, experience, or judgment. You publish it because it's "good enough," but it could have been produced by anyone — your unique voice and thinking are absent.
The Protocol
1. For each production step, classify it as directional (requires your judgment) or mechanical (requires execution without judgment).
2. Keep all directional steps human: choose the topic yourself, decide the argument yourself, select the evidence yourself, form the conclusions yourself.
3. Delegate mechanical steps to AI: "Here's my outline and key arguments — expand this into a draft." "Here's my article — reformat it for LinkedIn." "Here's my data — produce a visualization."
4. Always review AI outputs before delivery (Sovereignty check before automation — if the automation produces wrong output, will you notice? Automate only when the answer is yes): the AI may have made implicit directional decisions during mechanical execution. Correct any divergence from your intended direction.
5. The test: "Could someone reading this output identify it as distinctly mine, reflecting my specific thinking?" If yes, sovereignty is preserved. If it could have been produced by any AI user, the directional decisions were delegated.
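The classify-and-delegate steps of the protocol can be sketched as a small routine. The step names and routing labels are illustrative assumptions; the note itself prescribes only the human/AI split and the mandatory review pass:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    directional: bool  # True if the step requires your judgment

def route(steps):
    """Route each production step per the protocol: directional steps
    stay human; mechanical steps go to AI, always followed by a human
    review pass (the sovereignty check)."""
    return {
        s.name: "human" if s.directional else "AI draft + human review"
        for s in steps
    }

# Hypothetical workflow; the classification of each step is illustrative.
workflow = [
    Step("choose topic", directional=True),
    Step("frame argument", directional=True),
    Step("select evidence", directional=True),
    Step("expand outline into draft", directional=False),
    Step("reformat for LinkedIn", directional=False),
]
```

Note that even the mechanical steps never route to "AI" alone: the review pass is where implicit directional drift gets caught and corrected.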