Govern your tool use with explicit policies: which tools for which tasks, under what conditions, with what fallbacks
Design tool-use policies that define which tools are allowed for which tasks, under what conditions, and with what fallback behavior when they fail. Treat this as governance of your extended cognitive architecture, not as a loose set of tool preferences.
Why This Is a Rule
Your tools are part of your cognitive architecture — they extend your thinking, memory, and execution capabilities. Like any architectural component, they need governance: explicit policies about what is used where, under what conditions, and what happens when components fail. Without governance, tool adoption is ad hoc: you reach for whatever is convenient in the moment, building invisible dependencies (Classify tool delegations as appropriate, convenient, or critical — critical delegations create capability gaps when tools fail) without fallback plans.
A tool-use policy has four dimensions (a minimal schema is sketched after this list):
- Which tools: what specific tools are authorized for your workflow? Not "whatever works" but a curated, evaluated set.
- For which tasks: each tool has a domain where it adds value and domains where it introduces risk. Map tools to tasks explicitly.
- Under what conditions: some tools are appropriate in some contexts but not others. AI assistance for brainstorming (low stakes) vs. AI for financial analysis (high stakes) requires different policies (Scale AI output verification to stakes: skim for brainstorming, spot-check for communications, verify every claim for publication).
- Fallback behavior: what do you do when the tool is unavailable? If you have no fallback, you have a single point of failure, a critical dependency (Classify tool delegations as appropriate, convenient, or critical — critical delegations create capability gaps when tools fail).
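One way to make the four dimensions concrete is a policy record per tool. Here is a minimal sketch in Python, assuming a simple dataclass; the type and field names (`ToolPolicy`, `task_scope`, `conditions`, `dependency`, `fallback`) are illustrative, not a prescribed format:

```python
from dataclasses import dataclass
from enum import Enum


class Dependency(Enum):
    """Classification from the delegation rule: appropriate, convenient, or critical."""
    APPROPRIATE = "appropriate"
    CONVENIENT = "convenient"
    CRITICAL = "critical"


@dataclass
class ToolPolicy:
    """One governed tool: all four policy dimensions, made explicit and referable."""
    tool: str                  # which tool
    task_scope: list[str]      # for which tasks it is authorized
    conditions: str            # under what stakes/context it is appropriate
    dependency: Dependency     # appropriate, convenient, or critical
    fallback: str | None      # what you do when the tool is unavailable


# Illustrative entry: an AI assistant authorized for low-stakes ideation only.
ai_assistant = ToolPolicy(
    tool="AI assistant",
    task_scope=["brainstorming", "first drafts"],
    conditions="low stakes only; verify every claim before anything ships",
    dependency=Dependency.CONVENIENT,
    fallback="outline by hand, then draft from the outline",
)
```

The point is not the format but the discipline: every field gets filled in before the tool enters the workflow, and a fallback field you cannot fill is exactly the signal this rule exists to surface.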
This reframe — from "I use these tools" to "I govern my extended cognitive architecture" — produces more deliberate choices and more resilient systems.
When This Fires
- When adopting any new cognitive tool (AI assistants, note-taking apps, automation platforms)
- During quarterly tool audits when evaluating your tool portfolio's coherence and resilience
- When a tool failure disrupts your workflow and you realize you had no fallback
- When deciding whether to deepen or limit reliance on a specific tool
Common Failure Mode
Ad hoc tool adoption: each new tool is adopted individually without considering how it fits the broader architecture. You end up with 12 partially overlapping tools, no clear policies about which to use when, and critical dependencies on tools you adopted casually.
The Protocol
(1) Inventory your current tool portfolio: every tool you use for cognitive work.
(2) For each tool, define:
   - Task scope: what specific tasks is this tool authorized for?
   - Conditions: under what stakes/context is this tool appropriate? (Scale AI output verification to stakes: skim for brainstorming, spot-check for communications, verify every claim for publication)
   - Dependency classification: appropriate, convenient, or critical? (Classify tool delegations as appropriate, convenient, or critical — critical delegations create capability gaps when tools fail)
   - Fallback: what do you do if this tool is unavailable?
(3) For critical dependencies with no fallback, either develop the fallback (Periodically perform critical delegated tasks without the tool — maintaining the biological skill is redundancy, not inefficiency) or invest in tool redundancy.
(4) Document the policies. They should be referable, not memorized.
(5) Review the portfolio quarterly: are policies being followed? Have new tools been adopted without governance? Have existing tools shifted from convenient to critical? (A sketch of such an audit follows this list.)
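Steps (3) and (5) reduce to a few lines over the same records. A sketch continuing the `ToolPolicy` and `Dependency` types assumed above; the flagging rules here are one reasonable choice, not the only one:

```python
def audit(portfolio: list[ToolPolicy]) -> list[str]:
    """Quarterly review: return findings for steps (3) and (5) of the protocol."""
    findings = []
    for policy in portfolio:
        # Step (3): a critical dependency with no fallback is a single point of failure.
        if policy.dependency is Dependency.CRITICAL and not policy.fallback:
            findings.append(
                f"{policy.tool}: critical with no fallback -- develop one or add redundancy"
            )
        # Step (5): an empty task scope suggests the tool was adopted without governance.
        if not policy.task_scope:
            findings.append(f"{policy.tool}: no authorized tasks -- define a scope or retire it")
    return findings


for finding in audit([ai_assistant]):
    print(finding)
```

Whether the portfolio lives in code, a spreadsheet, or a page of notes matters less than that the audit questions are asked on a schedule rather than after a failure.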