Start weekly reviews with "What did my tools help me produce?" before reviewing configuration — anchor evaluation in outcomes, not activity
Begin weekly reviews by stating what your tools helped you produce before reviewing tool configuration or optimization, to anchor tool evaluation in outcomes rather than activity.
Why This Is a Rule
Tool reviews that begin with configuration ("What should I change in my setup?") produce configuration changes regardless of whether the current setup is producing good outcomes. The review becomes about tool tinkering rather than output assessment. Starting with outcomes ("What did I produce this week with my tools?") anchors the entire review in results: if the tools produced good output, major configuration changes are unnecessary. If output was poor, the configuration review has a specific problem to address.
This is the anchoring effect used constructively: the first question in any review sets the frame for everything that follows. "What did my tools help me produce?" frames subsequent discussion as "How can tools better serve production?" — an outcome-oriented frame. "What should I optimize in my tool setup?" frames subsequent discussion as "What configuration changes would be interesting?" — an activity-oriented frame that can spiral into endless tinkering.
The sequencing also guards against the economic inversion described in "When pipeline maintenance costs more than the pipeline saves, the economics have inverted — simplify, don't optimize": if your tools produced strong output this week, spending review time on configuration optimization is unnecessary overhead. The outcome assessment makes the configuration review conditional: dive into tool optimization only when outcomes indicate a problem.
When This Fires
- At the start of weekly reviews that include any tool or system assessment
- When weekly reviews consistently produce configuration changes but no output improvement
- When tool tinkering has become a substitute for actual production
- Complements "Open weekly planning by reviewing plan vs. actuals — identify the single biggest gap without judgment, then make one structural fix" (the weekly plan-vs-actual review) by adding a tool-specific assessment sequence
Common Failure Mode
Configuration-first reviews: spending 30 minutes adjusting Obsidian plugins, reorganizing Notion databases, and testing new Todoist features before ever asking "Did I ship anything this week?" The configuration changes feel productive but have no connection to output quality.
The Protocol
1. Open your weekly review with: "What outputs did my tool stack help me produce this week?" List specific deliverables, not activities.
2. Rate the output quality: was it good? On time? Sufficient?
3. Only if output quality was below expectations, investigate tools as a potential cause: "Which tool-related friction contributed to the output gap?"
4. If output quality was adequate or better, skip tool configuration entirely this week. The tools are serving their purpose.
5. When tool changes are warranted, make the smallest change that addresses the specific identified gap. No speculative improvements; no "while I'm in here I'll also change...".
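The protocol's conditional structure can be sketched in code. This is a minimal illustration, not part of the rule itself; the `WeeklyReview` structure and all field names are hypothetical, chosen only to show that the configuration review is gated on output quality and that each fix is tied to one identified friction.

```python
from dataclasses import dataclass, field

@dataclass
class WeeklyReview:
    """Hypothetical record for an outcomes-first tool review."""
    outputs: list[str]          # step 1: specific deliverables, not activities
    quality_ok: bool            # step 2: was output good, on time, sufficient?
    frictions: list[str] = field(default_factory=list)  # step 3: filled only on a bad week

def tool_review_actions(review: WeeklyReview) -> list[str]:
    # Steps 1-2 and 4: adequate output means the tools are serving
    # their purpose, so skip configuration work entirely this week.
    if review.quality_ok:
        return []
    # Steps 3 and 5: a below-expectations week opens the configuration
    # review, and each identified friction gets one smallest-possible fix.
    return [f"smallest fix for: {friction}" for friction in review.frictions]

good_week = WeeklyReview(outputs=["shipped blog post", "closed 3 tickets"],
                         quality_ok=True)
bad_week = WeeklyReview(outputs=["draft only"], quality_ok=False,
                        frictions=["sync conflicts lost an hour of notes"])

print(tool_review_actions(good_week))  # []
print(tool_review_actions(bad_week))
```

Note that a good week short-circuits to an empty action list: the review produces no configuration changes at all, which is exactly the rule's point.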