Track workflow quality with tension-paired metrics (cycle time + error rate + energy cost) — single-metric optimization corrupts the system
Track workflow quality using multiple metrics that create tension with each other—at minimum cycle time, error rate, and energy cost—to prevent optimization of any single metric from corrupting the system.
Why This Is a Rule
Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. Optimize a workflow for cycle time alone and you'll cut corners, skip quality checks, and burn out. Optimize for error rate alone and you'll add so many checks that the workflow takes three times as long. Optimize for energy cost alone and you'll automate everything, losing the judgment-dependent quality that manual steps provided.
The solution is tension-paired metrics: metrics that pull in opposite directions, creating a constraint space where the workflow must satisfy all three simultaneously. Cycle time pushes toward speed. Error rate pushes toward thoroughness. Energy cost pushes toward sustainability. No single optimization can game all three — speeding up increases errors, reducing errors slows things down, and doing either recklessly increases energy cost. The workflow must find the genuine improvement that improves one metric without degrading the others.
This is the same principle behind balanced scorecards in business (financial + customer + process + learning metrics) and the iron triangle in project management (scope + time + cost). A single metric always gets gamed; multiple tension-paired metrics force genuine optimization rather than metric manipulation.
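The tension-pairing idea can be made mechanical with a Pareto-dominance check: a change counts as a genuine improvement only if no metric gets worse and at least one gets better. A minimal sketch, assuming lower-is-better for all three metrics; the class and field names are illustrative, not part of the rule:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkflowMetrics:
    """One execution's tension-paired metrics (lower is better for all)."""
    cycle_time_min: float  # wall-clock minutes, start to finish
    error_rate: float      # defects, revisions, or rework per execution
    energy_cost: int       # subjective 1-5 rating of depletion

def dominates(a: WorkflowMetrics, b: WorkflowMetrics) -> bool:
    """True if `a` is at least as good as `b` on every metric and strictly
    better on at least one: a genuine improvement, not a trade-off that
    games a single metric while degrading the others."""
    no_worse = (a.cycle_time_min <= b.cycle_time_min
                and a.error_rate <= b.error_rate
                and a.energy_cost <= b.energy_cost)
    strictly_better = (a.cycle_time_min < b.cycle_time_min
                       or a.error_rate < b.error_rate
                       or a.energy_cost < b.energy_cost)
    return no_worse and strictly_better
```

Under this check, "40% faster but twice the errors" never dominates the baseline, so the speed-only game is visible immediately.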
When This Fires
- When setting up measurement for a workflow you want to improve over time
- When optimizing a workflow and noticing that one metric improves while others degrade
- When a workflow feels "optimized" but produces burnout or quality problems
- Complements "Measure actual elapsed time over 3-4 cycles before optimizing — felt difficulty systematically misidentifies bottlenecks" (measure before optimizing) by specifying what to measure

Common Failure Mode
Speed-only optimization: "We reduced cycle time by 40%!" — but error rate doubled and the executor is exhausted. The cycle time metric looks great in isolation, but the workflow is worse overall. Without counter-metrics, this degradation is invisible in the measurement system.
The Protocol
1. For each workflow, track at minimum three metrics: cycle time (wall-clock start to finish), error rate (defects, revisions, or rework per execution), and energy cost (subjective 1-5 rating of how depleting the execution was).
2. After each improvement, check all three metrics. A genuine improvement either improves one metric without degrading the others, or improves two while only slightly degrading the third.
3. Reject changes that dramatically improve one metric while degrading another; these are metric games, not real improvements.
4. If you must prioritize, use context: cycle time matters most under deadline pressure, error rate matters most for high-stakes outputs, energy cost matters most for daily-frequency workflows.
5. Add domain-specific counter-metrics as needed: output quality score, handoff satisfaction, learning rate. The more metrics that must be satisfied simultaneously, the harder the system is to game.
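Steps 2 and 3 of the protocol can be sketched as a small acceptance check. This is a minimal illustration, not a prescribed implementation: the `slight` and `dramatic` thresholds are assumed values (the rule itself doesn't fix them), all metrics are treated as lower-is-better, and baselines are assumed nonzero:

```python
def evaluate_change(before: dict, after: dict,
                    slight: float = 0.10, dramatic: float = 0.30) -> str:
    """Classify a workflow change against tension-paired metrics.

    `before`/`after` map metric name -> value (lower is better, nonzero
    baselines). `slight` and `dramatic` are illustrative fractional-change
    thresholds, not values taken from the rule.
    """
    delta = {k: (after[k] - before[k]) / before[k] for k in before}
    improved = [k for k, d in delta.items() if d < 0]
    degraded = [k for k, d in delta.items() if d > 0]

    # Step 3: a dramatic gain on one metric paired with any degradation
    # elsewhere is a metric game, not a real improvement.
    if degraded and any(delta[k] <= -dramatic for k in improved):
        return "reject: metric game"

    # Step 2: genuine improvement -- better somewhere, worse nowhere.
    if improved and not degraded:
        return "accept: genuine improvement"

    # Step 2, second form: two metrics improve, the third degrades slightly.
    if len(improved) >= 2 and all(delta[k] <= slight for k in degraded):
        return "accept: acceptable trade-off"

    return "reject: degrades the system"
```

Run against the failure mode above: a 40% cycle-time cut that doubles the error rate trips the metric-game branch, while a modest cut with flat errors and energy passes as genuine.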