Validate new feedback mechanisms in the first cycle: does data reveal unknowns, is effort sustainable, can you specify one adjustment?
For each feedback mechanism you build, verify within the first cycle that the data reveals something you didn't already know, that the measurement effort is sustainable, and that you can specify one concrete adjustment based on the results; if any check fails, redesign before continuing.
Why This Is a Rule
Most feedback mechanisms die within the first month — not from design flaws in later iterations but from foundational problems visible in the first cycle. The three first-cycle checks catch the three fatal design errors before you invest sustained effort in a mechanism that can't work.
Does the data reveal something unknown? If the first cycle's data confirms only what you already knew, the mechanism isn't adding information — it's adding measurement overhead to existing knowledge. A word-count tracker that confirms "I already know I write about 500 words per session" isn't useful unless it also reveals when, where, or under what conditions you deviate.
Is the measurement effort sustainable? If the first cycle's measurement takes more effort than the activity being measured, the mechanism will be abandoned within weeks. Sustainability must be verified empirically, not estimated during design — designers systematically underestimate the ongoing cognitive cost of tracking (Accountability reporting must be near-zero effort — a checkbox or emoji, not a paragraph — because reporting friction kills the whole system).
Can you specify one concrete adjustment? If you can't translate the first cycle's data into at least one specific behavioral change, the mechanism produces data without actionability (Translate every discrepancy into a specific behavioral adjustment for the next cycle — awareness without adjustment is an incomplete loop). Redesign to measure something that maps directly to adjustable behavior.
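One way to make the novelty check concrete is to record a prediction before the cycle, then test whether the observations deviate from it. A minimal sketch in Python using the word-count example above; the `reveals_unknowns` helper and the 15% tolerance are illustrative assumptions, not a prescribed threshold:

```python
# Novelty check as prediction vs. observation: the data only adds
# information if it deviates from what you could already predict.

def reveals_unknowns(predicted: float, observed: list[float],
                     tolerance: float = 0.15) -> bool:
    """True if any observation deviates from the prediction by more
    than `tolerance`, expressed as a fraction of the prediction."""
    return any(abs(x - predicted) / predicted > tolerance for x in observed)

# "I already know I write about 500 words per session."
sessions = [490, 510, 180, 505]  # the 180-word session is the unknown
print(reveals_unknowns(500, sessions))  # True: worth asking what differed that day
```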
When This Fires
- After the first cycle of any newly built feedback mechanism
- When deciding whether to continue investing in a tracking system after initial deployment
- During feedback mechanism design reviews — build the three checks into the validation plan
- When a tracking system "doesn't feel worth it" after initial use — formalize the evaluation with these three checks
Common Failure Mode
Continuing a feedback mechanism past the first cycle despite failing one or more checks: "I'll give it more time — maybe it'll get better." If the mechanism doesn't reveal unknowns, isn't sustainable, or doesn't produce actionable adjustments in the first cycle, more cycles won't fix the fundamental design. Redesign immediately rather than investing effort in a flawed mechanism.
The Protocol
(1) After the first complete cycle of a new feedback mechanism, run three checks:
- Novelty: did the data reveal something I didn't already know? If no → the mechanism isn't adding information. Redesign to measure something that varies in ways you can't predict.
- Sustainability: was the measurement effort manageable? Could I sustain this for 3 months without heroic effort? If no → simplify the measurement until it's effortless (Accountability reporting must be near-zero effort — a checkbox or emoji, not a paragraph — because reporting friction kills the whole system).
- Actionability: based on this data, can I specify one concrete behavioral adjustment? If no → the data isn't connected to adjustable behavior. Redesign to measure something you can actually change.
(2) If all three checks pass → continue; the mechanism is viable.
(3) If any check fails → redesign the specific failing component before the second cycle. Don't proceed with a known-flawed mechanism.
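If it helps to make the pass/fail logic explicit, the whole protocol fits in a few lines. A minimal sketch, assuming you record first-cycle reviews somewhere structured; `FirstCycleReview`, `verdict`, and the sample adjustment are hypothetical names, not an existing tool:

```python
from dataclasses import dataclass

@dataclass
class FirstCycleReview:
    mechanism: str
    revealed_unknown: bool    # Novelty: did the data surprise you?
    effort_sustainable: bool  # Sustainability: could you keep this up for 3 months?
    adjustment: str | None    # Actionability: one concrete behavioral change, or None

def verdict(r: FirstCycleReview) -> str:
    """Return 'continue' if all three checks pass, else name the failing
    components so they can be redesigned before the second cycle."""
    failures = []
    if not r.revealed_unknown:
        failures.append("novelty: measure something you can't predict")
    if not r.effort_sustainable:
        failures.append("sustainability: simplify until reporting is near-zero effort")
    if not r.adjustment:
        failures.append("actionability: measure something that maps to adjustable behavior")
    if failures:
        return "redesign before cycle 2 -> " + "; ".join(failures)
    return "all three checks pass -> continue"

review = FirstCycleReview("word-count tracker", True, True,
                          adjustment="write before checking email")
print(verdict(review))  # all three checks pass -> continue
```

The point of writing it down this way is that the protocol never produces a "give it more time" outcome: every review ends in either continue or a named component to redesign.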