No measurement, no feedback. No feedback, no correction.
You already understand that negative feedback loops stabilize systems and that positive feedback loops amplify them. But none of that matters if you cannot measure what is happening inside the system. Measurement is the prerequisite for every feedback loop you will ever build. Without it, you are steering blind.
This is not a metaphor. It is a structural claim about how systems work. A thermostat without a thermometer is a box on a wall. A sprint retrospective without velocity data is a feelings session. A fitness plan without a scale, a stopwatch, or a rep counter is a hope. The feedback loop has four parts — sensor, comparator, actuator, and process — and measurement is the sensor. Remove it, and the loop does not degrade. It ceases to exist.
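The four-part loop is concrete enough to sketch in code. Here is a toy thermostat in Python, with every name hypothetical, showing sensor, comparator, actuator, and process as separate pieces; remove the sensor call and the rest of the loop has nothing to work with:

```python
def thermostat_step(read_temp, setpoint, heater):
    """One pass through the loop: sensor -> comparator -> actuator."""
    temp = read_temp()        # sensor: measure the process
    error = setpoint - temp   # comparator: compare reading to goal
    heater(error > 0)         # actuator: heat only when below setpoint
    return error

class Room:
    """The process: a toy room that cools 1 degree per step,
    or warms 2 degrees per step while the heater runs."""
    def __init__(self, temp):
        self.temp = temp
        self.heating = False
    def read_temp(self):
        return self.temp
    def heater(self, on):
        self.heating = on
    def step(self):
        self.temp += 2 if self.heating else -1

room = Room(temp=17.0)
for _ in range(10):
    thermostat_step(room.read_temp, setpoint=20.0, heater=room.heater)
    room.step()
print(room.temp)  # oscillates within a couple of degrees of the setpoint
```

Replace `read_temp` with a function that returns a constant and the loop still runs, but it no longer corrects anything: that is the "box on a wall."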
The question is not whether you believe measurement matters. Almost everyone agrees it matters. The question is whether you have actually built measurement into the processes you care about most — your thinking, your work, your habits, your growth. Most people have not. And the absence of measurement is invisible, which is why it persists.
The intellectual lineage: from Kelvin to Grove
The conviction that measurement is the gateway to knowledge has a long history, and that history is more contested than most people realize.
Lord Kelvin set the tone in 1883, in a lecture on electrical units of measurement to the Institution of Civil Engineers: "When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind." Kelvin was speaking about physics, but the principle migrated. Within a century it had colonized management, education, healthcare, and personal development.
The phrase most people associate with this idea — "what gets measured gets managed" — is almost universally attributed to Peter Drucker. There is one problem: Drucker never said it. The Drucker Institute confirmed this in 2013, through a piece by neuro-economist Paul Zak. What Drucker actually wrote, in Management: Tasks, Responsibilities, Practices (1974), was more nuanced: "Work implies not only that somebody is supposed to do the job, but also accountability, a deadline and, finally, the measurement of results — that is, feedback from results on the work and on the planning process itself." Notice what Drucker emphasized: measurement as feedback, not measurement as control.
The real origin of "what gets measured gets managed" traces to V.F. Ridgway's 1956 paper "Dysfunctional Consequences of Performance Measurements" in the Administrative Science Quarterly. And here is the irony that most people miss: Ridgway was making a cautionary argument. His research showed that quantitative measures, used indiscriminately, produce side effects that outweigh benefits. Single measures motivate people to behave in ways that are "wasteful and detrimental to the goals espoused." The full version of the phrase, as journalist Simon Caulkin later summarized Ridgway's work, reads: "What gets measured gets managed — even when it's pointless to measure and manage it, and even if it harms the purpose of the organisation to do so."
W. Edwards Deming, the statistician behind the Total Quality Management revolution, is also frequently credited with saying "you can't manage what you can't measure." He said the opposite. In Out of the Crisis (1982), Deming wrote: "The most important figures that one needs for management are unknown or unknowable, but successful management must nevertheless take account of them." One of his seven deadly diseases of management was running a company on visible figures alone.
This is worth sitting with. The people most associated with measurement-driven management were the ones most careful about its limits. Measurement is essential. Measurement without judgment is dangerous. Both are true.
The person who made measurement operational
If Kelvin gave measurement its philosophical foundation and Ridgway gave it its warning label, Andy Grove gave it operational teeth.
In High Output Management (1983), Grove — then president of Intel — argued that management is fundamentally about process control. "Everything is process," he wrote, "whether you're compiling code, hiring staff, or making breakfast." And process control requires measurement. But Grove was specific about how to measure. He advocated selecting "a small number of objective, quantifiable measures of output" — not measuring everything, but measuring the right things.
Grove introduced two principles that separate productive measurement from bureaucratic overhead. First: use paired indicators, so that you measure both effect and counter-effect. If you measure speed, also measure quality. If you measure output volume, also measure defect rate. Pairing prevents the optimization death spiral where improving one metric destroys another. Second: prefer leading indicators over lagging ones. A leading indicator tells you where the process is heading; a lagging indicator tells you where it already went. By the time revenue declines, the decisions that caused the decline happened months ago. Measurement is only useful for feedback if it arrives in time for you to act on it.
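The paired-indicator idea reduces to a simple check: compare the latest movement of the primary metric against the latest movement of its counterweight, and warn when the first improves while the second degrades. A minimal sketch, with all names and thresholds illustrative:

```python
def paired_indicator_check(primary, counter,
                           name_p="throughput", name_c="defect rate"):
    """Grove-style paired indicators: compare the two most recent
    readings of each series and warn when the primary metric improves
    while its counterweight worsens."""
    p_delta = primary[-1] - primary[-2]   # positive = primary improved
    c_delta = counter[-1] - counter[-2]   # positive = counterweight worsened
    if p_delta > 0 and c_delta > 0:
        return f"warning: {name_p} up but {name_c} also up"
    return "ok"

# Shipping more tickets per sprint while the defect rate climbs:
print(paired_indicator_check([40, 52], [0.8, 1.9]))
# -> warning: throughput up but defect rate also up
```

The same two-line comparison generalizes: speed paired with quality, volume paired with defects, growth paired with churn.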
Grove's framework became the foundation for OKRs — Objectives and Key Results — which he developed at Intel in the 1970s. John Doerr, who learned the system from Grove at Intel, later brought it to Google and codified it in Measure What Matters (2018). The OKR structure is a direct implementation of measurement-as-feedback-loop: an objective defines the direction, and key results define the measurable signals that confirm you are moving in that direction. As Doerr put it: "The key result has to be measurable. At the end you can look, and without any arguments: Did I do that or did I not do it?"
The discipline of OKRs is not goal-setting. It is building sensors into your objectives so that reality can correct your trajectory. Without the measurable key results, an objective is a wish.
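Doerr's "without any arguments: did I do that or did I not do it?" test can be made literal in a data structure: an objective is just a direction until each key result carries a numeric target that a sensor can confirm. A sketch, assuming hypothetical field names rather than any official OKR tooling:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float        # the measurable bar: done or not, no argument
    current: float = 0.0

    def done(self) -> bool:
        return self.current >= self.target

@dataclass
class Objective:
    direction: str
    key_results: list = field(default_factory=list)

    def progress(self) -> float:
        """Fraction of key results confirmed by measurement."""
        if not self.key_results:
            return 0.0   # an objective with no sensors is a wish
        return sum(kr.done() for kr in self.key_results) / len(self.key_results)

okr = Objective("Ship a reliable release", [
    KeyResult("crash-free sessions (%)", target=99.5, current=99.7),
    KeyResult("regression tests passing (%)", target=100, current=100),
])
print(okr.progress())  # -> 1.0
```

The `progress` method is the point: with no key results, it returns 0.0 by construction, which is the code-level version of "an objective without measurable key results is a wish."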
The quantified self: measurement turned inward
The same principle applies to personal systems. In 2007, Wired editors Gary Wolf and Kevin Kelly coined the term "quantified self" to describe a growing community of people who tracked their own biology, behavior, and psychology with data. Wolf, in a 2010 TED talk, framed the movement as a shift from thinking of personal data as a window onto activity to thinking of it as a mirror — a tool for self-reflection, learning, and personal insight.
The quantified self movement demonstrated something that organizations had already learned: measurement changes behavior not primarily through information but through attention. When you track your sleep, you start noticing what affects it. When you log your meals, you eat differently — not because the data told you something you did not already know, but because the act of measuring forced you to observe what you were previously ignoring.
This is the mechanism that makes measurement the prerequisite for feedback loops. Measurement creates a surface for observation. Observation creates the raw material for comparison. Comparison creates the signal for correction. Without the first step, the entire chain collapses. You cannot build a feedback loop around something you are not measuring, because the loop has no input.
The quantified self also revealed the limits. Many early practitioners tracked dozens of variables — steps, heart rate, sleep stages, mood, caffeine, screen time — and discovered that more data did not automatically produce more insight. The data had to be connected to a question. What am I trying to improve? What would tell me if the improvement is working? Without that question, measurement becomes compulsive data collection — the personal equivalent of Ridgway's dysfunctional performance metrics.
The AI parallel: observability or blindness
If you work in technology, you have already encountered this principle in its most concrete form. Modern distributed systems — microservices, cloud infrastructure, AI pipelines — are too complex to manage by reading code or watching dashboards. They require observability: the ability to understand a system's internal state by analyzing the data it produces.
Observability rests on three pillars. Logs are timestamped records of discrete events — what happened and when. Metrics are numeric measurements collected over time — request latency, error rates, CPU utilization, memory usage. Traces follow a single request across multiple services, revealing where time is spent and where failures cascade. Together, these three signals give you a feedback surface. Without them, you are operating a complex system by guesswork.
The parallel to personal and organizational processes is exact. Logs are your journal entries, meeting notes, and decision records — discrete events captured in time. Metrics are the numbers you track — words written per day, revenue per quarter, hours of deep work per week. Traces are the through-lines that connect cause to effect — this decision led to that outcome through these intermediate steps. Without all three, you have blind spots.
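All three pillars fit in a few lines of standard-library Python. The sketch below is illustrative, not a production observability stack: a log line records a discrete event, a dictionary of samples stands in for a metrics store, and a shared trace id ties nested timed spans to one request:

```python
import logging
import time
import uuid
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
metrics = {}  # metric name -> list of numeric samples over time

def record(name, value):
    """Metrics pillar: append a numeric sample to a named series."""
    metrics.setdefault(name, []).append(value)

@contextmanager
def span(trace_id, name):
    """Traces pillar: time one step of a request under a shared id,
    emitting a log line (logs pillar) and a latency sample on exit."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        record(f"{name}.latency_s", elapsed)
        logging.info("trace=%s span=%s took=%.4fs", trace_id, name, elapsed)

trace_id = uuid.uuid4().hex[:8]  # one id follows the whole request
with span(trace_id, "handle_request"):
    with span(trace_id, "fetch_data"):
        time.sleep(0.01)  # stand-in for a database call
    with span(trace_id, "render"):
        time.sleep(0.01)  # stand-in for template rendering

print(sorted(metrics))  # one latency series per span
```

Searching the logs for `trace_id` reconstructs the request's path; the `metrics` series show latency over time; together they are the feedback surface the section describes.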
The AI domain makes this even more vivid. Large language models do not fail by crashing. They fail by confidently producing wrong answers, by drifting in quality over time, by hallucinating citations that do not exist. Traditional monitoring — "is the server up?" — is useless for these failure modes. You need evaluation metrics: faithfulness, relevance, coherence, safety scores. You need human feedback loops that ground automated assessments in real judgment. You need tracing that shows which prompts, retrieval steps, and model calls produced which outputs. As one 2025 industry report from Confident AI put it, distributed tracing, token accounting, automated evals, and human feedback loops are now baseline requirements for production AI systems.
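The shape of an automated eval loop is simple even if production scorers are not. Below is a deliberately crude sketch: word overlap stands in for a real relevance or faithfulness metric, and the threshold is an arbitrary illustration, but the structure — score every output, flag the ones below the bar — is the feedback loop itself:

```python
def relevance_score(answer: str, reference: str) -> float:
    """Crude stand-in for a relevance metric: the fraction of
    reference words that appear in the answer. Real evals use
    far richer scorers (semantic similarity, LLM judges, humans)."""
    ref = set(reference.lower().split())
    ans = set(answer.lower().split())
    return len(ref & ans) / len(ref) if ref else 0.0

def run_evals(cases, threshold=0.5):
    """Score each (answer, reference) pair; return those below the bar."""
    return [(answer, relevance_score(answer, reference))
            for answer, reference in cases
            if relevance_score(answer, reference) < threshold]

cases = [
    ("Paris is the capital of France", "the capital of France is Paris"),
    ("I am not sure", "the capital of France is Paris"),
]
print(len(run_evals(cases)))  # -> 1: one output fell below the bar
```

A model that "fails by confidently producing wrong answers" never trips a server-uptime monitor; it only trips a loop like this one, run continuously over sampled traffic.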
The principle does not change between a thermostat and a trillion-parameter model. If you cannot measure the outcome, you cannot build a feedback loop around it.
Goodhart's Law: when measurement corrupts itself
Measurement is necessary. It is not sufficient. And it is not safe.
Charles Goodhart, a British economist, observed in 1975 that "any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." Anthropologist Marilyn Strathern later generalized this into the version most people know: "When a measure becomes a target, it ceases to be a good measure."
This is not a theoretical concern. It is the single most common failure mode of measurement systems. A call center measures average handle time, and agents start hanging up on difficult customers. A school measures standardized test scores, and teachers narrow the curriculum to test preparation. A software team measures lines of code, and developers write verbose implementations. A startup measures monthly active users, and the product team builds engagement loops that increase screen time without increasing value.
Donald Campbell, a social psychologist, formalized this independently in 1976 with what became Campbell's Law: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
Goodhart's Law does not mean you should stop measuring. It means you should measure with awareness that the act of measurement changes the system being measured. Grove's paired indicators are one defense: measure the thing you want to improve and the thing that might degrade if you over-optimize. Another defense is measuring outcomes rather than activities. Measure whether customers got their problem solved, not how quickly the agent closed the ticket. Measure whether the team shipped working software, not how many story points they completed.
The deeper defense is treating every metric as a signal, not a score. A signal tells you something about reality. A score tells you something about performance. When people are evaluated by a score, they optimize the score. When they use a signal to understand what is happening, they optimize the outcome.
How to build measurement into your processes
Here is the practical framework. Every process you run — a project, a habit, a creative practice, a team workflow — can have measurement built into it by answering four questions:
1. What outcome am I trying to produce? This is the objective. Be specific. "Get healthier" is not measurable. "Reduce resting heart rate by 5 bpm over 90 days" is. "Write better code" is not measurable. "Reduce production incidents caused by my code to zero for three consecutive sprints" is.
2. What signal would tell me if I am moving toward that outcome? This is the key result, the metric, the sensor. Choose signals that are leading (they predict the outcome before it arrives), paired (they include both the effect you want and the side effect you want to prevent), and simple (you can actually collect them without heroic effort).
3. At what cadence will I check the signal? A measurement you collect but never review is not a feedback loop. It is a database. Set a recurring moment — daily, weekly, per-sprint — where you look at the data and ask: is this working? The cadence should match the speed of the process. Daily for a writing habit. Weekly for a fitness program. Per-sprint for a software team. Quarterly for a strategic initiative.
4. What will I change based on what I find? This is the most neglected step. Measurement without a decision rule is observation without feedback. Define in advance: if the metric drops below X, I will do Y. If the metric stays flat for N cycles, I will revisit the approach. If the metric spikes, I will investigate before celebrating. The measurement only becomes a feedback loop when it connects to action.
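The four questions above can be captured in a single small structure: an outcome, a sensor, a cadence, and a decision rule wired together. Every name and number below is a hypothetical illustration of the framework, not a prescribed tool:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MeasuredProcess:
    outcome: str                  # 1. what outcome am I trying to produce?
    signal: Callable[[], float]   # 2. what sensor reports progress?
    cadence_days: int             # 3. how often do I check the signal?
    floor: float                  # 4. decision rule: below this, act
    action: str                   #    ...and this is the pre-committed action

    def review(self) -> str:
        """One cadence tick: read the sensor, apply the decision rule."""
        value = self.signal()
        if value < self.floor:
            return f"ACT: {self.action} (signal={value})"
        return f"OK: on track (signal={value})"

words_per_day = [820]  # hypothetical tracked data for a writing habit
habit = MeasuredProcess(
    outcome="finish the draft in 90 days",
    signal=lambda: words_per_day[-1],
    cadence_days=1,
    floor=500.0,
    action="block a 60-minute writing slot tomorrow",
)
print(habit.review())  # -> OK: on track (signal=820)
```

The decision rule is the part most trackers omit: because `floor` and `action` are defined before the data arrives, the review step produces a correction, not just an observation.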
The cost of not measuring
Most people do not fail because they measure the wrong things. They fail because they measure nothing. They run processes — meetings, habits, projects, relationships — for months or years without any signal telling them whether those processes are working.
The cost is invisible, which is why it persists. You cannot see the feedback loop that does not exist. You cannot miss the correction that never happened. You simply continue on the same trajectory, mistaking consistency for effectiveness and effort for progress.
The first feedback loop in any system is the simplest: start measuring one thing. One metric for one process. Track it for long enough to see a pattern. Then use the pattern to make one adjustment. You have just built what did not exist before: a system that can learn from its own output.
Everything in the next nineteen lessons of this phase — leading indicators, emotional feedback loops, habit loops, multi-loop systems, feedback hygiene — depends on this foundation. You cannot build a feedback loop around what you are not measuring. Start measuring.
Sources and further reading:
- Lord Kelvin, "Electrical Units of Measurement," lecture to the Institution of Civil Engineers, May 3, 1883.
- V.F. Ridgway, "Dysfunctional Consequences of Performance Measurements," Administrative Science Quarterly 1 (1956): 240-247.
- W. Edwards Deming, Out of the Crisis (MIT Press, 1982).
- Andrew S. Grove, High Output Management (Random House, 1983).
- Charles Goodhart, "Problems of Monetary Management: The U.K. Experience," Papers in Monetary Economics (Reserve Bank of Australia, 1975).
- John Doerr, Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs (Portfolio/Penguin, 2018).
- Gary Wolf, "The Quantified Self," TED talk, 2010; Kevin Kelly and Gary Wolf, Quantified Self (quantifiedself.com), founded 2007.