You are steering by the wake
Most people measure outcomes. They check the scale after a month of eating. They review revenue at the end of the quarter. They look at employee turnover numbers after the team has already fractured. Then they react — too late, with too little context, to problems that would have been visible weeks or months earlier had they been watching the right signals.
This is what it means to rely on lagging indicators. Lagging indicators tell you what already happened. They are the rearview mirror. They confirm whether your past decisions produced the results you wanted. What they cannot do is help you adjust in real time, because by the time a lagging indicator moves, the causal chain that produced it has already run its course.
Leading indicators are different. They measure the inputs, behaviors, and conditions that predict outcomes before those outcomes materialize. They are the windshield — showing you what is coming so you can steer.
The distinction between leading and lagging indicators is the difference between a feedback loop that teaches you something useful and a feedback loop that only delivers postmortems.
The anatomy of leading versus lagging
Andy Grove, in High Output Management (1983), described leading indicators as a way to "cut holes in the black box" of an organization. His argument was direct: most managers stare at output metrics — units shipped, revenue booked, customers acquired — and by the time those numbers arrive, the work that produced them happened weeks or months ago. Grove advocated for a small set of measurable leading indicators that could be reviewed daily, giving managers time to intervene before problems became outcomes.
The structural difference is this:
Lagging indicators are outputs. Revenue, customer churn, body weight, test scores, employee attrition. They are important — they tell you whether your system is producing the results you want. But they are not actionable in real time because they measure what already happened.
Leading indicators are inputs or early signals. Pipeline velocity, customer activation rate, daily caloric intake, hours of deliberate practice, one-on-one meeting frequency. They are actionable because they measure things you can still influence. And they are predictive because changes in leading indicators precede changes in lagging ones.
Here is the key insight: you cannot directly control a lagging indicator. You cannot will revenue to increase or churn to decrease. You can only control the leading inputs that drive those outcomes. If you are not measuring the leading inputs, you are flying blind until the lagging indicator arrives — and then all you can do is autopsy what went wrong.
Kaplan and Norton formalized this in their Balanced Scorecard framework (1992), published in the Harvard Business Review. They argued that financial metrics alone — the ultimate lagging indicators — were insufficient for managing an organization. Their framework added three forward-looking perspectives: customer satisfaction, internal process efficiency, and organizational learning capacity. Each perspective contained leading indicators that predicted future financial performance. The insight was that if you only measured financial results, you would always be reacting to the past. But if you measured the operational drivers of those results, you could intervene before the financials deteriorated.
Proxy metrics: when you cannot measure the real thing directly
Sometimes the outcome you care about is too slow, too expensive, or too ambiguous to measure directly. In those cases, you need a proxy metric — a measurable signal that correlates strongly with the outcome you actually want.
Sean Ellis, who led growth at Dropbox, LogMeIn, and Eventbrite, discovered one of the most powerful proxy metrics in startup history. He wanted to know whether a product had achieved product-market fit — a notoriously difficult thing to measure directly. After benchmarking nearly a hundred startups, he found that a single survey question predicted it: "How would you feel if you could no longer use this product?" If 40 percent or more of users answered "very disappointed," the company consistently achieved sustainable growth. Below 40 percent, growth stalled regardless of marketing spend.
That 40 percent threshold is a leading indicator. Revenue, the lagging indicator, would not show the difference for 6 to 12 months. But the Ellis score showed it immediately — and it was actionable. Teams that fell below 40 percent could identify which user segments were most disappointed, study what those users valued, and reshape the product before the revenue numbers caught up.
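The Ellis score itself is just the fraction of respondents choosing "very disappointed." A minimal sketch, using synthetic survey responses (the response counts below are invented for illustration):

```python
from collections import Counter

def ellis_score(responses):
    """Fraction of respondents answering 'very disappointed' to
    'How would you feel if you could no longer use this product?'"""
    counts = Counter(responses)
    return counts["very disappointed"] / len(responses)

# Hypothetical survey results: 45 of 100 users are 'very disappointed'.
responses = (
    ["very disappointed"] * 45
    + ["somewhat disappointed"] * 35
    + ["not disappointed"] * 20
)
score = ellis_score(responses)
print(f"Ellis score: {score:.0%}")  # Ellis score: 45%
print("at or above the 40% threshold" if score >= 0.40 else "below threshold")
```

The real leverage is not the arithmetic but what it enables: segmenting the "very disappointed" cohort to learn what they value.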
Proxy metrics work because they exploit the causal structure of the system. If you understand what drives an outcome, you can measure the driver instead of waiting for the outcome. The driver moves first, it moves faster, and it is usually something you can influence.
The criteria for a good proxy metric are straightforward. It must correlate strongly with the outcome you care about. It must be measurable sooner than the outcome itself. It must be actionable — something your behavior can influence. And it must be stable enough that changes in the proxy reflect real changes in the underlying system, not noise.
Early warning systems: measuring before the crisis
The concept of leading indicators extends naturally into early warning systems — structured approaches to detecting problems before they become visible in output metrics.
In organizational risk management, Key Risk Indicators (KRIs) serve this function. A KRI is a leading metric that signals increasing exposure to a specific risk. The number of users with super-admin access beyond defined norms, for example, is a KRI for security breaches. The metric does not tell you that a breach has occurred — it tells you that the conditions for a breach are ripening. That is the difference between a lagging indicator (breach occurred) and a leading one (breach probability is increasing).
Donella Meadows, in Thinking in Systems (2008), identified the lengths of delays relative to the rate of system change as one of twelve leverage points in complex systems. When delays are long — when the gap between action and observable consequence is large — systems become hard to manage. Leading indicators compress those delays. They give you a shorter feedback loop by measuring upstream signals rather than waiting for downstream consequences.
This is why Grove insisted that indicators must be reviewed frequently. A leading indicator reviewed quarterly is barely better than a lagging indicator reviewed quarterly. The value of leading indicators comes from their ability to shorten your feedback cycle — but only if you actually look at them often enough to act on what they show you.
The personal application: your own leading indicators
This is not just an organizational concept. Your personal feedback systems suffer from the same lagging-indicator problem.
Consider health. Most people use body weight as their primary metric. But weight is a lagging indicator — it reflects the cumulative effect of nutrition, sleep, stress, and activity over weeks. By the time the scale moves, you have already been off track for a while. Leading indicators for body composition include daily caloric intake, protein consumption, training volume, and sleep duration. These move immediately in response to your behavior, and changes in these metrics predict where the scale will be in four to six weeks.
Consider career development. Most people use promotions, raises, or job offers as their metrics. But these are lagging indicators — they reflect months or years of accumulated skill, reputation, and positioning. Leading indicators for career growth include the number of stretch assignments you take on, the frequency of feedback you solicit, the depth of your professional network expansion, and the number of new skills you practice deliberately each quarter.
Consider learning. Most people use test scores or certification completion as their metrics. But these arrive at the end of the learning process. Leading indicators for learning effectiveness include retrieval practice frequency, the ability to explain a concept without notes, the number of connections you can draw between the new concept and existing knowledge, and whether you can apply the concept to a novel problem.
In every domain, the pattern is the same: the metrics that most people track are lagging indicators that arrive too late to be actionable. The metrics that actually accelerate improvement are leading indicators that signal what is coming before it arrives.
The AI parallel: validation loss and canary metrics
Machine learning training provides one of the cleanest illustrations of leading versus lagging indicators in any technical domain.
When you train a neural network, you care about generalization — how well the model performs on data it has never seen. But you cannot measure generalization directly during training, because the whole point is that the test data is unseen. So you use a proxy: validation loss. You hold out a subset of data, measure the model's loss on that subset at each training epoch, and use that measurement as a leading indicator of generalization performance.
The pattern during training is predictable. Training loss decreases continuously as the model memorizes the training data. Validation loss decreases initially, then begins to increase — the inflection point where the model starts overfitting. Validation loss is a leading indicator of overfitting. If you wait for the lagging indicator — poor performance on the test set after training completes — you have already wasted compute and time.
Early stopping, a standard regularization technique, operationalizes this. You define a patience parameter — the number of epochs you will tolerate without improvement in validation loss — and stop training when that threshold is crossed. The entire technique depends on treating validation loss as a leading indicator that predicts when continued training will harm rather than help.
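The patience logic can be shown in a few lines. This is a minimal sketch of the generic technique, not any particular framework's API; the U-shaped validation curve below is synthetic, standing in for real per-epoch evaluation:

```python
def train_with_early_stopping(epochs, val_loss_fn, patience=5):
    """Stop training once validation loss has not improved for
    `patience` consecutive epochs; return the best epoch and loss.
    val_loss_fn(epoch) returns the validation loss after that epoch."""
    best_loss = float("inf")
    best_epoch = 0
    for epoch in range(epochs):
        loss = val_loss_fn(epoch)
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # the leading indicator says further training will hurt
    return best_epoch, best_loss

# Synthetic validation curve: improves until epoch 20, then overfits.
curve = lambda e: abs(e - 20) / 20 + 1.0
best_epoch, best_loss = train_with_early_stopping(100, curve, patience=5)
print(best_epoch, best_loss)  # 20 1.0
```

Training stops at epoch 25, five epochs after the last improvement, rather than burning compute through all 100 epochs waiting for the lagging test-set result.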
Google's Site Reliability Engineering practice applies the same principle to production systems through canary deployments. When releasing new code, a canary deployment routes a small percentage of traffic to the new version while the majority continues on the stable version. Engineers then monitor leading metrics — latency, error rate, and resource saturation — comparing the canary population to the control population. If the canary's leading indicators degrade, the deployment rolls back before the lagging indicator (user complaints, revenue impact) ever materializes.
The Google SRE Workbook describes canary analysis as a two-step process: first, assess the canary based on a selected list of leading metrics; second, decide whether to promote or roll back. The critical design choice is which metrics to monitor. Error rate is a leading indicator of user-visible failures. Latency is a leading indicator of degraded experience. Resource saturation is a leading indicator of capacity exhaustion. Each metric gives you time to act before the consequence reaches the user.
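The canary decision reduces to comparing each leading metric across the two populations. A minimal sketch, assuming a simple ratio-based degradation rule — the metric names, sample values, and the 1.5x threshold are illustrative choices, not prescribed by the SRE Workbook:

```python
def canary_verdict(control, canary, max_ratio=1.5):
    """Compare leading metrics between canary and control populations.
    Recommend rollback if any canary metric exceeds max_ratio times
    the control baseline; otherwise recommend promotion."""
    failing = [
        name for name in control
        if canary[name] > control[name] * max_ratio
    ]
    return ("rollback", failing) if failing else ("promote", [])

# Hypothetical measurements from the two populations.
control = {"error_rate": 0.002, "p99_latency_ms": 180, "cpu_util": 0.55}
canary = {"error_rate": 0.009, "p99_latency_ms": 195, "cpu_util": 0.60}

verdict, failing = canary_verdict(control, canary)
print(verdict, failing)  # rollback ['error_rate']
```

The canary's error rate has degraded well past the threshold, so the release rolls back before any user complaint — the lagging indicator — ever arrives.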
This is the pattern across all domains: leading indicators buy you time, and time is what makes feedback actionable.
Building your leading indicator system
Identifying leading indicators requires understanding the causal structure of the system you are trying to manage. Ask three questions:
What outcome do I care about? This is your lagging indicator. Be specific. Not "health" but "body fat percentage." Not "career success" but "promotion to senior engineer within 18 months."
What inputs and behaviors drive that outcome? Map the causal chain backward from the outcome. What actions, done consistently, produce the result you want? These are your candidate leading indicators.
Which of those inputs can I measure frequently and act on? Not all causes are measurable. Not all measurable causes are actionable. The best leading indicators are both — you can observe them quickly and you can change them through your behavior.
Once you have identified your leading indicators, the implementation is straightforward:
- Measure leading indicators at higher frequency than lagging ones. Daily or weekly for leading indicators, monthly or quarterly for lagging confirmation. This mirrors Grove's prescription of daily indicator review.
- Set thresholds that trigger action. A leading indicator without a response protocol is just data. Define what "off track" looks like and what you will do when the indicator crosses that line.
- Validate the correlation over time. Not every leading indicator actually predicts the outcome it is supposed to. Review periodically: when the leading indicator moved, did the lagging indicator follow? If not, you have a bad proxy and need to find a better one.
- Resist the temptation to game the leading indicator. Goodhart's Law — "when a measure becomes a target, it ceases to be a good measure" — applies especially to leading indicators because they are actionable. If you optimize for the proxy without understanding why it predicts the outcome, you will decouple the proxy from the outcome and lose your signal.
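The validation step above has a concrete form: check whether movements in the leading series actually precede movements in the lagging one. A minimal sketch using a lagged Pearson correlation — the series below are synthetic, with the lagging series constructed to follow the leading series by four periods:

```python
def lagged_correlation(leading, lagging, lag):
    """Pearson correlation between leading[t] and lagging[t + lag].
    A proxy worth keeping correlates strongly at a positive lag:
    the leading series moves first, the lagging series follows."""
    x = leading[:len(leading) - lag]
    y = lagging[lag:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Synthetic weekly data: the lagging outcome trails the leading input by 4 weeks.
leading = [3, 5, 4, 6, 7, 6, 8, 9, 8, 10, 11, 10]
lagging = [0, 0, 0, 0] + leading[:-4]
print(round(lagged_correlation(leading, lagging, lag=4), 2))  # 1.0
```

A correlation that stays high at the expected lag confirms the proxy; one that decays after you start optimizing the proxy is Goodhart's Law showing up in your data.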
Why this matters for your epistemic infrastructure
Every feedback loop in your cognitive infrastructure has an indicator problem. You are either measuring things that tell you what already happened, or measuring things that tell you what is about to happen. The first gives you history. The second gives you agency.
The lesson from L-0467 was that measurement is the prerequisite for feedback. This lesson adds the crucial refinement: what you measure determines how fast your feedback loop runs. Measure lagging indicators and your loop runs on a delay — you learn slowly, correct slowly, and drift further before you notice. Measure leading indicators and your loop tightens — you learn sooner, correct sooner, and stay closer to your intended trajectory.
The next lesson, on feedback from reality versus feedback from people, will explore a different axis of the same problem: not when you get feedback, but what kind of feedback you are getting. Leading indicators from direct observation of reality operate differently than leading indicators derived from other people's reactions. Both matter. But they fail in different ways.
For now, the practice is concrete: find the lagging indicators you are currently staring at, identify the leading indicators that predict them, and shift your attention upstream. The feedback you need most is the feedback that arrives before the outcome is already decided.