Two channels, two error profiles
You are always receiving feedback from two fundamentally different sources. The first is reality itself — the direct, measurable consequences of your actions. Your code compiles or it doesn't. Your revenue goes up or it doesn't. Your body recovers from the workout or it doesn't. Reality feedback is the scoreboard. It doesn't care about your intentions, your effort, or your feelings. It reports what happened.
The second source is other people — colleagues, friends, mentors, managers, strangers on the internet. They tell you your presentation was great. They say your startup idea has legs. They suggest you're overreacting or underreacting. People feedback is interpretive. It passes through the filter of another person's knowledge, biases, social goals, and emotional state before it reaches you.
Both channels carry genuine signal. But they fail in completely different ways, and most people treat them as if they're interchangeable. They are not. Understanding the distinct error profile of each is one of the most consequential epistemic upgrades you can make.
Reality feedback: high validity, low empathy
Reality feedback is what Nassim Taleb calls the domain of "skin in the game." In Skin in the Game (2018), Taleb argues that you will never fully convince someone they are wrong through argument alone — only reality can do that. The entrepreneur who ignores market signals doesn't get persuaded out of a bad business model; the business simply fails. The feedback is delivered not through words but through consequences.
This is what makes reality feedback so valuable: it has no social motive. A conversion rate doesn't soften the truth to protect your feelings. A blood test doesn't worry about damaging your self-esteem. A bank balance doesn't hedge its message because it wants to maintain a good working relationship with you. Reality feedback is what the measurement literature calls high-validity information — it corresponds to what actually happened, not to what someone thinks happened or wishes had happened.
The Kluger and DeNisi meta-analysis (1996), one of the most comprehensive studies of feedback ever conducted, reviewed 607 effect sizes across 23,663 observations and found that feedback interventions improve performance on average. But here is the critical finding: over one-third of feedback interventions actually decreased performance. The interventions most likely to harm performance were those that directed attention toward the self rather than toward the task. In other words, feedback that makes you think about who you are hurts performance, while feedback that makes you think about what to do differently helps it. Reality feedback is almost always task-focused. It tells you what happened and implies what to adjust. It rarely triggers identity-level rumination unless you bring that framing yourself.
But reality feedback has its own limitations. It is often delayed — you don't know if a career decision was right for years. It can be noisy — a single failed experiment doesn't mean the hypothesis is wrong. And it is silent on dimensions that matter enormously: whether your team trusts you, whether your communication style alienates collaborators, whether your success is creating resentment that will eventually undermine you. Reality tells you what happened. It rarely tells you why other people responded the way they did.
People feedback: rich context, systematic distortion
When another person gives you feedback, they are doing something reality cannot: they are interpreting. They can tell you not just that your presentation fell flat, but that it fell flat because you talked past the audience's concerns. They can tell you not just that the team is underperforming, but that morale dropped after you canceled the offsite. Interpretation is the unique value of people feedback — it provides why and how, not just what.
But interpretation comes wrapped in distortion, and the distortions are systematic, not random.
Social desirability bias is the most pervasive. Research on social desirability, extensively documented in the European Journal of Social Psychology (Nederhof, 1985) and subsequent studies, shows that people systematically overreport positive behavior and underreport negative behavior in the presence of others. In feedback contexts, this means people are more likely to tell you what sounds good than what is true. Related distortions compound it: a manager may rate an aggressive, results-oriented employee higher on interpersonal skills than warranted because the employee's strong task performance creates a halo effect that colors unrelated judgments. Social desirability doesn't make people liars — it makes them unconsciously diplomatic.
The MUM effect compounds this. Research dating back to the 1970s and formalized by Dibble and Levine (2010) demonstrates that people are systematically reluctant to share bad news relative to good news. The term "MUM" — keeping Mum about Undesirable Messages — describes a robust finding: messengers hesitate to deliver negative information because they fear being blamed (the "shoot the messenger" response), because they empathize with the recipient's distress, or because they want to preserve the relationship. This means the people around you are filtering out exactly the feedback you most need to hear. The worse the news, the less likely it is to reach you unmodified.
Noise, as Daniel Kahneman, Olivier Sibony, and Cass Sunstein describe in Noise: A Flaw in Human Judgment (2021), adds another layer. Even when people try to give honest, calibrated feedback, their judgments vary wildly from each other. The book's central example: insurance underwriters independently evaluating the same case produced premium quotes that differed by a median of 55% — roughly five times the variation executives expected. This isn't bias (a systematic lean in one direction). It's noise: random variability in human judgment. When five people give you feedback on the same piece of work, much of the disagreement between them isn't signal — it's noise in their individual judgment processes.
Put these three together — social desirability, the MUM effect, and noise — and you get a picture of people feedback that is rich in contextual insight but systematically distorted toward the positive, filtered of the most critical information, and inconsistent across evaluators.
The feedback typology you actually need
Most people categorize feedback as "positive" or "negative." That framing is almost useless because it conflates the source with the valence. A more productive typology distinguishes feedback along two axes: source (reality vs. people) and latency (immediate vs. delayed).
| | Immediate | Delayed |
| ----------- | --------------------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
| Reality | Code fails to compile. Customer doesn't buy. Scale shows weight gain. | Revenue trends over quarters. Health markers over years. Career trajectory over decades. |
| People | Facial expression during your pitch. Slack reaction to your message. | Performance review. Reputation change. Relationship erosion over time. |
Each quadrant has different strengths. Immediate reality feedback is the tightest learning loop — you act, you see the result, you adjust. This is why, as L-0463 established, tight feedback loops accelerate learning. Delayed reality feedback is the most trustworthy signal for big decisions, but it requires patience and the ability to distinguish signal from noise across long timeframes.
Immediate people feedback — a wince, a laugh, a furrowed brow — is surprisingly high-bandwidth because it is involuntary. Micro-expressions and body language are harder to fake than verbal feedback, which is why experienced negotiators watch faces, not words. Delayed people feedback — how your reputation shifts, whether people seek you out or avoid you over months — integrates hundreds of small interactions into a meaningful pattern. It is slow but highly informative.
The key insight is that no single quadrant is sufficient. You need all four, and you need to weight them differently depending on what you're trying to learn.
Why people over-index on social feedback
If reality feedback is more valid, why do most people rely more heavily on people feedback? Three reasons.
First, people feedback is emotionally salient. A colleague telling you "that was brilliant" or "that missed the mark" triggers a dopamine or cortisol response that a metric on a dashboard does not. You feel people feedback in a way you don't feel reality feedback, so it occupies more of your attention.
Second, people feedback is narratively structured. When your manager says "you need to work on executive presence," that is a story you can react to. When your quarterly numbers decline by 3%, that is a data point that requires you to construct the story yourself. Humans are narrative processors. Pre-packaged narratives are cognitively easier to consume than raw data.
Third, people feedback is socially reinforced. Seeking and responding to the opinions of others is how social groups maintain cohesion. Ignoring what people think carries social cost. Ignoring what reality shows carries only practical cost — and practical costs are often delayed, while social costs are immediate.
The result is that most people live in a feedback environment dominated by social signal, supplemented occasionally by reality signal, when the optimal ratio is closer to the reverse.
Building your own feedback correction
Ray Dalio's solution at Bridgewater Associates was to create systems that force reality and people feedback into direct confrontation. His "idea meritocracy" weights opinions by the track record of the person offering them — essentially requiring that people feedback be validated against the reality feedback of that person's historical accuracy. Your opinion on trading strategy carries more weight if your past trades were profitable. This is Dalio's version of the same principle: people feedback is valuable, but only when it is calibrated against results.
You don't need to run a hedge fund to apply this. Here is a practical framework:
For decisions with measurable outcomes, lead with reality feedback. Define what you will measure before you act. Track the result. Consult people feedback to interpret the result, but don't let interpretation override measurement. If your product's retention rate is declining but everyone on the team "feels good about the direction," the retention rate wins.
For decisions about relationships and perception, lead with people feedback — but triangulate. Don't rely on one person's opinion. Seek feedback from at least three people with different relationships to you and different incentive structures. Look for the pattern across their responses, not the content of any single response. The signal is in the convergence.
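The triangulation step can be made mechanical. A minimal sketch, assuming each reviewer's feedback has already been distilled into short theme labels — the reviewer roles and themes below are hypothetical:

```python
from collections import Counter

# Hypothetical themes extracted from three reviewers with different
# relationships to you and different incentive structures.
reviews = {
    "peer":    {"pacing too fast", "strong data", "unclear ask"},
    "manager": {"unclear ask"},
    "client":  {"pacing too fast", "unclear ask"},
}

def convergent_themes(reviews: dict[str, set[str]], min_sources: int = 2) -> list[str]:
    """Return themes raised independently by at least `min_sources` reviewers.
    Convergence across evaluators is the signal; a single mention is likely noise."""
    counts = Counter(theme for themes in reviews.values() for theme in themes)
    return sorted(theme for theme, n in counts.items() if n >= min_sources)

print(convergent_themes(reviews))  # → ['pacing too fast', 'unclear ask']
```

Note that "strong data" drops out: one reviewer's isolated opinion is treated as noise, exactly as the triangulation rule prescribes.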
For decisions with long time horizons, build leading indicators (as covered in L-0468) that give you reality feedback faster than the ultimate outcome would. If you're building a career, don't wait ten years to find out if your strategy worked. Track intermediate signals: are you learning new skills each quarter? Is your network growing? Are the projects you're offered increasing in scope?
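A leading-indicator check like this can be as simple as a quarterly scorecard. The indicators and thresholds below are illustrative assumptions, not prescriptions:

```python
from dataclasses import dataclass

@dataclass
class QuarterlyCheck:
    """Hypothetical leading indicators for a career decision."""
    new_skills_learned: int        # reality feedback: skills practiced to competence
    new_meaningful_contacts: int   # reality feedback: network growth this quarter
    project_scope_increased: bool  # reality feedback: are offered projects growing?

def leading_indicator_score(check: QuarterlyCheck) -> float:
    """Collapse one quarter's leading indicators into a 0..1 health score.
    Thresholds are assumptions; the point is to get reality feedback
    quarterly instead of waiting a decade for the ultimate outcome."""
    signals = [
        check.new_skills_learned >= 1,
        check.new_meaningful_contacts >= 3,
        check.project_scope_increased,
    ]
    return sum(signals) / len(signals)

q = QuarterlyCheck(new_skills_learned=2, new_meaningful_contacts=1,
                   project_scope_increased=True)
print(leading_indicator_score(q))  # two of three indicators met this quarter
```

The score itself matters less than the trend: a declining series across quarters is fast reality feedback on a slow decision.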
The AI parallel: automated metrics versus human evaluation
This distinction between reality feedback and people feedback is not just a human problem. It is one of the central design challenges in artificial intelligence.
When training large language models, engineers face the same two-channel problem. Automated metrics — perplexity, BLEU scores, ROUGE scores — are reality feedback. They measure what the model actually produced against an objective benchmark. But these metrics, like all reality feedback, are narrow. A model can optimize BLEU scores while producing text that humans find unhelpful, awkward, or harmful. The metric captures what happened but misses what mattered.
This is why the field developed Reinforcement Learning from Human Feedback (RLHF). Human evaluators provide people feedback — ranking model outputs by quality, helpfulness, and safety. This human judgment, like all people feedback, is richer in context but subject to the same distortions: social desirability (evaluators may rate outputs they personally agree with as "better"), noise (different evaluators rate the same output differently), and bias (cultural, linguistic, and ideological).
The solution the AI field converged on is not to choose one channel over the other. It is to combine them deliberately: use automated metrics as a reality-feedback baseline, use human evaluation to capture dimensions that metrics miss, and use the tension between the two to improve both. Anthropic's Constitutional AI takes this further by having an AI model evaluate outputs against written principles — essentially attempting to reduce the noise in human evaluation while preserving its contextual richness.
This is the same architecture you need for your own feedback system. Reality feedback alone makes you effective but blind to social dynamics. People feedback alone makes you socially attuned but vulnerable to systematic distortion. The combination — with clear awareness of which channel you're drawing from and what its error profile is — makes you genuinely adaptive.
Convergence and divergence as signal
The most actionable heuristic from this lesson is simple: pay the most attention where reality feedback and people feedback diverge.
When both channels agree, you have high confidence. Your sales numbers are up and your team says the new strategy is working. Your fitness metrics are improving and your trainer says your form looks good. Convergence is reassuring and usually trustworthy.
When the channels diverge, something important is happening. Either reality is telling you something people won't (the MUM effect is filtering bad news), or people are seeing something reality hasn't caught up to yet (they notice a process problem before it shows up in the metrics). Divergence is uncomfortable, but it is where the highest-value learning lives.
The practice is to stop asking "is this positive or negative feedback?" and start asking "which channel is this coming from, and what does the other channel say?" That question alone will change how you process every piece of feedback you receive.
Reality doesn't have an agenda. People do — not because they're dishonest, but because they're human. Both channels are carrying signal. Your job is to know which distortions belong to which channel, and to never mistake one for the other.