Two true statements that cannot both be true — until you check the scope
In 1973, the University of California at Berkeley appeared to have a serious gender discrimination problem. The graduate division admitted roughly 44% of male applicants and only 35% of female applicants. The numbers were damning. School officials asked statistician Peter Bickel to investigate before a lawsuit materialized.
Bickel, along with colleagues Hammel and O'Connell, published their analysis in Science in 1975. What they found was the opposite of what the aggregate data suggested. When they examined each department individually, most departments showed a slight bias in favor of women. The aggregate discrimination vanished — and in some cases reversed — once you asked the right scoping question: which departments were these applicants applying to?
Women disproportionately applied to departments with low overall admission rates (humanities, social sciences), while men disproportionately applied to departments with high overall admission rates (engineering, physical sciences). The "bias" was not in the admissions decisions. It was in the aggregation — in collapsing different scopes into a single number and then treating that number as if it described a single phenomenon.
This is Simpson's paradox: a trend that appears in aggregated data reverses or disappears when the data is disaggregated by a relevant variable. And it is the statistical expression of a much deeper epistemic principle — what seems contradictory is often two statements that are each true within their own scope.
Scope is the hidden variable in most disagreements
A scope is the boundary condition that determines where a claim holds. Every meaningful statement has one, whether the speaker makes it explicit or not. "Exercise is good for you" has an implicit scope: moderate habitual exercise, for generally healthy populations, measured over months or years. "Exercise increases cardiac risk" also has a scope: vigorous exertion during a single session, for individuals with pre-existing cardiovascular conditions, measured in the minutes during and after the event.
Both are well-supported by research. Both are true. They are not contradictions. They are claims operating within different scopes — different populations, different intensities, different timeframes. The appearance of contradiction is an artifact of stripping the scope from each statement and placing the naked claims side by side.
This pattern is everywhere. "Move fast and break things" contradicts "measure twice, cut once" — until you scope them. The first applies to early-stage product discovery where the cost of inaction exceeds the cost of mistakes. The second applies to infrastructure decisions where errors are expensive or irreversible. A startup founder and a bridge engineer are not disagreeing. They are solving problems with fundamentally different error costs, and their advice reflects those different scopes.
The philosopher Paul Grice formalized part of this in his work on conversational pragmatics. His Cooperative Principle (1975) established that speakers routinely leave context implicit, relying on listeners to infer the relevant scope from shared background knowledge. When Grice described his maxims — be informative, be truthful, be relevant, be clear — he was documenting the mechanics of how humans encode and decode scope without saying it out loud. Most everyday communication works this way. The trouble starts when implicit scope crosses contexts: when advice meant for one audience, one situation, or one timeframe gets applied to another without anyone noticing the scope has shifted.
Simpson's paradox: when the data itself contradicts across scopes
Simpson's paradox is not a curiosity from statistics textbooks. It is the formal demonstration that aggregation can manufacture contradictions that do not exist in the underlying data.
The Berkeley case is the most cited example, but the pattern replicates across domains. In a well-known study of kidney stone treatments, Treatment A showed higher success rates than Treatment B for both small stones and large stones — but Treatment B showed a higher success rate when the data was combined. The explanation: Treatment B was disproportionately used on small stones (which have higher success rates regardless of treatment), inflating its aggregate numbers. Disaggregate by stone size, and the "contradiction" disappears.
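The reversal is easy to verify with arithmetic. A minimal sketch, using the success counts commonly cited for that kidney stone study:

```python
# Success counts per (treatment, stone size), as commonly cited
# from the kidney stone study: (successes, total).
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

# Disaggregated scope: Treatment A wins for BOTH stone sizes.
for size in ("small", "large"):
    a_s, a_n = data[("A", size)]
    b_s, b_n = data[("B", size)]
    print(f"{size} stones: A={a_s/a_n:.0%}, B={b_s/b_n:.0%}")
    assert a_s / a_n > b_s / b_n

# Aggregate scope: collapse stone size, and Treatment B appears to win.
overall = {}
for t in ("A", "B"):
    s = sum(data[(t, size)][0] for size in ("small", "large"))
    n = sum(data[(t, size)][1] for size in ("small", "large"))
    overall[t] = s / n
print(f"overall: A={overall['A']:.0%}, B={overall['B']:.0%}")
assert overall["B"] > overall["A"]  # the reversal at the aggregate scope
```

The reversal happens because the lurking variable (stone size) is unevenly distributed across treatments: Treatment B handled mostly small stones, which succeed more often under any treatment.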
In every case of Simpson's paradox, the resolution is the same: identify the lurking variable that defines the scope. The aggregate claim ("Treatment B is better overall") and the disaggregated claim ("Treatment A is better for each subgroup") are not contradictions. They are answers to different questions, operating at different scopes. The paradox exists only when you fail to specify which scope you mean.
Judea Pearl, who has done more than anyone to formalize causal reasoning in statistics, makes the point directly: the paradox disappears once you make the causal structure explicit. The problem is not in the data. It is in analyzing the data without stating the scope of the causal question you are asking. Strip the scope, and the data appears to contradict itself. Restore the scope, and coherence returns.
The ecological fallacy: scope confusion between groups and individuals
In 1950, sociologist William S. Robinson published an analysis that shattered a common statistical practice. Using 1930 U.S. Census data, he examined the relationship between race and illiteracy. At the state level, the correlation between a state's percentage of Black residents and its percentage of illiterate residents was 0.77, a strong positive relationship. But at the individual level, the correlation between being Black and being illiterate was only 0.20.
Even more striking: the state-level correlation between being foreign-born and being illiterate was negative (r = -0.53), while the individual-level correlation was positive (r = 0.12). The ecological data did not just exaggerate the individual relationship. It reversed it.
Robinson demonstrated that ecological correlations (relationships measured at the group level) cannot be used as substitutes for individual correlations. The term "ecological fallacy," coined by Selvin in 1958 to describe this error, names one of the most common scope confusions in reasoning: assuming that what is true of a group is true of the individuals within it.
This is not an abstract problem. It shapes how people reason about politics (wealthier states vote Democratic, but within those states, wealthier individuals tend to vote Republican), health (countries with higher fat consumption have higher breast cancer rates, but within those countries, individual fat consumption does not reliably predict individual breast cancer), and organizational behavior (teams with more experienced members perform better on average, but adding experience to a specific team may not help if the bottleneck is coordination, not knowledge).
The pattern in every case: a true statement at one scope (the group) becomes a false statement at another scope (the individual), or vice versa. The claims are not contradictions. They are measurements at different levels of analysis, and confusing the two is a scope error.
Why "best practices" contradict each other across domains
The most practical version of scope confusion shows up in expert advice. Every field has its canonical wisdom, and canonical wisdom from different fields frequently contradicts.
Lean manufacturing says eliminate waste and reduce inventory. Disaster preparedness says stockpile inventory and build redundancy. Financial advisors say diversify your portfolio. Venture capitalists say concentrate your bets. Therapists say process your feelings. Stoics say choose your response rather than letting your feelings choose it for you.
These are not disagreements between smart people who should know better. They are recommendations calibrated to different scopes — different risk profiles, different time horizons, different populations, different optimization targets. Lean manufacturing optimizes for efficiency in stable environments with reliable supply chains. Disaster preparedness optimizes for resilience in unstable environments where supply chains fail. The contradiction exists only if you strip the scope and treat both as universal advice.
The problem of domain-specific transfer — why "best practices" from one field fail when imported to another — is fundamentally a scope problem. The practice was best within its original scope. The failure happened because the scope changed and nobody updated the boundary conditions. Researchers studying the translation of evidence into practice have identified this as a core challenge: findings from specific contexts get generalized to universal principles, and then practitioners try to apply those principles in contexts the original research never studied. Each step across that chain is a scope shift that nobody makes explicit.
Kahneman's framing: scope as the width of your lens
Daniel Kahneman, in Thinking, Fast and Slow, describes humans as "narrow framers" — people who evaluate each decision in isolation rather than as part of a broader set of decisions. Narrow framing is a scope restriction you apply to yourself without realizing it.
His classic example: you are offered a gamble with a 50% chance to win $200 and a 50% chance to lose $100. Most people reject this single gamble. But if the same gamble is offered 100 times, the expected value is +$5,000 with negligible risk of net loss. The rational broad-framed answer is obvious: take the repeated gamble. The narrow-framed answer, evaluating each instance as if it were the only one, leads you to reject a massively positive-value opportunity.
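The arithmetic behind the broad frame is short enough to compute exactly; Python's math.comb gives the binomial terms:

```python
from math import comb

# Kahneman's gamble: 50% chance to win $200, 50% chance to lose $100.
win, lose, p = 200, -100, 0.5

# Narrow scope: a single gamble.
ev_single = p * win + (1 - p) * lose
print(f"single gamble EV: ${ev_single:.0f}")

# Broad scope: a policy of taking the gamble 100 times.
n = 100
print(f"{n} gambles EV: ${n * ev_single:.0f}")

# Net result with w wins is 200*w - 100*(n - w), so a net loss
# requires w <= 33 wins out of 100 fair coin flips.
p_net_loss = sum(comb(n, w) for w in range(34)) / 2**n
print(f"P(net loss over {n} gambles) = {p_net_loss:.4%}")  # well under 0.1%
```

The expected value per gamble is $50, so the repeated policy expects $5,000, and the binomial tail confirms that a net loss over 100 plays is a sub-0.1% event.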
This is scope disambiguation applied to your own decision-making. The narrow framer asks, "Should I take this gamble?" The broad framer asks, "What is my policy for gambles with this risk/reward structure?" Same situation. Different scope. Different — and contradictory — answers. Kahneman's recommendation is explicit: "See the decision as a member of a class of decisions that you'll probably have to take." In other words, widen the scope before evaluating.
The implication extends beyond finance. Narrow framing explains why a single bad customer interaction leads to an emotional policy change, why one failed hire triggers an overhaul of the entire recruiting process, why a single day of poor productivity convinces you that your entire system is broken. In each case, you are drawing a conclusion at a scope (one instance) that does not support the inference you are making (a general pattern). The contradiction between "my system works" and "today was terrible" is a scope mismatch, not a genuine conflict.
Scope in language: the ambiguity you parse without noticing
Linguists have studied scope disambiguation as a formal property of natural language for decades. The sentence "Every student read a book" has two readings depending on the scope of the quantifiers. Wide scope for "a book": there is one specific book that every student read. Wide scope for "every student": each student read some book, potentially different ones. Same words, two meanings, determined entirely by scope.
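The two readings are nested quantifiers, and they can be checked against a tiny model; the student and book names here are invented for illustration:

```python
# A toy model: hypothetical students, books, and a "read" relation.
students = {"ann", "bo", "cy"}
books = {"b1", "b2", "b3"}
read = {("ann", "b1"), ("bo", "b2"), ("cy", "b3"), ("bo", "b1")}

# Surface scope (every > a): each student read some book, possibly different.
surface = all(any((s, b) in read for b in books) for s in students)

# Inverse scope (a > every): one specific book was read by every student.
inverse = any(all((s, b) in read for s in students) for b in books)

print(surface)  # True: everyone read something
print(inverse)  # False: no single book was read by all three
```

Swapping the order of `all` and `any` is exactly the scope swap: the same relation makes one reading true and the other false.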
English speakers resolve these ambiguities unconsciously, using syntactic position, world knowledge, and conversational context. Research on quantifier scope processing (Tunstall, 1998; Anderson, 2004) suggests that listeners default to the surface-scope reading (the order the quantifiers appear) and only compute the inverse-scope reading when context forces it. This means your default interpretation of a statement assigns scope based on word order — not based on what the speaker actually meant. Scope errors in everyday reasoning follow the same pattern: you interpret a claim using the default scope rather than the scope the speaker intended.
This has become an active area of research in AI and large language models. A 2024 paper in Transactions of the Association for Computational Linguistics found that LLMs struggle with scope ambiguities in systematic ways — they tend to default to surface-scope interpretations and fail to reliably compute inverse-scope readings. If machines trained on billions of sentences have trouble with scope, it is not surprising that humans, reasoning in real time with limited working memory, make scope errors constantly.
The disambiguation protocol
Scope disambiguation is not a talent. It is a procedure you can apply to any apparent contradiction. When two claims conflict, run three checks before concluding they are genuinely irreconcilable:
1. Who is this about? Check whether the two claims apply to the same population. "Debt is dangerous" and "debt is a growth tool" are both true — the first for individuals without cash reserves, the second for businesses with reliable revenue streams. Different populations, different scopes.
2. Under what conditions? Check whether the two claims assume the same environment. "Centralized decision-making is more efficient" and "distributed decision-making produces better outcomes" are both well-supported — the first under conditions of low complexity and high information symmetry, the second under conditions of high complexity and distributed expertise. Different conditions, different scopes.
3. Over what timeframe? Check whether the two claims measure the same interval. "This restructuring is hurting performance" and "this restructuring will improve performance" are frequently both true — the first over the next quarter, the second over the next three years. Different timeframes, different scopes.
If any of these checks reveals a scope difference, you do not have a contradiction. You have two claims that were true all along, separated by a boundary condition that neither speaker made explicit. Label the scopes. The "contradiction" resolves.
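The three checks can even be written down as a routine. This is a minimal sketch; the field names (population, conditions, timeframe) mirror the checks above and are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    text: str
    population: str   # check 1: who is this about?
    conditions: str   # check 2: under what conditions?
    timeframe: str    # check 3: over what timeframe?

def scope_differences(a: Claim, b: Claim) -> list[str]:
    """Return the scope dimensions on which two claims diverge."""
    return [dim for dim in ("population", "conditions", "timeframe")
            if getattr(a, dim) != getattr(b, dim)]

# The debt example from check 1, with illustrative scope labels.
debt_risky = Claim("debt is dangerous",
                   "individuals without cash reserves",
                   "volatile income", "short term")
debt_tool = Claim("debt is a growth tool",
                  "businesses with reliable revenue",
                  "stable cash flow", "multi-year")

diffs = scope_differences(debt_risky, debt_tool)
if diffs:
    print(f"not a contradiction; scopes differ on: {', '.join(diffs)}")
else:
    print("same scope on all three checks: possibly a real conflict")
```

An empty result from all three checks is the signal that you may be facing a genuine trade-off rather than a scope mismatch.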
The limit: not every contradiction is a scope problem
There is a failure mode worth naming. Once you learn scope disambiguation, it is tempting to dissolve every tension you encounter by finding clever scoping that makes both sides true. This can become an avoidance strategy — a way to never take a position, never choose between genuinely competing values, never confront a real trade-off.
Some contradictions are real. "I want maximum career growth" and "I want to be home every evening by 5:30" may not be a scope problem. They may be a genuine conflict between two things you value that cannot both be fully satisfied in the same life. L-0366 established that some tensions must be managed rather than resolved. Scope disambiguation is the first tool to reach for — but not the only one, and not always the right one.
The discipline is knowing the difference. When scope disambiguation reveals that two claims operate in different contexts, you have gained clarity without losing either claim. When scope disambiguation requires implausible contortions to make both sides simultaneously true, you are likely facing a real trade-off that needs a different resolution strategy — one that L-0368 and the lessons that follow will address.
From scope to level
This lesson addressed the most common source of false contradictions: claims that are each true in different contexts, for different populations, or over different timeframes. But there is a related and subtler form of scope confusion — contradictions that arise because two claims operate at different levels of abstraction. What is true of the system may not be true of the component. What is true of the average may not be true of the distribution. What is true of the model may not be true of the territory.
L-0368, Level disambiguation, takes this further. Where this lesson asked "are these claims about the same scope?", the next asks "are these claims about the same level?" The distinction matters because level confusions are harder to detect — they feel like the same scope but operate on different planes of analysis. Scope disambiguation is the first move. Level disambiguation is the second.