Two people can both be right about opposite things
"Free will exists" and "every event has a prior cause" have fueled centuries of philosophical argument. "Competition drives progress" and "cooperation drives progress" generate boardroom debates that never resolve. "Individuals are responsible for their outcomes" and "systems determine individual outcomes" fracture entire political movements.
These feel like head-on collisions between incompatible claims. And sometimes they are. But a surprising number of apparent contradictions — in philosophy, in organizations, in your own belief system — dissolve once you ask a single question: are these claims operating at the same level of abstraction?
The previous lesson, L-0367, introduced scope disambiguation — the idea that what seems contradictory is often two statements true in different contexts. Level disambiguation goes one layer deeper. It's not just that the statements apply in different situations. It's that they describe different layers of the same system, and what's true at one layer may be false, meaningless, or irrelevant at another.
This is one of the most powerful contradiction-resolution tools you'll encounter. And it has a formal history going back over a century.
Russell's type theory: the original level error
In 1901, Bertrand Russell discovered a paradox that nearly broke the foundations of mathematics. Consider the set of all sets that do not contain themselves. Does this set contain itself? If it does, then by definition it shouldn't. If it doesn't, then by definition it should. The paradox is airtight — within its own framing.
Russell's solution, developed formally in his 1908 paper "Mathematical Logic as Based on the Theory of Types," was to recognize that the paradox arises from a level confusion. Statements about individuals belong to one type. Statements about sets of individuals belong to a higher type. Statements about sets of sets belong to a higher type still. The paradox occurs because the formulation tries to apply a statement to its own level — it asks a set to evaluate its own membership, which is an operation that crosses type boundaries.
The vicious circle principle that Russell articulated states that whatever involves all of a collection must not itself be one of that collection: no propositional function can be defined in terms of a totality that includes the function itself. In plain language: you can't use a statement to evaluate itself without generating nonsense.
This matters far beyond mathematics. Russell's insight was that a huge class of paradoxes — including many that masquerade as genuine contradictions — are actually type errors. They mix levels. And the fix isn't to choose a side. The fix is to recognize that the claims live at different levels and stop forcing them into the same frame.
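In an untyped language you can write the self-application directly and watch it fail, which makes the type error tangible. Here is a minimal sketch that models a "set" as a membership predicate (a function); the names are illustrative, not standard:

```python
# Model a "set" as a predicate: a function that answers membership questions.
def russell(predicate):
    """Membership test for 'the set of all sets that do not contain themselves'."""
    return not predicate(predicate)

# Asking whether the Russell set contains itself applies the predicate to
# itself. The evaluation never bottoms out -- the runtime analogue of
# crossing a type boundary.
try:
    russell(russell)
except RecursionError:
    print("level error: the definition refers to its own level")
```

A statically typed language would reject `predicate(predicate)` at compile time rather than at runtime, which is roughly Russell's fix: make level-crossing statements unwritable.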
Korzybski's structural differential: abstraction has layers
Alfred Korzybski, working in the 1930s, built an entire discipline — general semantics — around the insight that human cognition operates through multiple levels of abstraction, and that confusing those levels is the source of most semantic confusion.
His famous dictum, "the map is not the territory," is usually cited as a reminder that our representations of reality differ from reality itself. But Korzybski meant something more structural. He created a physical teaching device called the structural differential to demonstrate that abstraction happens in stages, and that each stage leaves things out.
At the bottom is the event level — the sub-microscopic, dynamic reality of what's actually happening. Above that is the object level — what your senses can perceive. Above that is the descriptive level — what you can put into words. Above that are higher-order abstractions — inferences, generalizations, theories. Each level is an abstraction of the level below it, and each level necessarily omits information that was present at the prior level.
The critical point: a statement that is accurate at one level of abstraction may be misleading or false at another. "Water is wet" is true at the object level (your sensory experience). At the molecular level, individual H2O molecules have no property called "wetness" — wetness is a macro-level phenomenon that emerges from the interaction of many molecules with a surface. Neither statement is wrong. They operate at different levels of the abstraction hierarchy.
When two people argue about whether something "is really true," they are often arguing across levels without realizing it. One person is describing their experience (object level), the other is describing the mechanism (molecular level), and the disagreement is structural, not factual.
Bateson's learning levels: where contradiction drives transformation
Gregory Bateson formalized the level distinction in a different domain — learning itself. In "The Logical Categories of Learning and Communication" (1964), Bateson proposed a hierarchy that has become foundational in systems thinking.
Learning 0 is zero-order change: a fixed response to a fixed stimulus. No learning occurs. The system does the same thing every time.
Learning I is first-order change: you adjust your response based on feedback. You learn that pressing a button produces food, or that a specific email subject line gets higher open rates. This is trial-and-error correction within a fixed context.
Learning II is second-order change: you learn the pattern of the context itself. You don't just learn that this button gives food — you learn that you're in a "button gives food" type of situation, and you bring that contextual understanding to new situations. Bateson called this "deutero-learning." It's learning to recognize what kind of game you're playing.
Learning III is third-order change: you examine and revise the frameworks that govern Learning II. You don't just learn contexts — you question whether your way of categorizing contexts is itself correct. Bateson argued that Learning III is rare, disorienting, and typically triggered by contradictions that arise within Learning II — when your framework for interpreting contexts produces conflicting interpretations.
The level distinction matters here because a statement about behavior (Learning I) and a statement about the framework governing that behavior (Learning II) can genuinely contradict each other — and both be correct at their respective levels. "I always keep my promises" (Learning I — a behavior pattern) can coexist with "I unconsciously select commitments I know I can keep, avoiding risky ones" (Learning II — a contextual pattern that governs which promises get made). The behavior is consistent. The framework producing the behavior tells a different story. Neither level invalidates the other. They describe different layers of the same system.
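The gap between Learning I and Learning II can be made concrete with a toy agent. Everything here is invented for illustration (the environment, the action names, the reward rule); the point is only that trial-and-error tuning and context recognition are different operations:

```python
import random

random.seed(2)
ACTIONS = ["press", "wait"]

def reward(context_type, action):
    # Hypothetical environment: in "button" contexts, pressing pays off.
    return 1 if (context_type == "button" and action == "press") else 0

# Learning I: trial-and-error correction within one fixed context.
scores = {a: 0 for a in ACTIONS}
for _ in range(50):
    a = random.choice(ACTIONS)
    scores[a] += reward("button", a)
best = max(scores, key=scores.get)  # learned: "press" works here

# Learning II: what transfers is the contextual pattern, not the raw trials.
# Recognizing a new situation as a "button gives food" type of situation
# lets the agent act correctly on the first try, with no new exploration.
policy_by_type = {"button": best}
first_try = policy_by_type["button"]
print(first_try)
```

A claim about `best` (a behavior) and a claim about `policy_by_type` (the framework that selects behaviors) describe different layers of the same agent.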
Emergence: when levels genuinely diverge
The deepest reason level disambiguation matters is that complex systems routinely produce properties at higher levels that do not exist at lower levels. This is emergence — and it means that claims about different levels of a system can be not just contextually different but genuinely irreducible to one another.
A single ant follows simple chemical-gradient rules. An ant colony exhibits sophisticated architecture, agriculture, and warfare. Describing ant behavior at the individual level produces statements like "ants follow pheromone trails." Describing ant behavior at the colony level produces statements like "the colony allocates foragers based on food availability." Both are true. Neither reduces to the other. The colony-level behavior is not merely the sum of individual behaviors — it emerges from their interaction in ways that cannot be predicted from the individual rules alone.
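A toy pheromone-reinforcement model shows the two levels diverging in a few lines. All constants here are made up for illustration: two trails to the same food source, with shorter round trips depositing pheromone faster.

```python
import random

random.seed(0)
LENGTH = {"short": 2, "long": 4}          # round-trip lengths (arbitrary units)
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.02

for _round in range(300):
    deposits = {"short": 0.0, "long": 0.0}
    for _ant in range(100):
        # Micro level: each ant reads only local pheromone concentration.
        # No ant ever measures or compares path lengths.
        total = pheromone["short"] + pheromone["long"]
        trail = "short" if random.random() < pheromone["short"] / total else "long"
        # Shorter round trips mean more deposits per unit time.
        deposits[trail] += 1.0 / LENGTH[trail]
    for trail in pheromone:
        pheromone[trail] = (1 - EVAPORATION) * (pheromone[trail] + deposits[trail])

# Macro level: the colony has "allocated" nearly all foragers to the
# shorter trail -- a claim that is true of no individual ant.
print(pheromone["short"] > pheromone["long"])
```

The micro-level description ("ants follow pheromone gradients") and the macro-level description ("the colony selects the shorter path") are both accurate, and neither is a restatement of the other.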
Research on causal emergence (Hoel et al., 2013; Rosas et al., 2024) has formalized this intuition. In certain systems, macro-level descriptions carry more causal information than micro-level descriptions. The higher level isn't just a convenient summary — it's where the actual causal structure lives. Trying to explain the macro in purely micro terms loses information rather than gaining it.
This has a direct implication for contradiction resolution: when two claims operate at different levels of an emergent system, forcing them into the same frame isn't just imprecise — it's structurally wrong. "Individual neurons don't understand language" and "GPT-4 produces coherent paragraphs" are not in tension. They describe different levels of a system where the higher level has properties the lower level doesn't.
AI systems: a modern laboratory for level confusion
Large language models provide a vivid, contemporary example of how level confusion generates apparent contradictions.
At the token level, an LLM predicts the next token based on statistical patterns in its training data. At the attention-head level, specific heads implement identifiable algorithms — induction heads that copy sequences, attention patterns that track syntactic structure. At the layer level, early layers encode broad distributional features while later layers refine toward precise outputs. And at the model-behavior level, the system exhibits capabilities — reasoning, translation, code generation — that don't obviously map to any single lower-level component.
Mechanistic interpretability research (Elhage et al., 2022; Conmy et al., 2023) has shown that individual attention heads and layers implement interpretable sub-computations. But the model's overall behavior — what it can and can't do, where it fails, what it "understands" — is an emergent property of the interaction of millions of such components. You cannot look at a single attention head and predict that the model will be able to write a sonnet.
This is why debates about whether LLMs "really understand" language are often level confusions. At the token level, the model is doing next-token prediction — a statistical operation with no semantic content. At the behavioral level, the model produces outputs that are functionally indistinguishable from understanding in many contexts. Both descriptions are accurate. The apparent contradiction exists because the debaters are describing different levels of the same system and treating those descriptions as competing claims about one thing.
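A bigram Markov chain, vastly simpler than a transformer but useful for the same point, makes the two levels of description concrete. The corpus and the generation scheme below are invented for illustration:

```python
import random
from collections import defaultdict

random.seed(1)
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Token level: nothing but next-token counts. No semantics anywhere.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

def generate(start, n):
    out = [start]
    for _ in range(n):
        # Pure statistical operation: sample an observed successor.
        out.append(random.choice(model.get(out[-1], ["the"])))
    return " ".join(out)

# Behavior level: the output reads like English phrases, a property
# that appears nowhere in the mechanism's description.
print(generate("the", 8))
```

"The model just samples successor counts" and "the model produces grammatical phrases" are both true, at different levels; scaled up by many orders of magnitude, the same structure underlies the "does it really understand?" debate.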
The lesson for your own thinking: whenever you find yourself in a "does X really Y?" debate — does AI really think, does free will really exist, do corporations really have intentions — check whether the disagreement is about the phenomenon or about the level of description being used.
The level-collapse failure mode
The most common error is what you might call level collapse: treating a multi-level system as if all claims about it must be consistent within a single level.
Level collapse happens constantly in organizational life. A CEO says "our culture is collaborative" (identity level). A team lead says "people hoard information" (behavior level). A consultant says "the incentive structure rewards individual performance" (systems level). All three are accurate descriptions of the same organization at different levels. But if you collapse them into a single-level debate — "is our culture collaborative or not?" — you get an unresolvable argument where everyone has evidence and no one can persuade anyone else.
It happens in personal epistemology too. You believe "I'm a generous person" (identity level) and simultaneously notice "I rarely donate money" (behavior level). Treated at the same level, this is a damning contradiction. Treated at their actual levels, it's a diagnostic: your identity-level belief and your behavior-level pattern are misaligned, which tells you where to investigate — not which claim to discard.
Robert Dilts, building on Bateson's work, formalized a model of neurological levels — environment, behavior, capabilities, beliefs, identity, purpose — where each level organizes and constrains the levels below it. The model's practical value isn't its specific level names (which are debatable) but its structural claim: changes at higher levels cascade downward, but changes at lower levels don't necessarily propagate upward. A behavior change doesn't necessarily shift a belief. A belief change necessarily shows up in behavior. Understanding which level a claim operates at tells you what kind of intervention — or what kind of resolution — is needed.
How to disambiguate levels in practice
When you encounter an apparent contradiction, run this protocol:
1. Name both claims explicitly. Write them down. Vague senses of contradiction resist level analysis. You need concrete statements.
2. Ask: what system is each claim describing? Not just "what topic" but "what layer of that topic." Is this about an individual's behavior or a system's structure? About what something does or what something is? About a local pattern or a global property?
3. Check for emergence. Could both claims be true if they describe different levels of a system where higher-level properties don't reduce to lower-level ones? If yes, you don't have a contradiction — you have a multi-level description.
4. Check for type errors. Is one claim being used to evaluate or refute the other in a way that crosses levels? "Individual atoms aren't alive" doesn't refute "organisms are alive." "Single neurons don't think" doesn't refute "brains produce thought." If the refutation requires crossing levels, it's a type error, not a valid rebuttal.
5. If the claims do operate at the same level, you have a real contradiction. Level disambiguation doesn't dissolve all conflicts. Some claims genuinely collide within a single level. The value of checking levels first is that it filters out the large class of pseudo-contradictions, so you can focus your cognitive resources on the contradictions that actually require resolution.
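The protocol's branching logic can be sketched as a tiny classifier. Note the honest limitation: the level labels are judgments the analyst supplies in steps 1 through 4; code can only apply the final comparison.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    level: str  # analyst-assigned layer, e.g. "identity", "behavior", "systems"

def diagnose(a: Claim, b: Claim) -> str:
    # Steps 1-2 happen before this runs: both claims are written down
    # and each is assigned the layer of the system it describes.
    if a.level != b.level:
        # Steps 3-4: different layers of one system. This is a multi-level
        # description, not a collision; cross-level refutation is a type error.
        return "pseudo-contradiction (different levels)"
    # Step 5: same level, so a genuine conflict needing real resolution.
    return "real contradiction (same level)"

print(diagnose(
    Claim("our culture is collaborative", "identity"),
    Claim("people hoard information", "behavior"),
))
```

Run against the organizational example from earlier, the CEO's identity-level claim and the team lead's behavior-level claim come back as a pseudo-contradiction, which is exactly the filter the protocol is meant to provide.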
When levels don't dissolve the contradiction
Not every apparent contradiction is a level confusion. "The universe had a beginning" and "the universe has always existed" operate at the same level (cosmological description) and represent a genuine empirical disagreement. "Vaccines cause autism" and "vaccines don't cause autism" operate at the same level (epidemiological claim) and one is simply wrong.
Level disambiguation is a filter, not a universal solvent. Its power is proportional to how often you encounter multi-level systems — which, in practice, is nearly always. Organizations, minds, economies, software architectures, ecosystems, and belief systems are all multi-level. The majority of persistent disagreements about these systems involve some degree of level confusion. Clearing that confusion doesn't resolve everything, but it eliminates the class of contradictions that were never real in the first place.
From levels to time
This lesson addresses contradictions that arise from confusing the layers of a system. The next lesson — L-0369, Time disambiguation — addresses contradictions that arise from confusing the moments of a system. Something can be true now and have been false before. A claim that was correct in 2015 may be incorrect in 2026. Just as level disambiguation asks "which layer?", time disambiguation asks "which moment?" Together, they form two of the most powerful filters in your contradiction-resolution toolkit: check the level, then check the time. What remains after both filters is where the real intellectual work begins.