Audit beliefs for algorithmic origins — positions acquired through engineered exposure haven't passed your epistemic standards
Audit your beliefs for algorithmic origins: identify which positions trace primarily to repeated exposure within algorithmically curated environments. Beliefs acquired through engineered exposure rather than deliberate inquiry have never been tested against your own epistemic standards.
Why This Is a Rule
The mere exposure effect (Zajonc, 1968) demonstrates that repeated exposure to a stimulus increases liking, and the related illusory truth effect shows that repetition increases perceived truth, regardless of the stimulus's actual validity. Algorithmic content curation amplifies both effects by engineering exposure: algorithms show you content they predict you will engage with, creating feedback loops in which your existing inclinations are reinforced by repeated exposure to confirming content.
The result: you hold beliefs that feel like your own reasoned conclusions but were actually installed through engineered repetition. "I believe X" — but you believe X primarily because your feed showed you X-confirming content 200 times while showing X-disconfirming content 3 times. The belief was shaped by exposure ratios, not by deliberate evaluation of evidence.
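The exposure skew described above can be made concrete with a little arithmetic. A minimal sketch, using the hypothetical 200-vs-3 counts from the paragraph (not real data):

```python
# Illustrative arithmetic for a skewed exposure ratio.
# The counts are the hypothetical ones from the text, not measurements.
confirming = 200    # feed impressions supporting X
disconfirming = 3   # feed impressions challenging X

share = confirming / (confirming + disconfirming)
print(f"Confirming share of exposures: {share:.1%}")  # ~98.5%
```

Even a mild per-item preference, compounded by the feed at this ratio, yields an input stream that is almost entirely one-sided, which is why each exposure can feel independent while the aggregate signal is not.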
The audit identifies these algorithmically influenced beliefs by tracing their origins. A belief acquired through deliberate inquiry — reading multiple perspectives, evaluating evidence, reasoning through implications — has been subjected to your epistemic standards. A belief acquired through repeated exposure in algorithmic feeds has been subjected to an engagement-optimization algorithm's standards, which are uncorrelated with truth.
When This Fires
- During periodic belief audits (quarterly or semi-annually)
- When a strong opinion feels more like a reflex than a reasoned conclusion
- When you can't articulate the evidence for a belief you hold strongly
- When your positions align suspiciously well with the dominant narratives of your algorithmic feed
Common Failure Mode
Assuming all your beliefs are deliberate: "I formed this opinion through careful thought." Did you? Or did you form it through 200 exposures to the same perspective in your Twitter feed, each one feeling like an independent data point but actually being the same algorithmically amplified signal? The algorithmic origin is invisible because each individual exposure feels organic.
The Protocol
1. List your 10 strongest opinions on contested topics (politics, technology, culture, professional practices).
2. For each, trace the origin: "Where did I first encounter this position? What evidence did I evaluate? Did I seriously consider the opposing view?"
3. Flag beliefs where the honest answer is: "I saw it repeatedly on [platform], I engaged with similar content often, and I haven't seriously evaluated the opposing case." These are algorithmically influenced beliefs.
4. For each flagged belief, seek out the strongest opposing arguments. Your steel man isn't ready until its advocates say "Yes, that is exactly what I mean" (Rapoport's first rule of steel-manning). Evaluate those arguments with the same seriousness you'd give any other evidence.
5. If the belief survives deliberate scrutiny, keep it, now as a genuinely evaluated position. If it doesn't, update. Either way, the belief is now yours through deliberate inquiry rather than algorithmic installation.
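The steps above can be sketched as a small checklist script. Everything here (the class name, field names, and sample beliefs) is illustrative scaffolding for one possible way to run the audit, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class BeliefAudit:
    """One audited opinion; fields mirror steps 1-3 of the protocol."""
    position: str                   # step 1: the opinion itself
    first_encounter: str            # step 2: where it was first seen
    evidence_evaluated: bool        # step 2: did I evaluate evidence myself?
    opposing_case_considered: bool  # step 2: did I take the other side seriously?
    flagged: bool = field(init=False)

    def __post_init__(self) -> None:
        # Step 3: flag beliefs whose honest origin story is exposure, not inquiry.
        self.flagged = not (self.evidence_evaluated and self.opposing_case_considered)

# Hypothetical entries, for illustration only.
audit = [
    BeliefAudit("Remote work is strictly better", "Twitter feed",
                evidence_evaluated=False, opposing_case_considered=False),
    BeliefAudit("Tests catch regressions", "direct experience",
                evidence_evaluated=True, opposing_case_considered=True),
]

for belief in audit:
    # Step 4 for flagged beliefs; unflagged ones are kept as evaluated positions.
    action = "FLAG: seek strongest opposing arguments" if belief.flagged else "keep"
    print(f"{belief.position!r} -> {action}")
```

The flag is deliberately coarse: any belief missing either self-evaluated evidence or serious engagement with the opposing case goes to step 4, since the protocol treats "I can't remember checking" the same as "I never checked."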