You didn't choose most of the authorities you obey
Right now, a handful of voices shape the majority of your beliefs, decisions, and reactions. You could probably name two or three of them if pressed. But the full list — every podcast host, journalist, algorithm, institution, friend, parent, mentor, AI assistant, and anonymous commenter whose judgments you absorb and act on — that list is much longer than you think, and you almost certainly never sat down and chose it deliberately.
This is the default condition. Nobody hands you a form on your eighteenth birthday asking you to list every source you'll grant epistemic authority to for the next decade. Instead, authority accumulates through proximity, repetition, emotional resonance, and social proof. The voices you hear most often become the voices you trust most, not because you evaluated them and found them worthy, but because familiarity feels like reliability.
An authority audit changes this. It is the practice of making every unconscious authority delegation visible, writing it down, and evaluating whether each delegation is still warranted — or whether it ever was.
What cognitive authority actually is
Patrick Wilson introduced the concept of cognitive authority in his 1983 book Second-hand Knowledge: An Inquiry into Cognitive Authority. Wilson drew a sharp distinction between two kinds of knowing: what you learn from first-hand experience and what you accept second-hand from others. Cognitive authority is the influence you recognize as proper — the sources you believe are credible and whose claims you treat as worthy of belief without independent verification.
A 2025 scoping review by Hirvonen in the Journal of the Association for Information Science and Technology, covering 25 years of empirical research on cognitive authority, identified six distinct facets of how people ascribe it: trustworthiness, reliability, scholarliness, credibility, officialness, and authoritativeness. Of these, trustworthiness emerged as the primary facet — people grant cognitive authority first on the basis of perceived trustworthiness, not demonstrated expertise.
This is a critical finding for anyone attempting to audit their own authority delegations. It means that the sources you trust most are likely the ones that feel most trustworthy to you — not necessarily the ones that have the strongest evidential track record. Warmth, likability, consistency of tone, and narrative skill all inflate perceived trustworthiness independent of actual accuracy.
The case for epistemic dependence — and its limits
Before you can audit your authorities responsibly, you need to understand why having authorities is not merely acceptable but necessary.
John Hardwig, in his landmark 1985 paper "Epistemic Dependence," made the uncomfortable argument that rationality sometimes requires refusing to think for yourself. His reasoning: the world is too vast, time too constrained, and individual intellect too limited for any person to gather first-hand evidence for more than a tiny fraction of their beliefs. Depending on epistemic authorities — people who know more about a subject than you do — is not intellectual weakness. It is the epistemically responsible response to your actual cognitive situation.
Hardwig went further in a 1991 follow-up, "The Role of Trust in Knowledge." There he argued that trust is often more epistemically basic than empirical evidence or logical argument. Since the reliability of testimony depends on the trustworthiness of the testifier, knowledge itself rests partly on an ethical foundation — the character of the people you depend on.
This means the authority audit is not an exercise in radical skepticism. You are not trying to eliminate dependence on others. You are trying to ensure that the dependencies you maintain are conscious, calibrated, and proportionate to actual expertise. The difference between sovereignty and credulity is not whether you trust — it is whether you chose whom to trust and why.
The three layers of unconscious delegation
Most authority delegations operate at one of three levels, each progressively harder to detect:
Layer 1: Named authorities. These are the sources you could identify if asked. Your doctor, your financial advisor, the columnist you read every morning, your partner's opinion on relationship matters. These are the easiest to audit because you're already aware of them. The question is whether your trust is proportionate to their actual expertise — and whether you've confused domain-specific competence with general wisdom. Your doctor may be excellent at diagnosing conditions and poor at evaluating nutritional research. Your financial advisor may understand portfolio allocation and be completely wrong about macroeconomics. Experts routinely trespass beyond their domains, and listeners rarely notice.
Layer 2: Ambient authorities. These are the sources that shape your beliefs through repeated exposure rather than deliberate consultation. The algorithmic feed that surfaces certain political framings daily. The industry newsletter whose assumptions you absorb without questioning. The Slack channel where a few vocal colleagues establish what counts as a reasonable opinion. The subreddit where community consensus acts as a proxy for truth. You didn't consciously delegate authority to these sources — they accumulated authority through sheer volume of contact.
Layer 3: Inherited authorities. These are the deepest and most invisible. The parent whose epistemology you absorbed before you could evaluate it. The religious tradition whose metaphysical assumptions still frame how you assess evidence — whether you practice or not. The cultural narrative about what counts as success, intelligence, or moral seriousness. The schooling system that taught you how to evaluate claims, shaping the very lens through which you assess other authorities. You cannot audit inherited authorities until you recognize that you have them.
How to conduct an authority audit
The audit itself is structured but not complicated. What makes it powerful is the act of externalization — taking what is implicit and making it available for inspection.
Step 1: Choose a single domain. Do not try to audit everything at once. Pick one area where your beliefs and decisions have high stakes: health, finances, career direction, child-rearing philosophy, political views, or technical practices. Constraining the scope makes the exercise manageable and increases the chance you'll actually finish it.
Step 2: List every source. Write down every person, institution, publication, platform, algorithm, and AI tool that has influenced your current beliefs in this domain. Include sources you consult actively (books, advisors, search queries) and sources that reach you passively (feeds, conversations, ambient culture). Be specific: not "social media" but "the three Twitter accounts I actually read on this topic." Not "my education" but "the specific professor or textbook that shaped how I think about this."
Step 3: Evaluate each source against three criteria. For each source on your list, answer:
- Basis of trust: Why do I trust this source? Is it because of demonstrated expertise, because of emotional resonance, because everyone around me trusts them, because they were the first source I encountered, or because an algorithm placed them in front of me repeatedly?
- Scope of expertise: What is this source actually qualified to speak about? Where does their competence end and their opinion begin? A brilliant machine learning researcher may have no special insight into AI policy. A successful founder may have survivorship bias masquerading as business wisdom.
- Verification recency: When did I last check this source's claims against independent evidence? A source you trusted five years ago may have shifted positions, lost credibility, or been superseded by better evidence without you noticing.
Step 4: Assign delegation levels. For each source, decide whether your current level of delegation is appropriate. Full delegation means you accept their claims without independent verification — appropriate only when you have strong evidence of their expertise and track record within a specific domain. Partial delegation means you treat their claims as a strong prior worth checking. Minimal delegation means you treat them as one data point among many.
Step 5: Identify mismatches. Look for sources where your actual delegation level exceeds what the evidence warrants. Look also for sources you under-trust — experts or institutions whose credentials and track record would justify more deference than you currently give them. Both patterns are common, and they often travel together: over-trusting charismatic non-experts while under-trusting credentialed but boring ones may be the most prevalent mismatch in modern information consumption.
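The five steps lend themselves to a simple record format. The Python sketch below is one possible way to externalize an audit; the sources, field names, and delegation ranking are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Delegation(Enum):
    FULL = "accept claims without independent verification"
    PARTIAL = "treat claims as a strong prior worth checking"
    MINIMAL = "treat claims as one data point among many"

@dataclass
class Source:
    name: str
    domain: str
    basis_of_trust: str            # Step 3: why do I trust this source?
    scope: str                     # Step 3: where does its competence end?
    months_since_verified: int     # Step 3: verification recency
    evidence_supports: Delegation  # the level its track record warrants
    granted: Delegation            # the level you currently give it

def mismatches(sources):
    """Step 5: split sources into over-trusted and under-trusted."""
    rank = {Delegation.MINIMAL: 0, Delegation.PARTIAL: 1, Delegation.FULL: 2}
    over = [s for s in sources if rank[s.granted] > rank[s.evidence_supports]]
    under = [s for s in sources if rank[s.granted] < rank[s.evidence_supports]]
    return over, under

# Hypothetical audit entries for a single domain (Step 1 and Step 2):
audit = [
    Source("podcast host", "economics", "narrative skill", "none demonstrated",
           60, Delegation.MINIMAL, Delegation.FULL),
    Source("cardiologist", "heart conditions", "demonstrated expertise",
           "cardiology", 6, Delegation.FULL, Delegation.PARTIAL),
]
over, under = mismatches(audit)
print([s.name for s in over])   # prints ['podcast host']
print([s.name for s in under])  # prints ['cardiologist']
```

Once the list exists, Step 5 reduces to a comparison: any source whose granted level outranks its evidence-supported level is a candidate for demotion, and the reverse pattern flags sources that deserve more deference than you give them.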
The AI authority problem
Large language models introduce a new category of authority delegation that existing evaluation frameworks were not designed for. When you ask an AI assistant a question and act on the answer, you are delegating cognitive authority to a system that has no expertise, no accountability, and no domain boundaries — yet communicates with the linguistic markers (structured reasoning, domain vocabulary, confident tone) that humans associate with expert authority.
Research on AI trust calibration reveals the mechanism. A 2025 study published in Theoretical and Applied Economics found that users progressively transfer epistemic authority from traditional sources to AI systems, and that over time, this transfer normalizes the AI's role as a primary information source, reducing the perceived need for independent verification. The paper identifies a specific risk: when models express 90%+ linguistic confidence, actual error rates run 20-30%. The gap between how confident an LLM sounds and how accurate it actually is represents a systematic miscalibration that human trust heuristics are not equipped to detect.
This means your authority audit must include AI tools as sources. If you use an AI assistant for research, drafting, decision support, or code review, it belongs on your list. The evaluation questions apply: Why do I trust this output? What is the scope of its actual reliability? When did I last verify its claims? The answer to the last question should give you pause, because most people verify AI outputs far less frequently than they verify human experts — despite AI having a higher base rate of confident error.
The productive response is not to stop using AI but to assign it the correct delegation level: partial at most, with systematic verification in any domain where errors carry real consequences.
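The calibration gap described above can be made concrete in a few lines. This is a toy sketch; the log of answers below is hypothetical, not data from the study:

```python
def calibration_gap(records):
    """records: (stated_confidence in [0, 1], was_correct) pairs.
    A positive gap means the source sounds more confident than it is accurate."""
    avg_confidence = sum(conf for conf, _ in records) / len(records)
    accuracy = sum(1 for _, correct in records if correct) / len(records)
    return avg_confidence - accuracy

# Hypothetical log: ten answers delivered at ~90% stated confidence,
# of which only seven were actually correct.
log = [(0.9, True)] * 7 + [(0.9, False)] * 3
print(round(calibration_gap(log), 2))  # prints 0.2
```

A human expert with that track record would usually start hedging; the point of tracking the gap explicitly is that an LLM's tone gives you no such signal, so the discount has to be applied by you.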
Why standard source-evaluation methods fall short
Information literacy education has traditionally relied on frameworks like the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose), developed by Sarah Blakeslee in 2004. These frameworks are useful starting points but insufficient for a genuine authority audit, for two reasons.
First, they evaluate individual claims or documents, not your pattern of delegation across an entire domain. Checking whether a single article is current and authoritative does not reveal that 80% of your beliefs about economics come from a single podcast host with no formal training.
Second, as Sam Wineburg and colleagues demonstrated in their 2020 Stanford research, checklist-based evaluation methods can actually make people more susceptible to misinformation by creating false confidence. Skilled propagandists can build sources that pass every CRAAP criterion. Wineburg's alternative — lateral reading, where you leave the source and check what others say about it — is closer to what an authority audit demands, because it requires you to evaluate the source itself rather than just the surface features of its output.
The authority audit goes further than lateral reading by asking you to evaluate not just whether a source is credible but whether you have given it the right amount of influence over your thinking relative to its actual scope of competence.
The Milgram lesson: authority bypasses deliberation
Stanley Milgram's obedience experiments, conducted in the 1960s and replicated across cultures for decades afterward, revealed something that most people interpret too narrowly. The standard reading is that people obey authority figures even when ordered to harm others. The deeper lesson — the one relevant to the authority audit — is about the mechanism by which authority operates.
Milgram showed that legitimate authority does not work by persuading you. It works by defining the situation for you. When a recognized authority provides a frame — "this is a learning experiment," "this is how experts do it," "this is the standard approach" — most people accept that frame and act within it without deliberating about whether the frame itself is valid. The authority does not override your judgment. It preempts your judgment by making deliberation feel unnecessary.
This is exactly what happens with your cognitive authorities. You don't evaluate your doctor's advice and then decide to follow it. You follow it because your doctor's status as a medical authority makes the evaluation step feel redundant. You don't fact-check the newsletter you trust. You absorb its framing because your trust in the source makes fact-checking feel like a waste of time. Authority doesn't suppress your thinking — it makes thinking feel optional.
The authority audit reintroduces the step that authority removed.
What changes after the audit
Three things shift when you make your authority delegations explicit:
Proportionality becomes possible. You can match your level of trust to a source's actual scope of competence. You can trust your cardiologist on heart conditions without trusting her opinions on nutrition research. You can value your mentor's career advice without deferring to his political views. This is not disrespect — it is recognizing that expertise has boundaries, and that treating someone as an authority across all domains is a category error.
Drift becomes detectable. Your information environment changes constantly. Algorithms adjust what you see. Trusted sources change their editorial direction. New evidence supersedes old positions. Without a periodic audit, your authority map becomes stale — you continue trusting sources that no longer warrant the level of trust you originally assigned, simply because the initial delegation was never revisited.
Courage becomes concrete. The successor to this lesson is about the emotional dimension of self-authority — the courage required to act on your own judgment when your authorities disagree. That courage is far easier to access when you know exactly who you are disagreeing with and why your own assessment might be better calibrated in this specific instance. Vague rebellion against "authority in general" is posturing. Specific, informed disagreement with a particular source on a particular claim in a domain where you've done your own work — that is sovereignty.
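The drift check in particular can be made mechanical: flag any delegation whose last independent verification has aged past your review interval. A minimal sketch, where the sources and dates are invented for illustration:

```python
from datetime import date, timedelta

def stale(last_verified, max_age_days=365):
    """True if a source's last independent verification is older
    than the chosen audit interval."""
    return date.today() - last_verified > timedelta(days=max_age_days)

# Hypothetical authority map: source -> date you last checked its claims
# against independent evidence.
authority_map = {
    "industry newsletter": date(2020, 3, 1),
    "cardiologist": date.today() - timedelta(days=90),
}

for name, last_checked in authority_map.items():
    if stale(last_checked):
        print(f"revisit: {name}")
# Only the newsletter is flagged: "revisit: industry newsletter"
```

The mechanism matters less than the habit: any delegation that cannot show a recent verification date is running on the original, possibly stale, grant of trust.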
Your third brain and the authority audit
AI tools can assist the audit process in ways that are genuinely useful — provided you remain clear about what they are and are not capable of.
An AI assistant can help you generate your source list by prompting you with categories you might have overlooked: "What sources shape your views on [domain]? Consider: formal advisors, informal mentors, media you consume daily, platforms whose consensus you absorb, family members, cultural narratives, AI tools themselves." The structured prompting can surface ambient and inherited authorities that you'd miss on your own.
AI can also help you research the credentials and track record of sources on your list — checking whether a particular author's claims are supported by peer-reviewed evidence, or whether an institution's recommendations align with current scientific consensus. This is lateral reading at scale, and it is one of the genuinely high-value uses of AI as a cognitive tool.
What AI cannot do is make the evaluation for you. The question "How much authority should I grant this source over my thinking?" is a judgment that requires weighing your own values, your risk tolerance, your specific context, and your personal epistemic standards. Outsourcing that judgment to an AI would be adding another unaudited authority to the very list you are trying to evaluate — the epistemic equivalent of asking the fox to audit the henhouse.
Use AI to expand the audit's inputs. Keep the evaluation itself in your own hands.
An authority audit is not a one-time event. It is a recurring practice — a systematic way to ensure that the voices shaping your mind are there because you evaluated them and chose them, not because they accumulated through proximity, habit, or algorithmic placement. The goal is not to trust less. It is to trust deliberately.