You are not running all your own cognitive processes
Right now, dozens of agents are shaping your behavior. Some of them live inside your head: the gut check that fires when a deal feels too good, the automatic suspicion when someone starts a sentence with "to be honest," the mental rule that says "never send an email when angry." These are internal agents. They run on neural hardware, fire based on pattern recognition, and operate with varying degrees of reliability.
But other agents shaping your behavior live entirely outside your skull. The calendar reminder that prompts your weekly review. The checklist taped to the wall above your desk. The automation that moves emails from a specific client into a priority folder. The phone alarm that tells you to stand up every ninety minutes. These are external agents. They run on software, paper, or environmental design. They encode the same trigger-condition-action structure as internal agents, but they execute on a different substrate.
The distinction matters because most people implicitly believe that "real" cognition happens inside the head — and everything else is just a reminder or a crutch. This belief is not just philosophically wrong. It is practically dangerous. It causes you to over-rely on the least reliable agent platform you have (your biological memory under stress) while under-utilizing the most reliable platforms available to you (tools, environments, and systems that do not forget, get tired, or lose focus).
The extended mind: your notebook is part of your cognition
In 1998, philosophers Andy Clark and David Chalmers published "The Extended Mind," one of the most cited papers in modern philosophy of mind. Their argument was direct: if an external object plays the same functional role as an internal cognitive process, it is part of your cognitive system. Not metaphorically. Literally.
Their famous thought experiment involves two characters. Inga wants to go to a museum. She thinks for a moment, recalls from biological memory that the museum is on 53rd Street, and walks there. Otto has Alzheimer's disease. He wants to go to the same museum. He consults his notebook, reads that the museum is on 53rd Street, and walks there. Clark and Chalmers argue that Otto's notebook plays the same functional role as Inga's biological memory. Both store a belief. Both are consulted when needed. Both produce the same action. The notebook is not an aid to Otto's cognition — it is part of his cognition.
This is the extended mind thesis: the boundary of your mind is not your skull. It is wherever your cognitive processes reliably execute. When you use a tool so fluently that it becomes transparent — you reach for it automatically, trust it implicitly, and rely on it consistently — that tool is a genuine component of your cognitive system.
The implications for agent design are immediate. An internal agent ("when I feel overwhelmed, I take three deep breaths and prioritize the top task") and an external agent ("when my task list exceeds ten items, Todoist highlights the top three and grays out the rest") are not different in kind. They are different in substrate. One runs on neurons. The other runs on software. Both are agents. Both are yours.
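The shared trigger-condition-action structure can be made concrete. Below is a minimal sketch in Python; all names (the `Agent` class, the task-list rule) are illustrative, not part of any real tool. The point is that the same data structure describes both substrates, and only the machinery that evaluates it differs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A trigger-condition-action rule, substrate-agnostic.

    The same shape describes an internal habit ("when overwhelmed,
    breathe and prioritize") and an external automation (a task-list
    rule running in software).
    """
    name: str
    trigger: str                       # the event that wakes the agent
    condition: Callable[[dict], bool]  # does the current context match?
    action: Callable[[dict], None]     # what the agent does when it fires

    def fire(self, event: str, context: dict) -> bool:
        """Run the action if both trigger and condition match."""
        if event == self.trigger and self.condition(context):
            self.action(context)
            return True
        return False

# An external version of the task-list agent described above:
# when the list exceeds ten items, surface only the top three.
triage = Agent(
    name="triage-overload",
    trigger="task_list_changed",
    condition=lambda ctx: len(ctx["tasks"]) > 10,
    action=lambda ctx: ctx.update(visible=ctx["tasks"][:3]),
)

ctx = {"tasks": [f"task-{i}" for i in range(12)]}
triage.fire("task_list_changed", ctx)
# ctx["visible"] now holds only the top three tasks
```

The internal version of the same agent is this structure encoded in neural pattern recognition rather than in explicit fields; that is the whole difference.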
Distributed cognition: the cockpit remembers
Edwin Hutchins took this further. In his 1995 book Cognition in the Wild and his landmark study "How a Cockpit Remembers Its Speeds," Hutchins showed that cognition in complex real-world systems is not located inside any single head. It is distributed across people, tools, and environmental structures.
In the cockpit study, Hutchins analyzed how pilots manage the different speeds required during approach and landing. The answer was not that pilots memorize these speeds. The cockpit system remembers them. Speed bugs — small movable markers on the airspeed indicator — are set during pre-flight preparation. During approach, pilots do not recall target speeds from memory. They read them off the instrument. As Hutchins wrote: "Speed bugs do not help pilots remember speeds; rather, they are part of the process by which the cockpit system remembers speeds."
This is not a metaphor. The cockpit, as a cognitive system, has memory, attention, and decision-making processes distributed across instruments, checklists, crew communication protocols, and individual pilot knowledge. No single component — including the pilot's brain — contains the full cognitive process. The system thinks.
You operate the same way, whether you recognize it or not. Your phone's calendar, your team's shared project board, the sticky note on your monitor, the physical layout of your workspace — these are not accessories to your thinking. They are components of a distributed cognitive system. The question is whether you designed that system deliberately or let it assemble itself by accident.
Internal agents: fast, flexible, fragile
Internal agents are mental rules, automatic responses, trained intuitions, and internalized procedures that fire inside your head without external support. They have genuine advantages.
Speed. An internal agent can fire in milliseconds. The experienced negotiator who senses a bluff does not consult a checklist. The pattern recognition is immediate, pre-conscious, and fast enough to shape a response in real time.
Flexibility. Internal agents can adapt to context in ways that rigid external systems cannot. Your internal "something is off about this person" agent integrates hundreds of subtle cues — tone, posture, word choice, timing — that no checklist could enumerate.
Portability. Internal agents go wherever you go. You do not need Wi-Fi, a notebook, or a charged battery. They are always available, always on.
But internal agents are also fragile in specific, predictable ways.
They degrade under cognitive load. When you are stressed, tired, or overwhelmed, your internal agents misfire, fail to fire, or fire on the wrong triggers. The rule "don't send angry emails" is an internal agent that fails precisely when you need it most — when you are actually angry.
They suffer from interference. Gollwitzer's research on implementation intentions (1999) showed that vague internal goals ("I want to exercise more") fail because they compete with other active goals and get crowded out. Only when a goal is reformulated as a specific if-then plan — "If it is 7am on a weekday, then I put on running shoes" — does it begin to fire reliably. But even these internal implementation intentions degrade over time without reinforcement.
They are invisible to audit. You cannot easily inventory your internal agents. You do not know how many you have, which ones are active, or which ones conflict with each other. They operate below conscious awareness, which makes them powerful but also unaccountable.
External agents: reliable, auditable, rigid
External agents are trigger-condition-action patterns embedded in tools, environments, or systems outside your head. They have complementary advantages.
Reliability. A calendar reminder fires every Friday at 4pm regardless of your mood, your stress level, or whether you remembered to think about it. It does not degrade under cognitive load. It does not get crowded out by competing priorities. It just fires.
Auditability. You can see your external agents. You can list them, review them, test them, and improve them. A checklist is an external agent you can inspect. An automation rule is an external agent you can version-control. This transparency makes external agents available for systematic improvement in a way that internal agents are not.
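What "auditable" means can be shown in a few lines. Assuming (hypothetically) that your external agents live in some enumerable registry, reviewing them is just a loop — something with no internal-agent equivalent.

```python
# Hypothetical registry of external agents. The rule names, trigger
# strings, and fields here are invented for illustration.
rules = {
    "friday-review":   {"trigger": "fri-16:00",        "last_fired": "2025-01-10"},
    "client-priority": {"trigger": "mail-from:client", "last_fired": None},
}

def audit(registry: dict) -> list[str]:
    """List rules that have never fired: candidates for repair or removal."""
    return [name for name, rule in registry.items() if rule["last_fired"] is None]

stale = audit(rules)  # ["client-priority"]
```

Try running the same inventory on your internal agents and the asymmetry is obvious: there is no registry to loop over.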
Shareability. External agents can be transferred between people. Atul Gawande's research on surgical checklists demonstrated that a simple external agent — a printed list of steps to complete before incision — reduced surgical complications by 36% and deaths by 47% across eight hospitals worldwide. The checklist externalized the "have we checked everything?" agent so it no longer depended on any individual surgeon's memory, fatigue level, or sense of confidence.
But external agents have their own limitations.
They are rigid. A calendar reminder does not know you are in the middle of a crisis and should skip the weekly review this one time. An automation rule processes the trigger exactly as programmed, without judgment.
They require maintenance. External agents that are not reviewed become noise. The reminder you ignore every week. The automation that fires on outdated conditions. Unmaintained external agents do not just fail — they erode your trust in all external agents, which pushes you back to relying on fragile internal ones.
They introduce dependency. Daniel Wegner's research on transactive memory systems (1985) showed that couples develop shared memory structures — one partner remembers financial details, the other remembers social commitments. This is powerful, but when the relationship ends, both partners experience a period of cognitive disruption. The same happens when you lose access to a tool that was carrying cognitive load for you. If your external agent disappears and you have no internal backup, the process breaks.
The AI parallel: on-device versus cloud
This internal-external distinction maps precisely onto one of the most important architectural decisions in modern AI: on-device inference versus cloud inference.
On-device AI (edge computing) runs models locally — on your phone, your laptop, your car's processor. It is fast, private, and works offline. But it is constrained by local hardware: smaller models, less memory, limited capability.
Cloud AI sends data to remote servers for processing. It can run massive models with near-unlimited compute. But it requires connectivity, introduces latency, and creates dependency on infrastructure you do not control.
The engineering community has learned that the answer is not "pick one." It is a hybrid architecture. You run lightweight, fast, reliable processes on-device for latency-sensitive decisions, and you route complex, resource-intensive processing to the cloud. A January 2025 study found that hybrid edge-cloud architectures for AI workloads can achieve energy savings of up to 75% and cost reductions exceeding 80% compared to pure-cloud processing.
Your cognitive architecture works the same way. Internal agents are your edge compute: fast, always available, good for pattern recognition and rapid response. External agents are your cloud: reliable, powerful, auditable, good for processes that require consistency, memory, and precision. The question is not whether to use one or the other. The question is which processes belong where.
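The routing logic behind a hybrid architecture is simple enough to sketch. The function below is an assumption-laden toy, not any real edge-cloud framework: it encodes only the principle that latency-sensitive, low-stakes work stays on the fast local path while consistency-critical work goes to the reliable remote path.

```python
# Toy router for the hybrid principle. Field names are illustrative.
def route(task: dict) -> str:
    """Send latency-sensitive, non-critical work to the edge; everything
    that demands reliability or heavy compute goes to the cloud."""
    if task["latency_sensitive"] and not task["reliability_critical"]:
        return "edge"   # internal agent / on-device model
    return "cloud"      # external agent / remote service

route({"latency_sensitive": True,  "reliability_critical": False})  # "edge"
route({"latency_sensitive": False, "reliability_critical": True})   # "cloud"
```

Read "edge" as internal agent and "cloud" as external agent and the same two-line policy describes the cognitive version of the architecture.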
The design principle: match substrate to requirement
Here is the practical framework. For any agent in your life, ask two questions:
How critical is reliability? If the consequence of the agent failing to fire is severe — a missed medication dose, a forgotten commitment to your team, a skipped safety check — externalize it. Do not trust critical processes to biological memory under load. Gawande's surgical checklists exist because the cost of an internal agent misfiring in an operating room is death.
How much does context-sensitivity matter? If the agent needs to read subtle social cues, adapt to ambiguous situations, or integrate information that cannot be formalized into explicit rules — keep it internal. Your "this person is not being honest with me" agent is a pattern recognizer that processes micro-expressions, tonal shifts, and conversational patterns simultaneously. No checklist replicates that.
Most agents benefit from a hybrid approach. You maintain an internal version for speed and flexibility, and you back it up with an external version for reliability and auditability. The internal agent handles the nuance; the external agent catches the cases where the internal one fails.
A pilot has internalized thousands of hours of flight procedure. That is a rich network of internal agents. But the pilot also runs checklists before every takeoff, consults instruments during approach, and follows crew communication protocols for every critical phase. The internal and external agents work as a system — each compensating for the other's weaknesses.
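The two-question framework can be collapsed into a small decision function. This is a sketch of the logic described above, with invented names; real decisions will weigh more factors than two booleans.

```python
def choose_substrate(reliability_critical: bool, context_sensitive: bool) -> str:
    """Apply the framework's two questions to pick a substrate."""
    if reliability_critical and context_sensitive:
        return "hybrid"    # internal for nuance, external as backstop
    if reliability_critical:
        return "external"  # checklist, reminder, automation
    if context_sensitive:
        return "internal"  # trained intuition, pattern recognition
    return "either"        # low stakes: use whatever is cheapest

choose_substrate(True, False)  # "external", e.g. a medication reminder
choose_substrate(False, True)  # "internal", e.g. sensing a bluff
choose_substrate(True, True)   # "hybrid", e.g. a pilot's procedures
```

The pilot case lands in the `"hybrid"` branch: thousands of hours of internalized procedure, backstopped by checklists and instruments.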
What this makes possible
When you stop treating "in your head" as the only legitimate location for cognition, your capacity expands in three ways:
You free up cognitive bandwidth. Every process you externalize to a reliable tool is working memory you reclaim for creative thought, complex reasoning, and genuine judgment — the work that actually requires a human brain.
You make your cognitive system auditable. You can now see what is running, test whether it is working, and improve what is failing. You cannot debug what you cannot see. External agents are visible by default.
You build resilience through redundancy. When critical agents run on both internal and external substrates, no single point of failure takes the system down. Your internal habit misfires on a bad day? The calendar reminder catches it. Your phone dies? The internalized practice carries you through.
The next lesson — the agent audit — gives you a systematic method for inventorying every agent currently running in your life, both internal and external, designed and default. You cannot optimize a system you have not mapped. Mapping starts with recognizing that the system extends beyond your skull.
Sources:
- Clark, A. & Chalmers, D. (1998). "The Extended Mind." Analysis, 58(1), 7-19.
- Hutchins, E. (1995). "How a Cockpit Remembers Its Speeds." Cognitive Science, 19(3), 265-288.
- Hutchins, E. (1995). Cognition in the Wild. MIT Press.
- Gollwitzer, P. M. (1999). "Implementation Intentions: Strong Effects of Simple Plans." American Psychologist, 54(7), 493-503.
- Wegner, D. M. (1985). "Transactive Memory: A Contemporary Analysis of the Group Mind." In B. Mullen & G. R. Goethals (Eds.), Theories of Group Behavior. Springer-Verlag.
- Gawande, A. (2009). The Checklist Manifesto. Metropolitan Books.
- Gollwitzer, P. M. & Sheeran, P. (2006). "Implementation Intentions and Goal Achievement: A Meta-Analysis." Advances in Experimental Social Psychology, 38, 69-119.