You do not start from zero. You start from what already works.
L-0593 introduced portfolio rebalancing — the practice of periodically assessing whether your mix of active agents matches your current priorities. Sometimes rebalancing reveals a gap: you need a new agent that your portfolio does not contain. The instinct in that moment is to build from scratch. To sit down with a blank page and design a new habit, a new routine, a new behavioral pattern from nothing.
This instinct is wrong. It ignores the most valuable resource you possess — the library of working agents you have already built.
New agents can inherit properties and patterns from existing successful agents. This is not a metaphor. It is a design strategy with deep roots in software engineering, evolutionary biology, organizational theory, and behavioral science. The principle is the same across all these domains: when you need something new, do not reinvent what already works. Extract the proven components from existing systems and use them as the foundation for the new one.
Inheritance in software: the original pattern
The concept of inheritance was formalized in object-oriented programming in the 1960s and 1970s, reaching mainstream adoption through languages like Smalltalk, C++, and Java. In software, inheritance means that a new class can automatically acquire the properties and behaviors of an existing class. The child class starts with everything the parent class already has — its data structures, its methods, its validated behaviors — and then adds or modifies only what is specific to its new purpose.
The power of this pattern is obvious: the child class does not need to re-implement the functionality that the parent already proved. A new type of user interface button does not need to rewrite the code for rendering, responding to clicks, or managing its visual state. It inherits all of that from the generic button class and adds only the new behavior — perhaps a different color on hover, or a confirmation dialog before executing its action.
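The button example above can be sketched in a few lines of Python. The class and method names here are illustrative, not from any real UI toolkit: the child class reuses the parent's rendering and click plumbing and overrides only the one behavior that is new.

```python
class Button:
    """Generic button: rendering and click handling, already proven."""

    def __init__(self, label):
        self.label = label

    def render(self):
        return f"[{self.label}]"

    def click(self):
        return self.on_click()

    def on_click(self):
        return f"{self.label} activated"


class ConfirmButton(Button):
    """Inherits rendering and click plumbing from Button;
    adds only a confirmation step before acting."""

    def on_click(self):
        if self.confirm():
            return super().on_click()  # reuse the parent's proven behavior
        return "cancelled"

    def confirm(self):
        # Stand-in for a real confirmation dialog.
        return True
```

`ConfirmButton("Delete").click()` runs the confirmation first and then falls through to the inherited behavior; nothing in `Button` had to be rewritten.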
But the history of software inheritance also carries an important warning. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides — the "Gang of Four" who wrote Design Patterns: Elements of Reusable Object-Oriented Software in 1994 — observed that programmers consistently overuse inheritance. Their warning that "inheritance breaks encapsulation" describes what later became known as the fragile base class problem: when you inherit too deeply from a parent, changes to the parent break the child in unexpected ways. The child becomes so dependent on the parent's internal structure that it cannot evolve independently.
Their prescription became one of the most quoted principles in software engineering: "Favor object composition over class inheritance." Do not make the new thing a copy of the old thing with modifications. Instead, take the useful components from the old thing and compose them into the new thing. The distinction matters. Inheritance says "the new agent is the old agent plus changes." Composition says "the new agent uses components from the old agent." The second approach is more flexible, more resilient, and less likely to produce an agent that breaks when its parent evolves.
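The contrast can be made concrete with a small sketch (all class names and values here are illustrative). The inherited version stays permanently coupled to its parent's structure; the composed version borrows two proven components and is otherwise independent.

```python
class Trigger:
    def __init__(self, cue):
        self.cue = cue


class Environment:
    def __init__(self, setup):
        self.setup = setup


# Inheritance: the new routine IS the old routine plus changes --
# it remains coupled to everything the parent does.
class ExerciseRoutine:
    def __init__(self):
        self.trigger = Trigger("06:00 alarm")
        self.environment = Environment("notification-free")


class WritingRoutineInherited(ExerciseRoutine):
    pass  # drags along the parent's full structure, forever


# Composition: the new routine USES proven components and nothing else.
class WritingRoutine:
    def __init__(self, trigger, environment):
        self.trigger = trigger          # borrowed, already-proven component
        self.environment = environment  # borrowed, already-proven component


proven = ExerciseRoutine()
writing = WritingRoutine(proven.trigger, proven.environment)
```

If `ExerciseRoutine` later changes or disappears, `WritingRoutine` keeps working: it holds the components it took at creation time, not a dependency on the parent class.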
For your cognitive agents, this distinction is directly applicable. When you build a new morning writing routine by inheriting from your morning exercise routine, you do not want the writing routine to be a modified exercise routine. You want the writing routine to be its own agent that happens to reuse specific components — the early trigger time, the notification-free environment, the time-bounded format — that the exercise routine already proved effective.
Transfer learning: inheritance across domains
Machine learning discovered the same principle independently. In the early days of deep learning, every new neural network was trained from scratch — initialized with random weights and forced to learn everything from raw data. This worked when data was abundant and compute was cheap. When either was scarce, training from scratch produced models that were slow to converge, prone to overfitting, and expensive to develop.
Transfer learning changed the equation. The insight, formalized in research by Jason Yosinski and colleagues in their 2014 paper "How Transferable Are Features in Deep Neural Networks?", was that the early layers of a neural network learn general features — edge detection, texture recognition, basic structural patterns — that are useful across many different tasks. Only the later layers specialize for the specific task at hand.
This meant that a model trained on one task could donate its early layers to a model being built for a different task. The new model inherits the general knowledge — the foundational patterns that took millions of examples to learn — and only needs to train its final layers on the new, specific dataset. The result: faster training, better performance with less data, and dramatically lower computational cost.
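The mechanic can be illustrated with a deliberately tiny toy, no ML framework required. Everything here is a stand-in: the "frozen backbone" plays the role of the donated early layers, and the "head" is the only part fit on the new task's data.

```python
def frozen_backbone(x):
    """Stands in for the early layers of a pretrained network:
    general features, copied over and never updated on the new task."""
    return (x % 3, x % 5)  # placeholder for learned general features


def fine_tune(dataset):
    """Train only the new head on the new task's small dataset.
    Here 'training' is just memorizing a feature->label mapping,
    a stand-in for fitting the final layer's weights."""
    head = {}
    for x, label in dataset:
        head[frozen_backbone(x)] = label
    return lambda x: head.get(frozen_backbone(x))


# Two labeled examples suffice, because the backbone is already trained.
classifier = fine_tune([(3, "fizz"), (5, "buzz")])
```

The point of the toy is the shape of the work, not the model: the expensive general part is inherited unchanged, and only the cheap task-specific part is learned fresh.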
The parallel to cognitive agents is precise. Your established agents have already learned general patterns that new agents can inherit. Your reliable morning routine has already solved the problem of consistent activation under grogginess, time pressure, and competing impulses. Your deep work agent has already figured out how to create a distraction-free environment and sustain focus for extended periods. Your weekly review has already developed the discipline of structured self-assessment.
When you build a new agent, these solved problems are your pretrained layers. You do not need to re-solve "how do I consistently activate an agent at 6 AM" if your exercise agent has already solved it. You transfer that solution — the specific alarm placement, the pre-commitment strategy, the environmental design — and let the new agent focus its learning on what is actually novel about its purpose.
What Yosinski's research also revealed is that transferability decreases with distance. Features from early layers transfer well because they are general. Features from later layers transfer poorly because they are specialized. The same is true for cognitive agents. The general infrastructure of an agent — its trigger mechanism, its environmental requirements, its time-boxing structure — transfers easily to new agents. The specific content of an agent — the exact sequence of exercises, the particular questions in a review template, the specific focus of a reading session — does not transfer and should not be forced to.
Cultural transmission: how humans have always inherited patterns
Long before software engineers named the pattern, human cultures were practicing inheritance at civilizational scale. Luigi Luca Cavalli-Sforza and Marcus Feldman, in their 1981 work Cultural Transmission and Evolution: A Quantitative Approach, identified three modes of cultural inheritance: vertical (parent to child), horizontal (peer to peer within the same generation), and oblique (from non-parental members of an older generation — teachers, mentors, elders).
Each mode transmits different types of knowledge with different fidelity. Vertical transmission is the most conservative — parents pass down deeply held values, languages, and fundamental behavioral patterns with high fidelity across generations. Horizontal transmission is faster but less stable — peers share trends, techniques, and situational adaptations that spread quickly but may not persist. Oblique transmission — learning from teachers, mentors, and exemplars who are not your parents — combines elements of both, providing access to proven patterns from experienced practitioners without the conservative bias of family-only inheritance.
Your cognitive agents inherit through all three channels. Some agents were installed vertically — the conflict-avoidance pattern you learned from watching your parents, the financial habits absorbed through family culture, the morning routines modeled by your household growing up. Others were acquired horizontally — the productivity technique a colleague shared, the meditation practice a friend recommended, the journaling format you picked up from a peer. And some arrived obliquely — the decision-making framework from a book by a thinker you respect, the exercise protocol designed by a coach, the time management system taught by a mentor.
Understanding which channel an agent was inherited through matters because it affects how deeply the agent is embedded and how easily it can be modified. Vertically inherited agents — those absorbed in childhood from family culture — are often the most resistant to change because they are entangled with identity and emotional associations. Horizontally inherited agents — those adopted from peers — are the easiest to modify or replace because they carry less identity weight. Obliquely inherited agents fall somewhere in between.
When you deliberately design agent inheritance — when you choose to build a new agent by inheriting from an existing one — you are practicing intentional oblique transmission on yourself. You are being both the mentor and the student, extracting proven patterns from your own experience and transmitting them to a new context.
Habit stacking: inheritance as behavioral science
BJ Fogg, a behavioral scientist at Stanford, developed a framework he calls "Tiny Habits" that is, at its core, a theory of agent inheritance. The central mechanism is what Fogg calls "anchoring" and what James Clear, drawing on Fogg's work, popularized as "habit stacking" in Atomic Habits (2018).
The formula is deceptively simple: "After I [existing habit], I will [new habit]." After I pour my morning coffee, I will write one sentence in my journal. After I sit down at my desk, I will close all browser tabs. After I finish my workout, I will review my daily priorities.
What this formula actually does is inherit the trigger mechanism from an existing agent and attach it to a new one. The existing habit — pouring coffee, sitting at the desk, finishing the workout — is already a reliable behavioral event. It fires consistently. Its trigger is established, its environment is defined, and its completion is unambiguous. By anchoring the new behavior to this existing event, you are transferring the trigger infrastructure of a proven agent to one that has no trigger infrastructure of its own.
Fogg's research found that tiny habits anchored to existing routines have dramatically higher success rates than habits attempted in isolation. The reason maps directly to the inheritance principle: a new agent that inherits a proven trigger starts with one of its most critical components already solved. It does not need to establish a new cue, fight for a new time slot, or compete for a new environmental context. It inherits all of that from the parent agent.
Clear extends the pattern beyond single triggers to full behavioral chains — sequences of habits where each one triggers the next. Wake up, make bed, brew coffee, journal for two minutes, review daily priorities, begin first work block. This is a composition of agents where each one inherits its trigger from the previous one's completion signal. The chain as a whole is more reliable than any individual link because the inheritance structure means no single agent needs an independent trigger — each one activates automatically when its predecessor completes.
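The chain structure can be sketched directly (habit names and the `chain` helper are illustrative): each habit's only cue is the completion signal of the habit before it, so the whole sequence runs off a single initial trigger.

```python
def chain(*habits):
    """Compose habits so each one's completion signal becomes the
    trigger for the next -- no habit needs an independent cue."""
    def run(initial_trigger="alarm"):
        trigger, completed = initial_trigger, []
        for name in habits:
            # This habit fires because `trigger` -- its predecessor's
            # completion signal -- just occurred.
            completed.append((trigger, name))
            trigger = f"{name}: done"
        return completed
    return run


morning = chain("wake up", "make bed", "brew coffee",
                "journal", "review priorities", "first work block")
```

Running `morning()` fires all six habits in order; only "wake up" needed an external trigger, and every later habit inherited its cue from the link before it.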
Christopher Alexander's pattern language: inheritable solutions
In 1977, architect Christopher Alexander published A Pattern Language, a book that catalogued 253 recurring design patterns in architecture and urban planning. Each pattern described a problem that occurs repeatedly in a specific context, presented evidence for why the problem matters, and offered a tested solution. The patterns ranged from the large scale (the distribution of towns across a region) to the intimate (the placement of a window seat in a room).
Alexander's insight was that good design is not created from nothing each time. Good design inherits from a library of solutions that have been proven across many contexts. An architect designing a new house does not need to rediscover that south-facing windows improve natural lighting, or that a transition space between public and private areas creates psychological comfort. These are inherited patterns — solutions extracted from accumulated experience and made available for reuse.
The software engineering community recognized the power of this idea almost immediately. Kent Beck and Ward Cunningham adapted Alexander's concept in 1987, and the Gang of Four's Design Patterns (1994) formalized it for software. The core principle transferred perfectly: catalog the solutions that work, describe them precisely enough to be reusable, and make them available so that every new system does not need to solve already-solved problems.
Your cognitive agents deserve the same treatment. Over the course of building and maintaining agents — which is what you have been doing throughout Phase 30 — you have accumulated a library of solutions. You know which trigger mechanisms work for you. You know which environmental conditions support focused work. You know which reward structures sustain motivation and which decay over time. You know which time-boxing formats produce completion and which invite procrastination.
These are your patterns. And every new agent you build should inherit from them rather than rediscovering them through trial and error.
The mechanics of agent inheritance
Inheritance in practice means decomposing a new agent's requirements into components and, for each component, asking: do I already have a proven solution for this?
An agent has five inheritable components:
1. Trigger mechanism. What activates the agent? If you already have an agent with a reliable trigger at the same time or in the same context, inherit that trigger. Do not create a new one.
2. Environmental conditions. Where and under what conditions does the agent operate? If an existing agent has already established a productive environment — a specific workspace, a notification-free period, a physical setup — inherit that environment.
3. Procedural sequence. What steps does the agent follow? Some procedural elements are transferable: the habit of starting with the easiest task, the practice of setting a timer, the discipline of writing down the output before moving on.
4. Exit criteria. How does the agent know it is done? The time-boxing format of one agent ("work for 45 minutes, then stop") can be inherited by another. The output-based completion of one agent ("stop when the draft is complete") can transfer to a similar one.
5. Recovery protocol. What happens when the agent fails to fire? If your exercise agent has a proven recovery method — "if I miss Monday, I do a shorter session Tuesday without guilt" — that same protocol can be inherited by any agent that needs a failure recovery mechanism.
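The five components above can be written down as a simple data structure, and selective inheritance becomes a one-time copy with overrides. All names and values in this sketch are illustrative; `dataclasses.replace` does exactly the needed operation, copying the parent's fields except the ones explicitly redesigned.

```python
from dataclasses import dataclass, replace


@dataclass
class Agent:
    """The five inheritable components of an agent."""
    name: str
    trigger: str        # 1. what activates it
    environment: str    # 2. where / under what conditions it operates
    procedure: tuple    # 3. the steps it follows
    exit_criteria: str  # 4. how it knows it is done
    recovery: str       # 5. what happens when it fails to fire


exercise = Agent(
    name="morning exercise",
    trigger="06:00 alarm, gear laid out the night before",
    environment="living room, phone in another room",
    procedure=("warm up", "main set", "stretch"),
    exit_criteria="45-minute time box",
    recovery="if missed, shorter session next day, no guilt",
)

# Selective inheritance: copy the general components (trigger,
# environment, recovery), design the specific ones fresh.
writing = replace(
    exercise,
    name="morning writing",
    procedure=("open draft", "write one paragraph", "note next step"),
    exit_criteria="stop when today's section is drafted",
)
```

The new agent shares the proven trigger, environment, and recovery protocol, but its procedure and exit criteria are its own; and because `replace` returns an independent object, later changes to `exercise` do not touch it.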
Not every component should be inherited. The mistake the Gang of Four warned about — over-inheritance creating fragile coupling — applies here too. Inherit what is general. Design fresh what is specific. The trigger and environment are usually safe to inherit. The procedural sequence often needs to be redesigned for the new purpose. The exit criteria depend entirely on the new agent's output requirements.
Selective inheritance prevents fragile agents
The fragile base class problem in software has a direct behavioral analog. When a new agent inherits too heavily from an existing one, changes to the parent agent cascade into the child. If your new writing habit inherits its entire structure from your exercise habit — same time, same place, same intensity model, same reward — then any disruption to the exercise habit destabilizes the writing habit too. A knee injury that pauses your exercise routine should not collapse your writing practice, but it will if the two agents are too tightly coupled through inheritance.
Selective inheritance solves this. Inherit the components that are genuinely reusable — the time slot, the environmental setup, the trigger mechanism — and build independent infrastructure for everything else. The new agent should be able to survive changes to its parent because it is not dependent on the parent's continued operation. It used the parent as a starting template, not as a permanent dependency.
This is the difference between inheritance and composition that the Gang of Four identified. Inheritance creates an ongoing relationship: the child depends on the parent. Composition creates a one-time transfer: the child took components from the parent at creation time but runs independently afterward. For cognitive agents, composition is almost always the better model. Build the new agent using proven components, then let it stand on its own.
From inheritance to templates
Agent inheritance as described here is an informal process — you look at what works, extract the useful components, and apply them to a new agent. But the more agents you build, the more you notice recurring patterns. The same trigger mechanisms keep proving reliable. The same environmental conditions keep supporting focused work. The same time-boxing formats keep producing completion.
When inheritance patterns repeat, they become candidates for formalization. Instead of ad hoc extraction from existing agents, you create explicit templates — reusable blueprints that capture the inheritable components of your best agents in a form that any future agent can adopt.
This is where L-0595 picks up. Agent templates take the inheritance principle and make it systematic. Rather than asking "which of my current agents can I borrow from?" every time you need a new one, you ask "which template fits this use case?" The template library becomes your personal pattern language — a catalog of proven agent architectures that new agents can be built from, the same way Alexander's patterns let architects build new buildings from proven solutions rather than reinventing windows and doorways every time.
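As a preview of that move, formalized inheritance might look like this: a small template library keyed by use case, where instantiating an agent is a one-time copy plus overrides. The template names and contents here are invented examples, not a prescribed catalog.

```python
# A personal pattern library: proven component bundles, keyed by use case.
TEMPLATES = {
    "early-morning": {
        "trigger": "fixed alarm, environment prepared the night before",
        "environment": "notification-free, single dedicated space",
        "exit_criteria": "strict time box",
    },
    "deep-work": {
        "trigger": "calendar block, communication apps closed",
        "environment": "one task visible, everything else hidden",
        "exit_criteria": "output-based: stop when the deliverable exists",
    },
}


def new_agent(name, template_key, **overrides):
    """Instantiate an agent from a template, overriding only what is
    specific to its purpose. The copy is one-time: no live coupling."""
    blueprint = dict(TEMPLATES[template_key])
    blueprint.update(overrides)
    blueprint["name"] = name
    return blueprint


reading = new_agent("evening reading", "early-morning",
                    trigger="after dinner cleanup")
```

The question has shifted from "which existing agent can I borrow from?" to "which template fits?", while the underlying operation — copy proven components, override the specifics — stays the same.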
Inheritance is the principle. Templates are the infrastructure that makes the principle repeatable at scale.
Sources:
- Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
- Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). "How Transferable Are Features in Deep Neural Networks?" Advances in Neural Information Processing Systems, 27.
- Cavalli-Sforza, L. L., & Feldman, M. W. (1981). Cultural Transmission and Evolution: A Quantitative Approach. Princeton University Press.
- Clear, J. (2018). Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones. Avery/Penguin.
- Fogg, B. J. (2020). Tiny Habits: The Small Changes That Change Everything. Houghton Mifflin Harcourt.
- Alexander, C., Ishikawa, S., & Silverstein, M. (1977). A Pattern Language: Towns, Buildings, Construction. Oxford University Press.
- Pan, S. J., & Yang, Q. (2010). "A Survey on Transfer Learning." IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.