Core Primitive
Complex workflows are built by combining simpler workflows. The output of one becomes the input of another. Composition is the mechanism that turns a library of small, proven workflows into an infrastructure that handles arbitrarily complex work.
Five small workflows beat one large one. Every time.
The previous lesson gave you a workflow library — a curated collection of proven, reusable workflows you can deploy when a familiar situation arises. You have templates for recurring tasks, documented sequences for complex processes, and a growing catalog of operational knowledge that no longer lives only in your memory. But a library of individual workflows, no matter how well-organized, has a ceiling. The ceiling is complexity.
Some work is too complex for any single workflow to handle. Publishing an article, launching a product, onboarding a new hire, planning a quarter, executing a research project — these are not tasks. They are constellations of tasks, each with its own inputs, transformations, and outputs. If you try to capture a constellation in a single monolithic workflow, you get a document that is twenty steps long, impossible to maintain, and fragile in ways you will not discover until something breaks at step fourteen and you have to restart from step one because the intermediate state was never preserved.
The alternative is composition. Instead of building one large workflow to handle complex work, you build small workflows that handle simple work — and then you chain them together. The output of one workflow becomes the input of the next. The complex process emerges not from a complex design but from the assembly of simple, proven parts.
This is not a metaphor. It is an engineering principle with a sixty-year track record, and the moment you learn to apply it to your personal operations, the ceiling on what your workflow library can handle disappears.
The Unix philosophy: do one thing well
In 1978, Doug McIlroy — the inventor of the Unix pipe — articulated a design philosophy that would shape the next half century of software engineering: "Write programs that do one thing and do it well. Write programs to work together." The Unix operating system was built on this principle. Instead of monolithic applications that tried to handle every possible task, Unix provided hundreds of small, focused tools. grep searches text. sort orders lines. uniq removes duplicates. wc counts words. Each tool does one thing. None of them is particularly impressive on its own.
The power is in the pipe — the | character that connects one tool's output to another tool's input. grep "error" server.log | sort | uniq -c | sort -rn takes a log file, finds every line containing "error," sorts those lines so duplicates sit together, counts each unique line, and ranks the counts by frequency. Four commands, three distinct tools (sort appears twice), chained in a single line, performing an analysis that would require a custom program in any language that lacked composability.
McIlroy's insight was not that small tools are better than large tools. It was that small tools with standardized interfaces can be combined to solve problems that no individual tool was designed for. The interface — plain text, streamed line by line through a pipe — is what makes the composition possible. Each tool does not need to know what the other tools do. It only needs to produce output in a format the next tool can accept.
Your personal workflows follow the same logic. A research workflow does not need to know that its output will feed an outlining workflow. It only needs to produce a clearly specified output — structured notes in a consistent format — that any downstream workflow can accept. The interface between the workflows, not the workflows themselves, is what makes composition possible.
Function composition: the mathematical foundation
The principle behind workflow composition is older than computing. In mathematics, function composition is the operation of applying one function to the result of another. If g(x) produces a value and f accepts that value as input, then f(g(x)) is the composition — read "f of g of x." The output of g becomes the input of f. The two functions chain into a pipeline that transforms x through two stages.
Function composition has a property that matters enormously for practical workflow design: each function is independent. You can replace g with a different function g' as long as g' produces output in the same format that f expects. You can replace f with f' as long as f' accepts the same input format that g produces. The functions are modular — swappable, testable, and improvable in isolation — because the interface between them is defined.
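The swap property can be sketched in a few lines of Python. This is an illustrative sketch, not a prescribed implementation; the stage functions and their behavior are assumptions invented for the example.

```python
# Function composition: the output of g becomes the input of f.
def compose(f, g):
    return lambda x: f(g(x))

# Two interchangeable "g" stages that honor the same interface:
# both accept a string and return a list of lowercase tokens.
def split_words(text):
    return text.lower().split()

def split_words_no_punct(text):
    return [w.strip(".,;") for w in text.lower().split()]

# "f" depends only on the interface (a list of tokens),
# not on which g produced it.
def count_unique(tokens):
    return len(set(tokens))

pipeline = compose(count_unique, split_words)
print(pipeline("Brown fox. Red fox"))   # -> 4 ("fox." and "fox" differ)

# Swapping g for g' leaves f untouched, because the interface is unchanged.
pipeline2 = compose(count_unique, split_words_no_punct)
print(pipeline2("Brown fox. Red fox"))  # -> 3
```

The point of the sketch is the last two lines: replacing one stage required no change to the other, because both stages meet the same interface.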
This is exactly the property you want in your workflows. When you compose a publishing pipeline from five independent workflows — research, outline, draft, edit, publish — you gain the ability to improve each stage without disrupting the others. You discover a better research method? Swap in the new research workflow. Your editing process evolves? Update the editing workflow. The rest of the pipeline is untouched. The improvement is local, the risk is contained, and the rest of your system continues to function while you refine one part.
Contrast this with a monolithic "publish a blog post" workflow that weaves research, outlining, drafting, editing, and publishing into one continuous sequence. To improve the editing stage, you have to understand and modify a twenty-step document. The change might break the transition from drafting to editing. It might invalidate the assumptions that the publishing steps make about the output of editing. Every change is a system-wide change, because the monolith has no internal boundaries.
Herbert Simon and nearly-decomposable systems
In 1962, Herbert Simon published "The Architecture of Complexity," one of the most cited papers in the history of systems science. Simon's central argument was that complex systems — biological, social, artificial — are nearly always organized as hierarchies of semi-independent subsystems. A human body is not a single undifferentiated mass of cells. It is a hierarchy: cells compose tissues, tissues compose organs, organs compose organ systems, organ systems compose the organism. Each level operates with relative independence. Your liver does not need to coordinate with your bicep on a moment-to-moment basis. The interface between them — blood chemistry, hormonal signals — is narrow and well-defined.
Simon called these "nearly-decomposable systems," and he argued that they evolve faster, fail more gracefully, and are easier to understand than systems that lack hierarchical decomposition. The reason is simple: in a nearly-decomposable system, a failure in one subsystem is contained within that subsystem. A liver problem does not cause your bicep to fail. The subsystem boundary acts as a firewall. In a system without decomposition — where every component depends directly on every other component — a failure anywhere propagates everywhere. The system is not merely complex; it is fragile.
Your workflows are systems. When they are monolithic — every step depending on every other step, no internal boundaries, no preserved intermediate outputs — they are fragile in exactly the way Simon described. A failure at any point propagates backward (wasted work) and forward (blocked progress). When they are composed from semi-independent sub-workflows — each with its own input specification, its own output specification, and its own internal logic — they gain the resilience of nearly-decomposable systems. A failure in the drafting workflow does not invalidate the research. A failure in the editing workflow does not require re-drafting. The boundaries contain the damage.
The Lego principle: standardized interfaces enable infinite combinations
There is a reason that Lego bricks have been the most successful construction toy for over sixty years while countless competitors have come and gone. It is not that individual Lego bricks are interesting. A single 2x4 brick is perhaps the least interesting object you can hold. The power is in the interface — the stud-and-tube coupling system that allows any brick to connect to any other brick. The interface is standardized. A brick manufactured in 1965 connects to a brick manufactured in 2025. A brick from the Space set connects to a brick from the City set. The standardization of the interface is what makes the system infinitely composable.
This is the same principle that makes Unix pipes work, that makes function composition work, and that makes microservices architecture work in modern software systems. In a microservices architecture, large applications are decomposed into small, independent services — a user service, a payment service, a notification service — each running its own process, each communicating through standardized APIs. The payment service does not need to know how the notification service works internally. It only needs to know the API: send this request, receive this response. The standardized interface is what lets teams develop, deploy, and scale each service independently.
For your personal workflows, the "standardized interface" is the input-output specification you built in Workflow inputs and outputs. When each workflow in your library has a clear input specification and a clear output specification, any workflow whose output matches another workflow's input can chain with it. The composition is not hard-coded. It is emergent. You discover new chains — new combinations of existing workflows that solve problems you had not anticipated — because the interfaces are compatible.
This is the difference between a box of specialized tools and a box of Lego bricks. Specialized tools do one predetermined thing. Lego bricks do whatever you assemble them to do. Composable workflows, with standardized interfaces, are Lego bricks for your operations.
The composition test
How do you know whether your workflows are truly composable or merely sequential? Apply the composition test: can you replace one sub-workflow without breaking the others?
If your publishing pipeline is genuinely composed, you should be able to swap your outlining method — switching from a hierarchical outline to a mind map to a question-based structure — without changing anything about the research workflow that precedes it or the drafting workflow that follows it. The swap works if the new outlining method accepts the same input (structured research notes) and produces the same output format (a document structure that the drafting workflow can consume). If the swap forces you to also modify the research workflow or the drafting workflow, then you do not have composition. You have a sequence with hidden dependencies between stages — a monolith disguised as modules.
The test reveals where your interfaces are leaking. A leaking interface is one where assumptions from inside one workflow bleed into another. Your drafting workflow does not just accept "an outline." It accepts "an outline created in a specific tool, formatted in a specific way, with annotations that only exist in your current outlining method." That is not a composable interface. That is a coupling — a hard dependency between the drafting workflow and a specific implementation of the outlining workflow. To make it composable, you define the interface in terms of structure and content, not in terms of how it was produced. "An outline" becomes "a hierarchical list of sections, each with a one-sentence summary and an ordered list of supporting points, delivered as a plain-text document." Now any outlining method that produces this structure can feed the drafting workflow.
Run the composition test on your three most complex processes. Where the test fails, you have found either a missing interface specification or a hidden coupling between stages. Both are fixable. Both become visible only when you test for composability rather than assuming it.
Composition patterns: series, parallel, and conditional
The simplest composition pattern is the series — workflow A feeds workflow B feeds workflow C, in a straight line. This is the publishing pipeline: research, then outline, then draft, then edit, then publish. Each stage depends on the previous one. The chain is linear and the flow is sequential.
But not all complex work is linear. Some stages can run in parallel. When you are preparing for a product launch, the marketing materials workflow and the technical documentation workflow can run simultaneously. Neither depends on the other's output. They share a common input — the product specification — and their outputs converge downstream into a launch readiness review. This is parallel composition, and it is how you compress the time required for complex work. Instead of running every workflow in sequence, you identify which sub-workflows are independent of each other and run them concurrently.
The third pattern is conditional composition — branching based on the output of a previous stage. Your decision workflow produces one of three outcomes: proceed, revise, or abandon. Each outcome triggers a different downstream workflow. "Proceed" triggers the implementation workflow. "Revise" triggers the revision workflow, whose output loops back to the decision workflow for re-evaluation. "Abandon" triggers the archival workflow, which preserves the work done so far and closes the process. The composition includes a branch point, and the branch chosen depends on the output of the preceding stage.
These three patterns — series, parallel, and conditional — cover the vast majority of complex personal workflows. Series composition handles sequential dependencies. Parallel composition handles independent concurrent work. Conditional composition handles branching logic. Most real-world complex processes combine all three. A quarterly planning process might run a data-gathering workflow and a stakeholder-interview workflow in parallel (parallel composition), feed both outputs into a synthesis workflow (series composition), produce a draft plan that is either approved or sent back for revision (conditional composition), and upon approval trigger an execution-planning workflow (series composition again). The complex process is a composition of simple patterns, and each pattern is built from simple, reusable workflows.
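The quarterly-planning composition can be sketched as code. Every stage function below is a placeholder: the names and bodies are assumptions made for illustration, standing in for real workflows that each have a defined input and output.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder stages; each takes a defined input and returns a defined output.
def gather_data(spec):            return f"data({spec})"
def interview_stakeholders(spec): return f"interviews({spec})"
def synthesize(data, interviews): return f"plan-draft[{data} + {interviews}]"
def review(draft):
    # Stand-in for a human decision workflow; always approves in this sketch.
    return "approve"
def plan_execution(draft):        return f"execution-plan[{draft}]"

def quarterly_planning(spec):
    # Parallel composition: two independent workflows share one input.
    with ThreadPoolExecutor() as pool:
        data_future = pool.submit(gather_data, spec)
        interviews_future = pool.submit(interview_stakeholders, spec)
        data, interviews = data_future.result(), interviews_future.result()

    # Series composition: both outputs converge into the synthesis stage.
    draft = synthesize(data, interviews)

    # Conditional composition: branch on the output of the previous stage.
    if review(draft) == "approve":
        return plan_execution(draft)
    return "sent back for revision"

print(quarterly_planning("Q3 goals"))
```

The structure of the function mirrors the structure of the process: a parallel fork, a series join, and a conditional branch, each built from stages that know nothing about the whole.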
Intermediate outputs: the hidden benefit of composition
There is a practical benefit of workflow composition that is easy to overlook but that transforms how you work: every boundary between composed workflows is a preserved intermediate output. When the research workflow finishes, the structured notes exist as a concrete artifact. When the outlining workflow finishes, the outline exists as a concrete artifact. When the drafting workflow finishes, the rough manuscript exists as a concrete artifact.
This matters for three reasons. First, it means you can resume from any point. If you finish the research and outlining on Monday but run out of time for drafting, you pick up on Tuesday with a concrete artifact — the outline — rather than a hazy memory of where you were in a monolithic process. The intermediate output is a save point. Monolithic workflows do not have save points. They have "wherever I stopped," which is usually nowhere useful.
Second, intermediate outputs make failure cheap. If the drafting workflow produces something unusable, you have not lost the research or the outline. You re-run the drafting workflow from the preserved outline. In a monolith, a failure at the drafting stage often means starting over from scratch, because the earlier work was not captured in a reusable form.
Third, intermediate outputs enable reuse. The research notes you produced for one article might be useful for a different article. The outline you created for a blog post might serve, with modification, as the outline for a presentation. When intermediate outputs exist as independent artifacts, they become assets in your library — not locked inside a monolithic process but available for new compositions you have not yet imagined.
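The save-point behavior can be made concrete with a small sketch: each stage writes its artifact to disk and is skipped on re-runs when the artifact already exists. The stage names, file layout, and toy stage functions are all illustrative assumptions.

```python
import tempfile
from pathlib import Path

WORKDIR = Path(tempfile.mkdtemp())  # illustrative location for artifacts

def run_stage(name, func, input_text):
    artifact = WORKDIR / f"{name}.txt"
    if artifact.exists():                  # resume from the preserved output
        return artifact.read_text()
    output = func(input_text)
    artifact.write_text(output)            # the save point
    return output

# Toy stages standing in for real workflows.
def research(topic):  return f"notes on {topic}"
def outline(notes):   return f"outline from ({notes})"
def draft(structure): return f"draft from ({structure})"

notes = run_stage("research", research, "composition")
structure = run_stage("outline", outline, notes)
manuscript = run_stage("draft", draft, structure)
print(manuscript)  # -> draft from (outline from (notes on composition))

# If drafting produces something unusable, delete only draft.txt and re-run:
# the research and outline artifacts survive untouched.
```

The same discipline works without any code at all: the point is that each boundary leaves a named artifact you can return to, reuse, or regenerate in isolation.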
The Third Brain as composition engine
Large language models are composition machines. They accept an input, apply a transformation, and produce an output. The transformation can be anything expressible in language: summarize, translate, critique, expand, restructure, reformat. This makes them natural participants in composed workflows — not as replacements for your judgment but as sub-workflows in your pipeline.
Consider how AI fits into a composed content pipeline. The research workflow produces structured notes. You can insert an AI sub-workflow between research and outlining: "Given these notes, identify the three strongest themes and suggest a logical ordering." The AI's output is not the outline — it is a draft suggestion that feeds your outlining workflow. You can insert another AI sub-workflow between drafting and editing: "Given this rough draft, identify passages where the argument is unclear, where the evidence is thin, and where the tone is inconsistent with the rest." The AI's output is not the edited document — it is a diagnostic report that feeds your editing workflow.
The key is that each AI interaction is a composable sub-workflow with a defined input and a defined output. The input is the artifact from the previous stage plus a specific instruction. The output is a transformed artifact that feeds the next stage. The AI does not need to understand the full pipeline. It only needs to handle its own transformation — the same principle that makes Unix pipes and mathematical function composition work.
There is a more powerful application of this principle. Once your workflows are explicitly composed with documented interfaces, you can ask an AI to suggest new compositions. Describe three workflows from your library, along with their input and output specifications, and prompt: "What complex tasks could be accomplished by chaining these workflows, and in what order?" The AI can identify compositions you have not considered because it is pattern-matching on the interface specifications, not on your assumptions about what goes with what. The standardized interfaces are what make this possible. Without them, the AI has nothing structured to compose.
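The pattern-matching described above can itself be sketched mechanically: annotate each workflow with the format it consumes and the format it produces, then enumerate every chain where the formats line up. The catalog entries and format names below are invented for illustration.

```python
# Each workflow declares the format it consumes ("in") and produces ("out").
catalog = {
    "research":  {"in": "topic",     "out": "notes"},
    "outline":   {"in": "notes",     "out": "structure"},
    "draft":     {"in": "structure", "out": "manuscript"},
    "summarize": {"in": "notes",     "out": "summary"},
}

def find_chains(start_format, end_format, chain=()):
    """Enumerate workflow chains that transform start_format into end_format."""
    if start_format == end_format and chain:
        yield chain
        return
    for name, spec in catalog.items():
        if spec["in"] == start_format and name not in chain:
            yield from find_chains(spec["out"], end_format, chain + (name,))

print(list(find_chains("topic", "manuscript")))
# -> [('research', 'outline', 'draft')]
print(list(find_chains("topic", "summary")))
# -> [('research', 'summarize')]
```

An AI prompted with the same catalog is doing a fuzzier version of this search, which is why the documented interfaces matter more than the workflow descriptions: without the "in" and "out" annotations, there is nothing to match on.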
From composition to review
You now have the ability to build complex operational capability from simple parts. Your workflow library, which the previous lesson helped you organize, is no longer a flat collection of independent processes. It is a set of composable modules — small workflows with standardized interfaces that can be chained in series, run in parallel, and branched conditionally to handle work of arbitrary complexity.
But a composed system is a living system, and living systems require maintenance. The sub-workflows that compose your publishing pipeline were designed for the work you did six months ago. Your research method has evolved. Your editing standards have changed. The publishing platform you use has different requirements. Some sub-workflows are still earning their place. Others have become dead weight — steps that once served a purpose but now add friction without adding value.
The next lesson addresses this directly. The workflow review is a periodic examination of your entire workflow library — composed and standalone alike — to retire what no longer works, improve what does, and ensure that the compositions you have built still serve the work you are actually doing. A library that is never reviewed becomes a museum. A library that is reviewed regularly becomes a competitive advantage. The review is where you close the loop between design and practice, between what your workflows were built to do and what your work actually requires.
Sources:
- McIlroy, M. D. (1978). Unix time-sharing system: Foreword. The Bell System Technical Journal, 57(6), 1899-1904. (Unix philosophy: "Do one thing well" and compose through pipes.)
- Simon, H. A. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106(6), 467-482. (Nearly-decomposable systems and hierarchical modularity.)
- Raymond, E. S. (2003). The Art of Unix Programming. Addison-Wesley. (Composability through standardized interfaces, the Rule of Composition.)
- Newman, S. (2021). Building Microservices. 2nd ed. O'Reilly Media. (Service composition, API contracts, and independent deployability.)
- Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12), 1053-1058. (Information hiding and modular decomposition.)
- McConnell, S. (2004). Code Complete. 2nd ed. Microsoft Press. (Function composition, interface design, and the value of low coupling.)
- Deming, W. E. (1986). Out of the Crisis. MIT Center for Advanced Engineering Study. (Process thinking and system-level quality design.)