Core Primitive
Some steps must happen in order while others can happen simultaneously.
You are probably doing things in order that do not need to be in order.
Every workflow you have ever designed carries a hidden assumption. When you write a list of steps — step one, step two, step three — you are implying a sequence. The format itself suggests that step two cannot begin until step one is finished, that step three waits for step two. And in some workflows, this implication is correct. You cannot edit a document that has not been drafted. You cannot ship a package that has not been packed.
But in most workflows, the majority of steps have no actual dependency on the step that precedes them in the list. They appear sequential because lists are sequential. They feel sequential because you have always done them that way. And they remain sequential because you have never stopped to ask the only question that matters: does this step require the completed output of a previous step?
The previous lesson established that workflow steps should be atomic — small enough to complete without ambiguity. This lesson asks the next question: once you have a set of atomic steps, which ones must happen in a fixed order, and which ones can happen at the same time? The answer determines how fast your workflow can possibly run, and most people have never asked the question at all.
The dependency question
The distinction between sequential and parallel steps reduces to a single test. For any two steps in a workflow, ask: does step B need something that step A produces? If the answer is yes, the steps are sequential — B must wait for A. If the answer is no, the steps are parallel — they can happen simultaneously without interfering with each other.
This sounds trivially simple. It is trivially simple. And yet the failure to apply this test is one of the most common sources of wasted time in personal and professional workflows. Consider a morning routine: wake up, make coffee, shower, get dressed, eat breakfast, check email, review daily plan. Most people execute these steps in a strict linear sequence. But which of these steps actually depend on the one before them? The coffee machine does not need you to stand in front of it while it brews. The shower does not require the coffee to be finished. Email does not depend on breakfast. The only genuine dependencies might be "get dressed after showering" and "eat breakfast after making it." Everything else can overlap.
The dependency test is not about whether steps can theoretically be parallelized. It is about whether the output of one step is a required input to another. When you map this honestly, most workflows collapse from long sequential chains into short parallel bursts followed by brief sequential junctions. The total time drops — sometimes dramatically — not because you work faster, but because you stop waiting for things that were never actually blocking you.
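The dependency test can be made mechanical. A minimal sketch in Python, where each step declares what it needs and what it produces (step names and artifacts here are invented for illustration, not a prescribed schema):

```python
# Each step lists the artifacts it needs and the artifacts it produces.
steps = {
    "draft":        {"needs": set(),         "produces": {"draft.txt"}},
    "edit":         {"needs": {"draft.txt"}, "produces": {"final.txt"}},
    "design_cover": {"needs": set(),         "produces": {"cover.png"}},
}

def must_wait(b, a, steps):
    # Step b is sequential after step a only if b needs an output of a.
    return bool(steps[b]["needs"] & steps[a]["produces"])

print(must_wait("edit", "draft", steps))          # True  — genuine dependency
print(must_wait("design_cover", "draft", steps))  # False — can run in parallel
```

Any pair for which `must_wait` is false in both directions is a candidate for parallel execution.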
How DuPont found the longest chain
In 1957, engineers at DuPont and Remington Rand faced a concrete version of this problem. They were planning the construction and maintenance schedules for chemical plants — projects involving thousands of individual tasks, many of which could happen simultaneously and some of which could not. They needed a method for determining, given all the dependencies between tasks, how long the entire project would take at minimum.
The method they developed is now called the Critical Path Method, or CPM. The insight is deceptively simple: in any network of tasks with dependencies, there is one specific chain of sequential tasks — a path through the network — that determines the minimum possible completion time for the entire project. This chain is called the critical path. Every other task in the project can be delayed, rescheduled, or slowed down without affecting the total duration, as long as the tasks on the critical path stay on schedule.
The critical path is the longest chain of dependent steps. Not the longest in terms of number of steps, but the longest in terms of total time. If your project has three parallel tracks — one taking ten days, one taking six, and one taking eight — the ten-day track is the critical path. The six-day and eight-day tracks have "float" — they can start later or take longer without extending the project. The ten-day track has zero float. Any delay on the critical path delays the entire project by exactly that amount.
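The three-track example above can be computed directly. A sketch using a toy project (task durations and dependencies are invented for illustration), with a forward pass for earliest finish times and a backward pass for float:

```python
from functools import lru_cache

# Hypothetical project: task -> (duration in days, predecessors).
tasks = {
    "A": (3,  []),
    "B": (10, ["A"]),   # the long track
    "C": (6,  ["A"]),
    "D": (8,  ["A"]),
    "E": (2,  ["B", "C", "D"]),
}

@lru_cache(maxsize=None)
def earliest_finish(t):
    # Forward pass: a task finishes after its own duration plus the
    # latest-finishing predecessor.
    dur, preds = tasks[t]
    return dur + max((earliest_finish(p) for p in preds), default=0)

project_length = max(earliest_finish(t) for t in tasks)
print(project_length)  # 15 — the chain A -> B -> E sets the minimum

succs = {t: [s for s in tasks if t in tasks[s][1]] for t in tasks}

@lru_cache(maxsize=None)
def latest_finish(t):
    # Backward pass: how late a task may finish without delaying the project.
    return min((latest_finish(s) - tasks[s][0] for s in succs[t]),
               default=project_length)

for t in tasks:
    slack = latest_finish(t) - earliest_finish(t)
    print(t, slack)  # zero slack means the task is on the critical path
```

Here the ten-day track through B has zero float, while C and D can slip by four and two days respectively without moving the finish date.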
This reframes the entire question of workflow optimization. You do not speed up a workflow by making every step faster. You speed it up by shortening the critical path — the specific chain of sequential dependencies that sets the minimum time. Making a non-critical step faster accomplishes nothing for the overall timeline. It gives you efficiency in a place where efficiency does not matter. This is one of the most counterintuitive insights in workflow design, and one of the most important.
The Polaris missile and the birth of dependency mapping
A year after CPM was developed for chemical plants, the United States Navy faced an even more complex version of the same problem. The Polaris missile program — one of the most ambitious engineering projects of the Cold War — involved thousands of contractors, tens of thousands of tasks, and dependencies so intricate that no one person could hold the entire plan in their head. The Navy needed a way to visualize which tasks could run in parallel, which tasks had to run in sequence, and where the critical dependencies lived.
The result was PERT — the Program Evaluation and Review Technique — developed in 1958 by the Navy Special Projects Office in collaboration with the consulting firm Booz Allen Hamilton. PERT charts map every task as a node in a network, with arrows showing dependencies. Parallel tasks appear as separate branches that diverge from a common predecessor and converge at a common successor. Sequential tasks appear as a single chain of arrows.
PERT introduced something that CPM had not emphasized: uncertainty. Where CPM assumed each task had a known, fixed duration, PERT acknowledged that durations are estimates, and estimates are probabilistic. Each task in a PERT chart has three time estimates — optimistic, most likely, and pessimistic — and the method calculates expected durations and variances. This was critical for the Polaris program, where many tasks had never been done before and their durations were genuinely unknown.
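PERT's classic three-point formula — a beta-distribution approximation — can be written directly; the example durations below are invented:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    # Expected duration weights the most-likely estimate four times;
    # variance grows with the spread between best and worst cases.
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

# A task guessed at 2 days (best case), 4 days (likely), 12 days (worst):
expected, variance = pert_estimate(2, 4, 12)
print(expected)  # 5.0 — the long pessimistic tail pulls the estimate upward
print(variance)  # ~2.78 — high uncertainty, typical of never-done-before work
```

Note that the expected value lands above the "most likely" guess whenever the pessimistic tail is long — exactly the situation the Polaris engineers faced.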
The Polaris program is often credited with finishing two years ahead of its original schedule, though historians debate how much of that acceleration was due to PERT itself versus other management changes. What is not debated is that the explicit mapping of sequential and parallel dependencies — making visible what had previously been implicit — changed how large-scale projects were managed. Before PERT, project managers managed tasks. After PERT, they managed dependencies.
Henry Gantt had pioneered the visual representation of task timelines decades earlier. Gantt charts, developed around 1910 for use in shipbuilding and industrial production, plot tasks along a horizontal timeline with bars showing duration and position. A Gantt chart makes sequential and parallel relationships visible at a glance: tasks stacked vertically at the same horizontal position are parallel; tasks linked end-to-start are sequential. The Gantt chart remains, a century later, the most widely used tool for making the sequential-versus-parallel structure of a workflow explicit rather than assumed.
Amdahl's Law: the sequential portion sets the ceiling
In 1967, computer scientist Gene Amdahl formalized a principle that the project managers at DuPont and the Navy had discovered empirically. Amdahl was studying parallel computing — the practice of splitting a computation across multiple processors to make it run faster. His finding, now known as Amdahl's Law, states that the maximum speedup you can achieve by adding parallel capacity is limited by the fraction of the work that must remain sequential.
The math is stark. If 50% of your workflow is inherently sequential — steps that must happen in order, no matter how many resources you throw at the parallel portions — then the maximum speedup from perfect parallelization of the other 50% is a factor of two. Not ten. Not a hundred. Two. Even with infinite parallel capacity, you can never make the workflow run faster than the sequential portion allows.
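The arithmetic is a one-line formula. Speedup is 1 / (s + p/N), where s is the sequential fraction, p the parallel fraction, and N the number of workers:

```python
def amdahl_speedup(parallel_fraction, workers):
    # Maximum speedup when only `parallel_fraction` of the work can be
    # split across `workers`; the rest remains strictly sequential.
    sequential = 1 - parallel_fraction
    return 1 / (sequential + parallel_fraction / workers)

print(amdahl_speedup(0.5, 1_000_000))  # ~2.0 — 50% sequential caps you at 2x
print(amdahl_speedup(0.9, 10))         # ~5.26 — not 10x, even at 90% parallel
```

Even a million workers cannot push the first case past a factor of two; the sequential half is the ceiling.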
This means that identifying and reducing the sequential portion of a workflow produces far more value than optimizing the parallel portions. If you can convert a step from sequential to parallel — by removing a dependency, by restructuring so that two steps no longer share an input-output relationship — you lower the floor on possible completion time. If you merely make a parallel step faster, you are optimizing within a ceiling that has already been set by the sequential chain.
Amdahl's Law was derived for computer processors, but it applies with full force to personal and professional workflows. Your morning routine, your content creation process, your project launch sequence — each of these has a sequential portion and a parallel portion. The sequential portion determines the minimum time the workflow can take. No amount of efficiency on the parallel tasks will breach that floor. The only way to make the whole workflow faster is to shorten the sequential chain.
Goldratt's bottleneck: the constraint that governs throughput
Eliyahu Goldratt's "The Goal," published in 1984, approached the same insight from the perspective of manufacturing. Goldratt's Theory of Constraints holds that every system has one constraint — one bottleneck — that limits the throughput of the entire system. Improving the capacity of any non-bottleneck resource accomplishes nothing for overall throughput. The system cannot produce faster than its slowest sequential link allows.
Goldratt's contribution was making this principle operational. His "Five Focusing Steps" provide a protocol: identify the constraint, exploit the constraint (get maximum throughput from it without new investment), subordinate everything else to the constraint (adjust all other processes to support the bottleneck), elevate the constraint (invest in increasing its capacity), and then repeat — because once you remove one bottleneck, a new one becomes the constraint.
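The core claim — that only the constraint matters — can be demonstrated with a toy pipeline. Stage names and capacities below are invented:

```python
# Hypothetical weekly pipeline: stage -> capacity in units per week.
stages = {"gather_data": 12, "draft_report": 4, "review": 8}

def throughput(stages):
    # The system cannot move faster than its slowest stage.
    return min(stages.values())

print(throughput(stages))    # 4 — draft_report is the constraint

stages["review"] = 20        # improving a non-bottleneck...
print(throughput(stages))    # ...still 4: no change in system throughput

stages["draft_report"] = 15  # elevating the constraint...
print(throughput(stages))    # 12 — gather_data is now the new constraint
```

The last line illustrates Goldratt's "repeat" step: removing one bottleneck promotes the next-slowest stage into the role.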
For personal workflows, the bottleneck is usually a sequential step that you have never examined. The weekly report that cannot begin until three people send you their data. The creative review that cannot happen until you are in the right headspace. The approval step that depends on one person's schedule. These sequential constraints set the pace of the entire workflow, and everything else — no matter how efficient — is waiting on them.
The practical response is not to make everything parallel. Some things genuinely require sequential execution. The response is to know exactly which steps are on the sequential chain, to treat those steps as the binding constraint on the entire workflow's speed, and to invest your optimization effort there rather than on steps that have float.
Mapping your own workflows
The technique for applying this to personal workflows is straightforward, even if it requires discipline. Take any repeated workflow — a weekly review, a content creation pipeline, a project kickoff process — and write out every atomic step. Then, for every pair of steps, ask the dependency question: does step B require something that step A produces?
Draw the result as a network. Steps with dependencies get arrows between them. Steps without dependencies sit on parallel tracks. The longest chain of connected steps is your critical path. Everything not on the critical path can be rearranged, delayed, or parallelized without affecting total duration.
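The network can be flattened into "waves": each wave holds the steps whose dependencies are already satisfied, so everything within a wave can run in parallel. A sketch with an invented writing workflow:

```python
# Hypothetical workflow: step -> the steps it depends on.
deps = {
    "outline": set(),
    "research": set(),
    "design_cover": set(),
    "draft": {"outline", "research"},
    "edit": {"draft"},
    "publish": {"edit", "design_cover"},
}

def waves(deps):
    # Repeatedly collect every step whose dependencies are all done.
    done, order = set(), []
    while len(done) < len(deps):
        wave = {s for s in deps if s not in done and deps[s] <= done}
        if not wave:
            raise ValueError("cyclic dependency")
        order.append(sorted(wave))
        done |= wave
    return order

for i, wave in enumerate(waves(deps), 1):
    print(i, wave)  # six steps collapse into four waves
```

Six steps collapse into four waves: the first wave runs three steps at once, and the number of waves — not the number of steps — is what the calendar feels.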
Most people who do this exercise for the first time are surprised by two things. First, the number of steps that have no actual dependency on any other step. They were performed sequentially out of habit, not necessity. A writer who always outlines, then researches, then drafts, then edits may discover that research and outlining can happen in parallel — they inform each other, but neither strictly requires the other's completed output. A manager who always reviews email, then checks the project dashboard, then writes the standup update may realize that the dashboard check and the email review are completely independent activities.
Second, people are surprised by how short the true critical path is relative to the total number of steps. A twenty-step workflow might have a critical path of only six steps, meaning fourteen of the twenty steps can float freely around the schedule without affecting total time. This does not mean those fourteen steps are unimportant. It means they are not the constraint. Optimizing them further will not make the workflow faster.
The mistake of false parallelism
The opposite error is equally destructive. Not everything that appears independent is truly independent. Two steps may share no formal input-output relationship but still interfere with each other when performed simultaneously.
Cognitive tasks are especially prone to this problem. You might determine that "write the introduction" and "design the data visualizations" have no dependency — neither requires the other's output. But if both tasks require your full creative attention, they cannot run in parallel within a single mind. The constraint is not the task dependency but the resource dependency — both tasks compete for the same cognitive bandwidth.
This is the difference between task parallelism and resource parallelism. Task parallelism asks whether two steps can logically happen at the same time. Resource parallelism asks whether you have the capacity to execute them at the same time. A step that requires physical presence cannot run in parallel with another step that also requires physical presence in a different location. Two deep-thinking tasks cannot run in parallel within a single brain.
The practical implication is that dependency mapping must account for both types of constraints. When you map your workflow, draw arrows not only for input-output dependencies but also for resource dependencies. Two steps that share a scarce resource — your attention, a specific tool, a collaborator's time — are effectively sequential even if their outputs are unrelated.
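A resource check can sit alongside the input-output check. In this sketch (step and resource names are invented), two steps may overlap only if they occupy no common scarce resource:

```python
# Hypothetical steps mapped to the scarce resources they occupy.
resources = {
    "write_intro":   {"my_attention"},
    "design_charts": {"my_attention"},
    "brew_coffee":   {"coffee_machine"},
}

def can_overlap(a, b):
    # Even with no input-output dependency, two steps that occupy the
    # same scarce resource are effectively sequential.
    return not (resources[a] & resources[b])

print(can_overlap("write_intro", "design_charts"))  # False — both need you
print(can_overlap("write_intro", "brew_coffee"))    # True
```

The two writing tasks share no artifacts, yet they fail the resource test — which is exactly why they belong on the same sequential track in an honest map.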
Why defaults are almost always too sequential
Human beings have a strong cognitive bias toward sequential thinking. When we imagine a process, we imagine it as a story — and stories have a beginning, a middle, and an end. Events happen one after another. This narrative structure is useful for communication and comprehension, but it is a poor model for execution. Most real-world processes are networks, not lines. Events do not happen one after another because they must. They happen one after another because we defaulted to listing them that way and never questioned the sequence.
This bias is reinforced by every tool we use for planning. To-do lists are linear. Numbered instructions are linear. Even most project management software defaults to a linear task list, requiring explicit effort to define parallel tracks and dependencies. The tool shapes the thinking: if your planning tool shows you a list, you plan a sequence. If your planning tool showed you a network, you would plan a network.
The correction is to treat "sequential by default" as a workflow smell — a signal that the workflow has not been analyzed for dependencies. Every time you find yourself listing steps one through ten, stop and ask: which of these actually depend on which? The answer, more often than not, is that steps two through four can happen in parallel, steps five and six are genuinely sequential, and steps seven through ten can happen in any order. The list made them look like a chain. The dependency analysis reveals they are a web.
The third brain: AI and dependency analysis
AI tools introduce a new dimension to the sequential-versus-parallel question. A language model can perform certain types of cognitive work — research, summarization, drafting, formatting — in what is effectively parallel to your own cognitive work. While you are analyzing data, an AI can be drafting the boilerplate sections of a report. While you are in a meeting, an AI can be processing the notes from the previous meeting. The dependencies that matter are between your judgment and the AI's output, not between the AI's tasks and your tasks.
This means that workflows augmented by AI have a fundamentally different dependency structure than workflows performed by a single person. Steps that were sequential because they competed for your attention — "think about the strategy, then write the email, then format the presentation" — can be partially parallelized by delegating the mechanical components to an AI while you focus on the components that require your judgment.
But Amdahl's Law still applies. The sequential portion of the workflow — the part that requires your irreplaceable judgment, your presence, your decision — still sets the ceiling. AI can compress the parallel portions, sometimes dramatically. It cannot compress the sequential portions, because those portions are sequential precisely because they depend on something only you can produce. The strategic decision that must precede the implementation. The creative vision that must precede the execution. The ethical judgment that must precede the action.
The sovereign use of AI in workflow design is to map your dependencies honestly, identify which steps are sequential because they require your judgment and which are sequential only because you lacked the capacity to parallelize them, and then use AI to parallelize the latter category while preserving the former. The workflow becomes faster without becoming less yours.
From sequence to network
The shift from sequential to parallel thinking is a shift from seeing your workflow as a line to seeing it as a network. In a line, everything waits for everything else. In a network, only the genuinely dependent steps wait, and everything else moves forward simultaneously. The total time drops to the length of the critical path, not the sum of all steps.
This shift requires one uncomfortable admission: you have been wasting time. Not by being lazy, but by being sequential when you did not need to be. Every workflow you have ever run as a strict sequence, when it contained parallel-capable steps, took longer than it needed to. The habit of sequentiality is so deep that most people do not even recognize it as a choice. It feels like "the way things are done."
It is not. It is a default that can be overridden by a single question — does this step require the output of a previous step? — applied with honesty and rigor to every workflow you own.
The next lesson introduces workflow checkpoints — the points where you stop, verify, and decide whether to continue. Checkpoints are most critical at exactly the places where this lesson's analysis matters most: the junctions where parallel tracks converge, where the outputs of independent steps must be integrated, and where an error in one track can propagate into the combined result. You cannot design checkpoints well until you know where your parallel tracks are. And you cannot know where your parallel tracks are until you ask the dependency question.
Frequently Asked Questions