Core Primitive
Start with the simplest version that works and add complexity only when needed.
You do not have a workflow problem. You have a starting problem.
By this point in Phase 41, you have accumulated serious conceptual machinery. You know that a workflow is a repeatable sequence of steps. You know how to document one. You understand triggers, atomic steps, the distinction between sequential and parallel execution, how to build checkpoints, and how to create reusable templates. Seven lessons of infrastructure, each one adding a layer of precision and sophistication to your understanding of personal process design.
And here is the danger: you now know enough to design a workflow so thorough, so well-structured, so comprehensively checkpointed and elegantly templated that you never actually run it.
This is not a hypothetical failure. It is the most common one. The person who reads about personal productivity systems and spends three weekends configuring the perfect task manager before entering a single task. The aspiring writer who researches outlining methods, drafting techniques, revision frameworks, and publishing workflows — and six months later has not written a paragraph. The knowledge worker who designs an elaborate weekly review process with twelve sections, four checklists, and a scoring rubric, then abandons it after one attempt because the review itself takes longer than the work it was supposed to review.
The previous seven lessons gave you the tools to build complex workflows. This lesson teaches you when not to use them — or more precisely, when to use them later. The principle is simple, but its implications run deep: start with the simplest version that works, and add complexity only when reality demands it.
Gall's Law and the birth of working systems
In 1975, John Gall published "Systemantics," a book that reads like satire but operates as serious systems theory. Among its many observations, one has achieved the status of a design law: "A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."
Gall was writing about institutional systems — bureaucracies, organizations, engineered infrastructure — but the principle applies with equal force to personal workflows. Every reliable complex workflow you encounter in the world was once a simple workflow that someone ran, observed, adjusted, and gradually expanded. The reverse path — designing a complex workflow on paper and expecting it to function on first execution — violates something fundamental about how working systems come into existence.
The reason is not mysterious. A complex workflow designed in advance embeds dozens of assumptions about how execution will actually unfold: which steps will take how long, what information will be available at each stage, where errors are likely to occur, which transitions will feel natural and which will create friction. Every one of these assumptions is a hypothesis. And hypotheses that have not been tested by reality are, statistically, mostly wrong. Not because you are a poor designer, but because reality contains more variables than any advance design can account for.
A simple workflow, by contrast, embeds very few assumptions. It makes contact with reality quickly. Its failures are visible immediately. Its successes are genuine — not theoretical but demonstrated. And each piece of complexity you add to it after that initial contact is informed by evidence rather than speculation. The complexity is earned, not assumed.
The Lean Startup principle, applied inward
Eric Ries published "The Lean Startup" in 2011, and while the book is aimed at technology entrepreneurs, its central concept — the Minimum Viable Product — articulates a principle that applies wherever someone is building something new under conditions of uncertainty.
A Minimum Viable Product is not a bad product. It is not a half-finished product. It is the smallest version of a product that generates real learning. The MVP exists not to impress but to inform. You build it, ship it, observe how it performs in contact with reality, and use that observation to guide your next iteration. The alternative — building the complete, polished, feature-rich version before anyone uses it — is what Ries calls "achieving failure": successfully executing a flawed plan. You invest months or years into a product shaped by your assumptions, launch it, and discover that your assumptions were wrong in ways you could not have predicted from inside the design process.
Your personal workflows operate under the same conditions of uncertainty. When you design a workflow for a task you have never formalized, you do not actually know which steps are essential and which are ornamental. You do not know how long each step will take in practice. You do not know where the friction will appear. You do not know which transitions will feel natural and which will create resistance that tempts you to abandon the entire process. The only way to discover these things is to run the workflow. And the fastest way to run it is to build the minimum viable version — the smallest number of steps that produce a usable output — and let execution teach you what the design could not.
A minimum viable workflow for writing a weekly newsletter might be: decide topic, draft, publish. Three steps. No revision phase. No template. No editorial calendar. No social media cross-posting. Not because those things are unimportant, but because you do not yet know which of them matters most for your specific newsletter, your specific audience, your specific constraints. After four weeks of executing the three-step version, you will know. You will discover that your drafts consistently need a cooling period before you can see their problems, so you add a revision step. Or you will discover that choosing the topic takes longer than writing, so you add a weekly capture step to accumulate topic candidates. Each addition is a response to a real observation. Each addition solves a problem you actually experienced, not one you imagined.
YAGNI: the discipline of not building ahead
Extreme Programming, the software development methodology formulated by Kent Beck and others in the late 1990s, codified this principle under the acronym YAGNI: "You Aren't Gonna Need It." The principle states that you should not add functionality until you have a demonstrated requirement for it. Not a projected requirement. Not a "what if" requirement. A demonstrated one — a specific situation where the absence of that functionality caused a concrete problem.
YAGNI is harder to practice than it sounds, because the human mind is a speculation machine. When you sit down to design a workflow, your brain immediately generates a cascade of hypothetical scenarios: What if I need to handle an exception? What if the input varies? What if I want to share this with someone else? What if the context changes? Each scenario feels plausible. Each one suggests an additional step, an additional branch, an additional checkpoint. By the time you have addressed every hypothetical, your three-step workflow has become a fifteen-step workflow with conditional branches and edge case handling, and you have not executed it once.
The discipline of YAGNI is the discipline of tolerating incompleteness. It requires you to look at a workflow that you know is missing features and choose to run it anyway. Not because you do not care about quality, but because you understand that the features you add after execution will be better than the features you add before it. Pre-execution features are shaped by imagination. Post-execution features are shaped by evidence. Evidence wins.
Kent Beck captured the iterative philosophy in a sequence that has become foundational in software craft: "Make it work, make it right, make it fast." The order is not arbitrary. First, get something functioning — a version that produces output, however inelegant. Then, make it correct — restructure, refine, fix the parts that are clumsy or error-prone. Then, and only then, make it efficient — optimize for speed, reduce friction, automate where appropriate. People who design elaborate workflows before running them are trying to make it right and fast before they have made it work. They are optimizing a system that does not yet exist.
Premature optimization and the workflow designer's trap
Donald Knuth, the computer scientist whose multivolume "The Art of Computer Programming" is one of the field's foundational texts, observed that "premature optimization is the root of all evil." Knuth was writing about software performance — about programmers who spend hours shaving microseconds from code paths that run once a day while ignoring bottlenecks in code paths that run millions of times. The insight generalizes: optimizing before you know where the real problems are wastes effort on the wrong targets and creates complexity that obscures the genuine issues.
In workflow design, premature optimization takes recognizable forms. You create detailed templates before knowing which fields you will actually fill in. You build automation for steps you have not yet performed manually. You design elaborate filing systems for outputs you have not yet produced. You add checkpoints at every transition even though you do not yet know which transitions are failure-prone. Each of these actions feels productive — you are "building infrastructure" — but each one is a bet placed without information. And the aggregate effect is a workflow so laden with infrastructure that its weight exceeds its value.
The Pareto principle offers a useful lens here. Vilfredo Pareto's observation — that roughly 80 percent of effects come from 20 percent of causes — has been validated across domains far beyond the Italian land ownership data where Pareto first noticed it. Applied to workflow design, the principle suggests that a small subset of your workflow steps generates the vast majority of your workflow's value. The question is which subset. And you cannot answer that question from the design phase. You can only answer it from execution data. Which steps, when performed, most reliably produce the output you care about? Which steps, when skipped, cause the biggest problems? The answers will surprise you. They always do.
This is why the minimum viable workflow is not an inferior version of the "real" workflow. It is the instrument that reveals what the real workflow should be. The three-step version is not a compromise. It is a research tool. Every execution generates data. Every piece of data informs a design decision. Every design decision produces a workflow that is better adapted to your actual context than any workflow you could have designed in advance.
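The execution data in question can be as crude as a tally. A minimal sketch, with entirely invented steps and numbers, of how a few runs separate essential steps from ornamental ones:

```python
from statistics import mean

# Invented tally: for each step, whether skipping it caused a problem
# you actually noticed on that run (1 = problem, 0 = no difference).
skip_cost = {
    "choose topic":  [1, 1, 1, 1],  # skipping always derailed the run
    "draft":         [1, 1, 1, 1],
    "format nicely": [0, 0, 1, 0],  # rarely mattered
    "cross-post":    [0, 0, 0, 0],  # never mattered, so far
}

# Steps whose absence caused problems at least half the time.
essential = [step for step, log in skip_cost.items() if mean(log) >= 0.5]
print(essential)  # ['choose topic', 'draft']
```

The point is not the arithmetic but its source: every number comes from a run that actually happened, which is exactly what the design phase cannot supply.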
The anatomy of a minimum viable workflow
A minimum viable workflow has exactly three properties.
It has a trigger — a specific condition that activates it. Not a vague intention to "do this sometime" but a concrete cue: a time, an event, a threshold. The trigger does not need to be sophisticated. "After I pour my morning coffee" is a perfectly adequate trigger for a journaling workflow. Adequacy, not elegance, is the standard.
It has a sequence of the fewest steps that produce a usable output. "Usable" is the key adjective, not "optimal," not "polished," not "comprehensive." The output must be functional — something that serves its intended purpose, even if crudely. A first draft that captures your core argument, even if the prose is rough. A meal plan that covers five dinners, even if the recipes are simple. A meeting summary that captures decisions and action items, even if the formatting is inconsistent. Usable means you can act on the output. It does not mean you would be proud to frame it.
It has a completion criterion — a way to know when you are done. The criterion should be binary, not subjective. "Draft is written" is binary. "Draft is good" is subjective and invites the kind of recursive evaluation that turns a thirty-minute task into a three-hour perfectionism spiral. The minimum viable workflow terminates when the output exists, not when the output is perfect.
That is it. Trigger, steps, completion. No templates, no checkpoints, no automation, no exception handling, no conditional branches. Those can all come later. But "later" means after execution, after observation, after evidence. Not before.
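The three properties can be written down as a structure, which makes the constraint concrete. A minimal sketch in Python, with every name and number illustrative rather than prescriptive:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MinimumViableWorkflow:
    """The three properties, and nothing else. All names illustrative."""
    trigger: str                 # concrete cue: a time, an event, a threshold
    steps: List[str]             # fewest steps that yield a usable output
    is_done: Callable[[], bool]  # binary completion criterion, never subjective

    def validate(self) -> None:
        # The design-phase discipline: a vague trigger or a long step
        # list means you are building ahead of the evidence.
        assert self.trigger, "a workflow needs a concrete trigger"
        assert 1 <= len(self.steps) <= 3, "start with three steps or fewer"

newsletter = MinimumViableWorkflow(
    trigger="Saturday, 09:00",
    steps=["decide topic", "draft", "publish"],
    is_done=lambda: True,  # 'draft is published', not 'draft is good'
)
newsletter.validate()  # passes: concrete trigger, three steps
```

The `validate` method is the point of the sketch: it refuses a workflow with more than three steps, which is precisely the restraint the design phase needs.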
The emotional resistance to simplicity
There is a psychological dimension to premature workflow complexity that deserves explicit attention, because understanding it makes the resistance easier to recognize and overcome.
Designing an elaborate workflow before running it feels like preparation. It feels responsible. It feels like the kind of thoughtful, thorough approach that a serious person would take. Running a three-step version feels careless by comparison — like you are not taking the task seriously, like you are cutting corners, like you are settling for mediocrity.
This feeling is wrong, but it is powerful. It draws energy from a legitimate value — the desire to do things well — and channels that value into a counterproductive behavior — designing instead of doing. The elaborate design session provides the emotional satisfaction of progress without the vulnerability of execution. You feel like you have accomplished something. You have a template. You have a plan. You have infrastructure. The fact that you have not actually produced any output is easily rationalized: you are investing in the system that will produce output later. The system just needs one more feature. One more refinement. One more pass.
The minimum viable workflow demands that you trade the comfort of preparation for the discomfort of imperfect action. It asks you to produce output that you know is below your eventual standard. It requires you to tolerate the gap between what you shipped and what you are capable of shipping. This gap is not evidence of laziness. It is the price of learning. And the learning it purchases — concrete knowledge of how your workflow actually behaves under real conditions — is worth far more than the psychological comfort of a beautiful plan that has never been tested.
Analysis paralysis in process design
There is a specific failure mode that afflicts people who have learned enough about workflow design to be dangerous but not yet enough to be effective. They understand triggers, atomic steps, checkpoints, templates, sequential versus parallel execution — all the concepts from the first seven lessons of this phase. And when they sit down to design a new workflow, they try to apply all of these concepts simultaneously, from the start.
The result is analysis paralysis. Every design decision opens a cascade of further decisions. Should the steps be sequential or parallel? Where should the checkpoints go? Which template should I use? What exceptions might arise? How should handoffs work? Each question is legitimate. Each one has a good answer. But trying to answer them all before the first execution turns the design process into an infinite recursion — every answer generates new questions, and the workflow grows in complexity without ever growing in actual execution count.
The minimum viable workflow breaks this recursion by imposing a constraint: you do not get to answer most of those questions yet. You get to answer three — what triggers this, what steps do I take, and when am I done. Everything else is deferred. Not abandoned. Deferred. You will return to checkpoints, templates, parallel execution, and exception handling after you have run the simple version and know which of those concepts solves a real problem in this specific workflow. The deferral is not negligence. It is discipline.
Your Third Brain: AI as simplification partner
The tendency to over-design workflows is so universal that having an external perspective during the design phase is genuinely valuable. AI systems are well-suited for this role — not because they are creative, but because they are dispassionate. They do not share your emotional attachment to comprehensiveness.
When you find yourself designing a workflow that is growing beyond five or six steps before its first execution, describe it to an AI assistant and ask a specific question: "Which of these steps can I remove and still produce a usable output?" The AI will identify the steps that are optimizations rather than essentials. It will distinguish between the steps that create the output and the steps that refine the output. It will help you see which steps are driven by actual requirements and which are driven by anxiety about hypothetical failures.
An AI can also serve as a valuable constraint enforcer. Tell it your goal and ask it to design a workflow with no more than three steps. The constraint will force the AI to prioritize ruthlessly, and the result — while almost certainly incomplete — will reveal the core of the workflow more clearly than an unconstrained design ever could. You can then compare the three-step version with your twelve-step version and see exactly where your complexity went. Much of it will turn out to be premature optimization.
After you run the minimum viable version, an AI becomes useful in a different way: as a pattern recognizer. Describe your execution experience — what worked, what failed, what felt awkward, what was missing — and ask the AI to suggest a single improvement for the next iteration. Not five improvements. One. The discipline of single-improvement iterations prevents the creep back toward over-design that naturally occurs when a workflow begins to stabilize. Each iteration adds one piece of earned complexity. The workflow grows organically, shaped by evidence, at a pace your execution can sustain.
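The single-improvement rule is easy to state and easy to violate, so it can help to enforce it mechanically. A minimal sketch, assuming nothing beyond the rule itself; all names are hypothetical:

```python
def iterate(steps, observation, change):
    """Apply exactly one change per iteration, and only with evidence.

    `change` is ('add', step) or ('remove', step). Refusing to iterate
    without an observation keeps complexity earned, not speculative.
    """
    if not observation:
        raise ValueError("no observation, no change: run the workflow first")
    action, step = change
    new_steps = list(steps)
    if action == "add":
        new_steps.append(step)
    elif action == "remove":
        new_steps.remove(step)
    else:
        raise ValueError(f"unknown action: {action}")
    return new_steps

steps = ["decide topic", "draft", "publish"]
# Week 4: drafts consistently read better after a cooling period.
steps = iterate(steps, "drafts need a day's rest before revision",
                ("add", "revise next morning"))
# Four steps now, each one earned by an observed problem.
```

One observation, one change, one new version: the function signature itself encodes the pace at which complexity is allowed to accumulate.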
From minimum viable to minimum effective
There is a subtle but important evolution that occurs after you have run a minimum viable workflow several times. The workflow begins to stabilize. You have added a step here, removed one there, adjusted the trigger, refined the completion criterion. At some point, you reach a version that is no longer the minimum viable — it is the minimum effective. The distinction matters.
The minimum viable workflow is the smallest version that produces any usable output. It exists to generate learning. The minimum effective workflow is the smallest version that reliably produces output at an acceptable standard. It exists to generate consistent results. The gap between the two is closed by iteration — by running the simple version, observing its failures, and adding precisely the complexity needed to address those failures.
The minimum effective workflow is your target state. Not the maximum possible workflow — the one with every checkpoint, every template, every conditional branch you could conceivably add. The minimum effective version. The version where every step earns its place by solving a problem you actually encountered. The version where removing any step would degrade the output below your acceptable standard, but adding any step would add complexity without proportional benefit.
This is where the Pareto principle resolves into practice. Your minimum effective workflow likely contains four to seven steps. Those steps address the 20 percent of potential workflow features that produce 80 percent of the value. The remaining 80 percent of features — the elaborate templates, the exhaustive checkpoints, the comprehensive exception handling — can be added if and when specific evidence demands them. Most of them will never be needed. That is not a failure of design. It is a success of restraint.
The bridge to bottlenecks
Once your minimum viable workflow is running — once you have executed it three times, five times, ten times — a new question becomes available to you. Not "What features should I add?" but "Which step is the slowest?"
This is the question the next lesson addresses: workflow bottlenecks. The concept of a bottleneck only makes sense in the context of a running process. You cannot identify the slowest step in a workflow that has never been executed. You can guess. Your guess will probably be wrong. The step you thought would be fast turns out to be slow. The step you thought would be the bottleneck turns out to be trivial. Reality does not conform to your advance predictions about where friction will appear.
The minimum viable workflow gives you the running process that makes bottleneck identification possible. It is the simplest instrument that can generate the data you need to make your next design decision. That is its purpose. Not to be the final workflow. Not to be the perfect workflow. To be the first workflow — the one that runs, the one that teaches, the one that evolves.
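Once the execution data exists, finding the slowest step is arithmetic, not analysis. A minimal sketch with invented timings:

```python
from statistics import mean

# Invented timing log: seconds per step, recorded over five real runs.
runs = {
    "decide topic": [1800, 2100, 1500, 2400, 1900],
    "draft":        [2700, 2500, 2600, 2800, 2400],
    "publish":      [300, 280, 350, 310, 290],
}

# The bottleneck is simply the step with the largest average duration.
bottleneck = max(runs, key=lambda step: mean(runs[step]))
print(bottleneck)  # 'draft'
```

Three executions are enough to start this table; ten make it trustworthy. No amount of design-phase speculation produces a single row of it.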
Start with three steps. Run them. Listen to what they tell you. Build from there.
Frequently Asked Questions