Core Primitive
Regularly review all active commitments to ensure they still deserve your resources.
You built the tools. You are not using them.
Over the last seventeen lessons, you assembled a complete commitment architecture. You learned that commitments need structure to survive (Commitment without structure fails). You installed pre-commitment devices and commitment devices (Pre-commitment eliminates in-the-moment choices, Commitment devices). You leveraged public accountability and written declarations (Public commitments create accountability, Written commitments outperform mental commitments). You created implementation intentions that automate execution (The implementation intention), stacked new behaviors onto existing ones (Commitment stacking), scoped commitments tightly enough to execute (Commitment scope matters), and built a budget to prevent overcommitment (The commitment budget). You examined the psychological patterns that drive chronic overcommitment (Overcommitment is a pattern not an accident), confronted the sunk cost trap (The sunk cost trap in commitments), defined exit criteria so you know when to leave (Commitment exit criteria), and established a renewal practice so commitments stay chosen rather than habitual (Renewing commitments deliberately). You explored the relationship between commitment and identity (Commitment and identity), learned to start small with micro-commitments (Micro-commitments for big goals), built rituals that reinforce your commitment architecture (Commitment rituals), and developed a protocol for recovering when commitments break (Recovery from broken commitments).
That is seventeen tools. And here is the uncomfortable truth: without a recurring practice that activates all of them, most of those tools are sitting in your cognitive toolbox unused. You built the budget but you are not checking it. You wrote the exit criteria but you are not reviewing them. You know about the renewal question but you are not asking it. The tools exist. The practice of using them does not.
This lesson is about that practice. The commitment review is the operational heartbeat of your commitment architecture — a regular, structured session where you audit your entire commitment portfolio against the standards you have already established. It is to your commitment system what a weekly financial review is to a household budget, what a code review is to a software codebase, or what a GTD weekly review is to a task management system. Without it, entropy wins. Commitments drift, decay, bloat, and multiply while you look the other way.
Why periodic review is a structural necessity, not a personal preference
The case for regular review is not motivational. It is structural. Your commitment landscape changes continuously, and any system that manages a changing landscape without periodic reassessment will drift into incoherence.
Peter Drucker articulated this in his concept of "systematic abandonment," described across several works including Management: Tasks, Responsibilities, Practices (1973). Drucker argued that every organization should periodically ask of every process, product, and policy: "If we were not already doing this, would we start it now?" This is the zero-based question from Renewing commitments deliberately, but Drucker was making a deeper structural point: the question needs to be asked on a schedule, not as a one-time exercise. Without a schedule, the question never gets asked, because the status quo generates no triggers for reassessment. The commitment that is slowly decaying does not announce itself. It does not send you an alert when it passes from "worth keeping" to "not worth the cost." It just sits there, consuming resources, until either a crisis forces your hand or you review it deliberately.
David Allen's Getting Things Done methodology includes a weekly review as one of its five core phases, and Allen is unequivocal about its importance: the weekly review is "the critical success factor" of the entire system. In Getting Things Done (2001, revised 2015), he writes that the review is where you recapture control — where you move from reacting to what is in front of you to proactively managing your commitments from a position of perspective. Allen's observation is that people who adopt GTD and skip the weekly review get roughly 20 percent of the system's value. The review is not a nice-to-have add-on. It is the mechanism that makes all the other components function as a system rather than a collection of disconnected techniques.
The same principle holds for your commitment architecture. Without periodic review, each tool operates in isolation. You might check your budget occasionally. You might glance at your exit criteria when something goes obviously wrong. You might ask the renewal question about one or two commitments when the mood strikes. But you will not do all of these things, for all of your commitments, in a single session that gives you a complete picture of your portfolio. And that complete picture is exactly what you need, because commitments interact. Adding one commitment changes the budget available for every other commitment. A scope drift in one area reallocates cognitive resources away from another area. An unacknowledged exit criterion in one domain leaks anxiety that degrades performance across all domains.
The commitment review is the practice that surfaces these interactions. It forces you to look at the whole portfolio, not just the commitment that is loudest or most urgent at the moment.
The architecture of a commitment review
A commitment review is not a vague "reflect on your commitments" exercise. It is a structured protocol with specific inputs, specific questions, and specific outputs. Here is the architecture.
Inputs. You need three things before you begin. First, your complete commitment inventory — every active commitment across every domain of your life. If you built this during The commitment budget, update it. If you did not, build it now. Second, your exit criteria — the predefined conditions from Commitment exit criteria that define when a commitment should end. Third, your commitment budget — the capacity model from The commitment budget that defines how much you can sustain. These three documents form the information substrate of the review. Without them, you are reviewing from memory, which means you are reviewing from a distorted, incomplete, and optimistically biased representation of reality.
The five review questions. For each commitment in your inventory, ask these five questions in order:
Is this commitment still within scope? Compare the commitment as it currently operates against the five dimensions you defined in Commitment scope matters: trigger, behavior, threshold, context, and horizon. Commitments scope-creep silently. A weekly meeting that was supposed to be thirty minutes is now an hour. A writing practice that was "500 words before work" has become "produce a polished blog post." A health commitment that was "walk 20 minutes at lunch" has expanded to include diet tracking, supplement research, and workout program evaluation. Scope drift is the most common form of commitment decay, and it is invisible without deliberate inspection.
Is this commitment within budget? Does it fit within your current capacity — not your peak capacity, but your realistic capacity given this week's actual circumstances? Cross-reference against your commitment budget. If your total portfolio exceeds your budget, this commitment is contributing to the overspend. That does not mean it gets cut automatically. It means the overspend must be resolved, and this commitment is a candidate for reduction, deferral, or release.
Have any exit criteria been triggered? Check the specific, observable conditions you defined in Commitment exit criteria. This is the question most people skip, because confronting a triggered exit criterion is emotionally uncomfortable. But the exit criteria exist precisely for moments like this — moments when your in-the-moment self would prefer to look away. If a criterion is met, the review is where you acknowledge it and initiate the appropriate response.
Would I re-enter this commitment today? This is the zero-based renewal question from Renewing commitments deliberately. Knowing what you know now — about yourself, about the commitment, about what else you could do with these resources — would you start this commitment if you were not already in it? If yes, the commitment is renewed. If no, it is flagged for release. If partially, it is flagged for renegotiation.
How did this commitment enter my portfolio? This is the pattern-awareness question from Overcommitment is a pattern not an accident. Did this commitment arrive through deliberate choice, or through people-pleasing, FOMO, identity attachment to busyness, the planning fallacy, or the future-time slack illusion? Commitments that entered through a pattern rather than a decision deserve extra scrutiny during review, because the same pattern that put them there will resist letting them go.
Outputs. The review should produce three concrete outputs. A list of commitments that are confirmed and renewed — you are keeping them, with fresh intention. A list of commitments flagged for action — these need to be renegotiated, deferred, released, or re-scoped. And an updated budget assessment — a clear picture of whether you are within capacity or running a deficit, and what needs to change if the answer is the latter.
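The inputs, five questions, and three outputs above can be sketched as a small program. This is a minimal illustration, not a prescribed tool: the data model, field names, and decision rules are all assumptions about how you might encode your own inventory.

```python
from dataclasses import dataclass

# Illustrative data model for one commitment in the inventory.
# Field names and types are assumptions, not a prescribed format.
@dataclass
class Commitment:
    name: str
    hours_budgeted: float   # weekly hours allocated in the commitment budget
    hours_actual: float     # weekly hours actually spent
    in_scope: bool          # still matches trigger/behavior/threshold/context/horizon
    exit_triggered: bool    # has any predefined exit criterion been met?
    would_reenter: str      # zero-based renewal answer: "yes" | "no" | "partially"

def run_review(inventory: list[Commitment], weekly_capacity: float):
    """Run the review questions over the portfolio; return the three outputs."""
    renewed, flagged = [], []
    for c in inventory:
        if c.exit_triggered or c.would_reenter == "no":
            flagged.append((c.name, "release"))
        elif not c.in_scope or c.would_reenter == "partially":
            flagged.append((c.name, "renegotiate or re-scope"))
        else:
            renewed.append(c.name)
    total = sum(c.hours_actual for c in inventory)
    budget_assessment = {
        "hours_used": total,
        "capacity": weekly_capacity,
        "deficit": max(0.0, total - weekly_capacity),
    }
    return renewed, flagged, budget_assessment
```

The point of the sketch is the shape of the protocol: every commitment passes through the same gates in the same order, and the session ends with exactly three artifacts, not a vague sense of having reflected.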
The cadence question: how often to review
The optimal review frequency depends on the volatility of your commitment landscape, but the research on self-regulation and monitoring offers useful guidance.
Albert Bandura's social cognitive theory emphasizes self-monitoring as one of three core processes of self-regulation, alongside self-evaluation and self-reaction. In his 1991 paper "Social Cognitive Theory of Self-Regulation," Bandura argued that self-regulation fails without systematic self-observation — you cannot manage what you do not track. The frequency of monitoring needs to be high enough to detect drift before it becomes entrenched, but low enough to avoid monitoring fatigue.
For most people, a weekly cadence is the right starting point. This aligns with Allen's GTD weekly review, with the natural weekly rhythm of work and rest, and with the practical reality that most commitments operate on a weekly cycle — things you do daily, things you do on certain days, things with weekly deadlines. A weekly review is frequent enough to catch scope drift, budget overruns, and triggered exit criteria before they compound into crises. And it is infrequent enough that the review itself does not become an oppressive meta-commitment.
There are exceptions. If you are in a period of intense change — starting a new job, ending a relationship, recovering from illness, launching a project — a daily five-minute check-in on your three highest-priority commitments may be appropriate, with the full portfolio review remaining weekly. If your commitment landscape is extremely stable — the same five well-scoped commitments running on strong structural supports — you might extend to biweekly or even monthly. But if you have never done a commitment review before, start weekly. You will be surprised how much drift accumulates in seven days.
The cadence itself should be treated as a commitment, which means it needs the full architectural treatment: a specific trigger (Sunday at 4 PM, Friday after the last meeting, Saturday morning with coffee), a specific behavior (open the commitment document, run the five questions), a time threshold (30-60 minutes), a context (the same desk, the same chair, the same playlist if that helps), and a horizon (commit to the weekly cadence for 8 weeks before evaluating whether to adjust).
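Written down as data, the fully scoped review commitment is just five fields. The values below are illustrative placeholders, not recommendations:

```python
# The review cadence itself, specified as a fully scoped commitment.
# Every value here is an example; substitute your own trigger, context, etc.
review_commitment = {
    "trigger":   "Sunday at 4 PM",
    "behavior":  "open the commitment document and run the five questions",
    "threshold": "30-60 minutes",
    "context":   "home desk, phone in another room",
    "horizon":   "weekly for 8 weeks, then evaluate the cadence",
}
```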
What makes reviews fail
With the structure of an effective review in place, it is worth examining how reviews break down, because the failure modes are predictable and preventable.
The rubber stamp. You go through the motions without genuine interrogation. Every commitment gets a quick "yep, still good" and the review is done in ten minutes. This happens when the review becomes a checkbox rather than an inquiry. The antidote is to require yourself to write a one-sentence justification for every commitment you keep. If you cannot articulate why you are choosing this commitment right now, you are rubber-stamping rather than reviewing.
The guilt spiral. You discover that you are over budget, failing at multiple commitments, and carrying several that should have been released months ago. Instead of treating this as useful data, you collapse into self-recrimination. The review becomes evidence of your inadequacy rather than a tool for correction. This connects directly to the failure mode identified in Overcommitment is a pattern not an accident: diagnosis without intervention is just more sophisticated suffering. The review's purpose is to produce action items, not emotional verdicts. If guilt is the primary output, the review is malfunctioning.
Scope avoidance. You review some commitments but skip the ones that would be most uncomfortable to examine. The failing relationship, the stalled project, the role you hate — these are exactly the commitments that need review most urgently, and they are exactly the ones your brain will try to skip. One structural defense is to review commitments in a fixed order — alphabetical, by domain, or by the order they appear in your inventory — so that you cannot selectively avoid the hard ones.
Review without follow-through. The review surfaces three commitments that need action. You note them. You close the document. You do nothing. By next week's review, the same three commitments appear again, unchanged. This is the commitment equivalent of reading about exercise but never going to the gym. The review must include a commitment to act on its outputs — specifically, a deadline by which each flagged commitment will be addressed. A review that produces insights but not actions is an intellectual exercise, not a management practice.
The compound effect of consistent review
The first few commitment reviews are the hardest. Your inventory is messy, your exit criteria may be incomplete, and the review surfaces uncomfortable truths about commitments you have been avoiding. The sessions may run long. The emotional cost may feel disproportionate to the benefit.
But review, like any well-scoped commitment, compounds. Roy Baumeister and colleagues' research on self-regulation as a learnable skill suggests that the monitoring capacity itself strengthens with practice. The first review requires substantial conscious effort. By the tenth, the five questions are automatic — you run them without deliberation, the way a pilot runs a preflight checklist. By the twentieth, the review has reshaped how you encounter new commitment opportunities in the first place. You find yourself asking the five questions before you say yes, not just after.
This is the phase transition that consistent review produces: you shift from reactive management to proactive design. Instead of periodically auditing a portfolio that grew haphazardly, you begin curating a portfolio that reflects your actual priorities. Each review makes the next one faster and more precise, because the portfolio gets cleaner with each pass. Zombie commitments get released. Scope-drifted commitments get re-scoped. Budget overruns get resolved. The system converges toward a state where every active commitment has been deliberately chosen within the last review cycle.
The research on implementation intentions supports this compound effect. Gollwitzer and Sheeran's 2006 meta-analysis found that implementation intentions have a medium-to-large effect on goal attainment (d = 0.65). When you set a specific implementation intention for your review — "Every Sunday at 4 PM, I will sit at my desk and open my commitment document" — you are applying the same mechanism to your meta-practice that you have been applying to your individual commitments. The review becomes as automatic as any other well-structured behavior in your system.
The financial audit analogy and why it holds
The analogy between a commitment review and a financial audit is not just illustrative — it is structurally precise, and understanding the parallels clarifies what a commitment review must accomplish.
A financial audit serves four functions. It verifies that the books match reality — that what you think you own and owe is what you actually own and owe. It identifies discrepancies — expenses you forgot about, income that never arrived, accounts that are hemorrhaging value. It enforces accountability — when someone is looking at the numbers regularly, careless spending decreases. And it informs future allocation — you cannot make good investment decisions without accurate data about your current position.
Your commitment review performs the same four functions. It verifies your commitment inventory against reality — you check whether you are actually doing what you say you are doing. It identifies discrepancies — commitments you are failing at, exit criteria you are ignoring, budget overruns you have been rationalizing. It enforces accountability — the knowledge that you will confront your entire portfolio next Sunday changes how you make commitment decisions during the week. And it informs future allocation — you cannot decide whether to take on a new commitment without accurate data about your current capacity.
Nobody runs a business without financial audits. Nobody manages a portfolio without performance reviews. Yet most people manage their personal commitment portfolio — which is arguably the most consequential portfolio they hold, because it determines how they spend their finite life — with zero systematic review. The commitment review closes this gap.
What changes when AI enters the review
AI tools transform the commitment review from a purely manual exercise into a hybrid system where the bookkeeping is automated and the judgment remains yours.
The most valuable AI role in a commitment review is pattern detection across time. A single review shows you the current state. A series of reviews, tracked in a shared document that your AI system can access, reveals trends. The AI can tell you that a specific commitment has been flagged "renegotiate" for three consecutive reviews without any renegotiation actually occurring. It can detect that your budget utilization spikes every September (back-to-school chaos, Q3 deadlines, whatever the pattern is for your life). It can notice that commitments in your creative domain are chronically under-resourced while commitments in your professional domain absorb everything — a structural misallocation that is invisible in any single review but obvious across a quarter of data.
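The "flagged for three consecutive reviews" check is simple enough to express directly, with or without an AI in the loop. A minimal sketch, assuming each week's review output is logged as a mapping from commitment name to the action flagged (or None if renewed):

```python
def chronic_flags(review_history, min_streak=3):
    """Find commitments flagged with the same action in the most recent
    `min_streak` consecutive reviews without the action being taken.

    review_history: list of dicts, oldest first; each dict maps a
    commitment name to the action flagged that week, or None if renewed.
    The log format is an assumption; adapt it to however you track reviews.
    """
    results = {}
    all_names = {name for review in review_history for name in review}
    for name in all_names:
        streak, last_action = 0, None
        for review in reversed(review_history):  # walk back from the latest
            action = review.get(name)
            if action is None:                   # renewed or absent: streak ends
                break
            if last_action is None or action == last_action:
                last_action = action
                streak += 1
            else:
                break
        if last_action and streak >= min_streak:
            results[name] = (last_action, streak)
    return results
```

This is the kind of bookkeeping worth automating: the trend is invisible in any single review, but trivial to surface once the outputs are logged consistently.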
You can also use AI to prepare for the review by pre-populating the five questions with data you have been logging. If you track your time, the AI can calculate actual hours spent per commitment versus budgeted hours. If you journal, it can surface entries where you expressed frustration, energy drain, or conflict related to specific commitments. The review session itself becomes faster and more focused, because the diagnostic data is already assembled when you sit down.
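The actual-versus-budgeted calculation is equally mechanical. A sketch, assuming a time log of (commitment, hours) entries and a budget of weekly hour allocations; both formats are assumptions about how you track, not a required schema:

```python
def hours_vs_budget(time_log, budget):
    """Summarize actual vs. budgeted weekly hours per commitment.

    time_log: list of (commitment_name, hours) entries from a time tracker.
    budget:   dict mapping commitment_name to budgeted weekly hours.
    """
    actual = {}
    for name, hours in time_log:
        actual[name] = actual.get(name, 0.0) + hours
    report = {}
    for name, budgeted in budget.items():
        spent = actual.get(name, 0.0)
        report[name] = {
            "budgeted": budgeted,
            "actual": spent,
            "variance": round(spent - budgeted, 2),  # positive = overspend
        }
    return report
```

A positive variance on a commitment you keep rubber-stamping is exactly the diagnostic data you want assembled before you sit down to run the five questions.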
But the judgment calls — whether to keep, renegotiate, or release a commitment — must remain with you. The AI has no values. It has no felt sense of what matters. It cannot tell you whether the board seat that is draining your energy is worth keeping because the mission aligns with your deepest purpose. It can tell you that you have described it as "draining" in four out of your last six journal entries. What you do with that information is a human decision, informed by data but driven by judgment.
The ideal division of labor: let the AI manage the information architecture — inventory tracking, budget calculations, pattern detection, trend analysis. Reserve the five review questions for yourself. The questions are where sovereignty lives. They are the mechanism through which you maintain authorship of your commitment portfolio rather than letting it be authored by inertia, guilt, and habit.
From review to alignment
The commitment review you have now built does something more than maintain hygiene. It generates a dataset about what you actually value — not what you say you value, not what you think you should value, but what you consistently choose to invest your finite resources in when you are thinking clearly and looking at the complete picture.
Over multiple review cycles, a pattern emerges. Some commitments survive every review with strong justifications. They are easy to renew, they fit the budget naturally, their scope stays clean, and they never trigger exit criteria. These are your high-signal commitments — the ones that reveal what genuinely matters to you. Other commitments require constant renegotiation, chronic budget accommodation, and elaborate justifications to survive each review. These are the commitments where there is friction between what you are doing and what you care about.
That friction is the subject of the next lesson: the alignment between commitments and values (Alignment between commitments and values). The commitment review generates the raw data. The alignment analysis interprets it. Together, they form the feedback loop that ensures your commitment architecture is not just well-managed but well-directed — pointed toward the things that matter most to the person you are becoming, not the person you were when you first said yes.
Your commitment portfolio is the most honest autobiography you will ever write. It records where your hours go, what occupies your mind, and what you choose when you have to choose. The review is where you read that autobiography regularly enough to ensure it is telling the story you intend.
Frequently Asked Questions