Core Primitive
Document your tool configurations and workflows so you can recreate your setup.
You will forget how you set this up
Picture the moment clearly. You are staring at a fresh operating system installation. The desktop is default blue. The dock has the factory apps. Your terminal is vanilla. Your text editor is unconfigured. Everything you spent months — possibly years — tuning, adjusting, and perfecting is gone. Not deleted. Just inaccessible, because it lived in the configuration files of a machine that no longer boots.
This moment arrives for everyone. A hard drive fails. A laptop is stolen. A company issues you a new machine. You upgrade your operating system and something breaks irreparably. The trigger varies, but the experience is universal: you sit down to rebuild your working environment and realize you cannot remember how you built it in the first place. The shortcuts you relied on, the plugins that made your workflow seamless, the settings you changed three months ago after reading a blog post whose URL you cannot recall — all of it needs to be reconstructed from memory. And memory, as Hermann Ebbinghaus demonstrated in 1885, is a profoundly unreliable storage medium.
Ebbinghaus's forgetting curve — one of the most replicated findings in experimental psychology — shows that without deliberate review, we lose roughly fifty percent of newly learned information within one hour, seventy percent within twenty-four hours, and ninety percent within a week. You changed that obscure setting in your note-taking app six weeks ago. The probability that you can recall both what you changed and why you changed it is functionally zero. "I will remember this" is not a plan. It is a prediction, and it is almost always wrong.
Documentation as cognitive infrastructure
The primitive of this lesson is deceptively simple: document your tool configurations and workflows so you can recreate your setup. But the concept underneath is richer than it appears. What you are really doing when you document your own configurations is building a layer of cognitive infrastructure — an external memory system that compensates for the biological limitations of the one in your skull.
Evan Risko and Sam Gilbert, cognitive scientists at the University of Waterloo and University College London respectively, have studied what they call "cognitive offloading" — the practice of using external tools and artifacts to reduce the demands on internal cognitive resources. Their 2016 paper in Trends in Cognitive Sciences synthesized decades of research showing that humans routinely and beneficially offload memory, computation, and attention to the environment. Writing a shopping list is cognitive offloading. Setting a calendar reminder is cognitive offloading. And documenting your tool configurations is cognitive offloading of a particularly valuable kind, because the information you are externalizing is both complex (dozens of interdependent settings) and infrequently accessed (you only need it during setup or recovery, which might happen once a year). This is precisely the category of information that internal memory handles worst: detailed, technical, and rarely rehearsed.
Daniel Dennett, the philosopher of mind, took this further with his concept of cognitive tools — the idea that human intelligence is not confined to the brain but extends into the artifacts we create and maintain. In Dennett's framework, a well-maintained documentation file is not merely a backup of information that also exists in your head. It is a functional extension of your cognition. It performs a job that your biological memory cannot reliably perform, and it does so with perfect fidelity across time. Your memory of how you configured your text editor degrades every day. The document does not.
This reframing matters because it changes documentation from an optional nice-to-have into a structural component of your cognitive system. You would not consider your note-taking setup complete without a place to store notes. You should not consider your tool configuration complete without a place to store the documentation of that configuration. The documentation is not about the tool. The documentation is part of the tool — the part that makes the configuration persistent, shareable, and recoverable.
Donald Knuth, the computer scientist who created TeX and wrote The Art of Computer Programming, articulated a related principle in the 1980s with his concept of literate programming. Knuth argued that programs should be written as documents intended to be read by humans, with the executable code embedded inside the explanation. The code and its documentation should be a single artifact, not two separate files that drift apart over time. Knuth was talking about software, but the principle applies directly to your personal tool configurations: the configuration and the explanation of the configuration belong together. When they are separated — settings in one place, reasoning in another (or nowhere) — the reasoning is the first thing lost.
A brief history of documenting your own setup
The practice of personal tool documentation has a surprisingly rich lineage in software culture, and understanding where it came from illuminates why it works.
In the Unix tradition, user configurations have always lived in plain text files — the so-called "dotfiles," named for the period that precedes their filenames (.bashrc, .vimrc, .gitconfig). These files are human-readable, human-editable, and portable. You can copy them from one machine to another and instantly replicate your environment. In the early days of Unix, sharing dotfiles was a form of mentorship: senior developers would hand their .vimrc to junior developers, and the configurations contained not just settings but implicit lessons about how an expert thought about their workflow.
Around 2010, this practice formalized into the "dotfiles movement" on GitHub. Developers began publishing their configuration files in public repositories, version-controlled with Git, often accompanied by installation scripts that could set up an entire development environment on a fresh machine in minutes. Zach Holman's dotfiles repository, published in 2011, became one of the most-forked repositories on GitHub and established a template that thousands of developers adopted. Mathias Bynens's dotfiles, Ryan Bates's dotfiles, and Thoughtbot's dotfiles each attracted thousands of stars and forks. The practice spread because it solved a real and recurring problem: the knowledge of how to configure a productive working environment was too valuable and too complex to trust to memory alone.
The DevOps movement extended this principle from individual workstations to entire server infrastructures. "Infrastructure as Code" — the practice of defining server configurations in version-controlled scripts rather than manually setting up machines — became a professional standard through tools like Ansible, Puppet, Chef, and Terraform. The logic was identical to the personal dotfiles movement, just applied at scale: if your infrastructure exists only as manual configurations on running machines, it is fragile, irreproducible, and one failure away from catastrophe. If it exists as code — documented, version-controlled, testable — it is resilient, reproducible, and recoverable.
Atul Gawande, the surgeon and writer, made a parallel argument in The Checklist Manifesto (2009), drawing on research from aviation and medicine. Complex procedures — whether surgical operations, aircraft preflight checks, or emergency responses — fail when they depend on practitioners remembering every step correctly every time. They succeed when the steps are externalized into checklists that can be followed reliably regardless of memory, fatigue, or stress. Your tool setup is a complex procedure. It has dozens of steps, many of which are interdependent. Documenting it is creating a checklist for your future self — a self who will be stressed (the old machine just died), distracted (work is piling up while the new machine is not ready), and operating with degraded memory (you configured this months or years ago).
What to document and how
The question of what to document has a simple heuristic: document anything that differs from the default and anything whose absence would cost you more than five minutes to rediscover. This captures the essential configurations without requiring you to transcribe every setting in every application.
For each tool in your core stack, your documentation should answer four questions. First, what is installed? The name, the version, and where you got it. This sounds obvious until you are trying to remember whether you installed a tool from its official website, a package manager, a beta channel, or a colleague's recommendation. Second, what is configured? Every setting you changed from the factory default, recorded as a specific value — not "I changed the font" but "Font: JetBrains Mono, 14pt, line height 1.6." Third, why is it configured that way? This is the element most people skip, and it is the most valuable. Six months from now, you will encounter the setting and wonder whether it matters. The reasoning tells you. "Font: JetBrains Mono, 14pt — chosen for ligature support in code and readability at arm's length on the 27-inch monitor" gives your future self the context to decide whether the same choice applies to a different monitor or a different context. Fourth, how do you reproduce it? Is there a configuration file you can copy? A settings export you can import? A script that automates the setup? The reproduction method determines whether your documentation is a reference you read or a recipe you execute.
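The four questions are easy to standardize as a per-tool template you stamp out whenever you add a tool. A minimal sketch in shell (the docs/ directory, the script name, and the default tool name are assumptions, not a prescribed structure):

```shell
#!/bin/sh
# Create a per-tool documentation file answering the four questions.
# Usage: newtool.sh <tool-name>   ("editor" is used when no name is given)
tool="${1:-editor}"
mkdir -p docs
cat > "docs/${tool}.md" <<'EOF'
# <Tool name>

## What is installed
Name, version, and where it came from (official site, package manager, ...).

## What is configured
Every setting changed from the default, recorded as a specific value,
e.g. "Font: JetBrains Mono, 14pt, line height 1.6".

## Why it is configured that way
The reasoning behind each change, so a future reader can decide
whether the choice still applies.

## How to reproduce it
Config file to copy, settings export to import, or setup script to run.
EOF
echo "created docs/${tool}.md"
```

The template front-loads the structure so that, in the moment, you only fill in values and reasoning rather than deciding what to record.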
Soenke Ahrens, in How to Take Smart Notes (2017), describes the Zettelkasten method's core insight: notes are not primarily for recording information. They are for connecting information, creating a network of ideas that supports thinking. Your tool documentation works the same way. Individual configuration records are useful. But the real value emerges when you connect them — when your text editor documentation links to your terminal documentation (because certain editor commands invoke terminal tools), which links to your version control documentation (because the terminal aliases wrap Git commands), which links to your project structure documentation (because the Git workflow depends on how you organize your files). The documentation becomes a map of your cognitive infrastructure, revealing dependencies and relationships that are invisible when each tool is configured in isolation.
The format of your documentation matters less than its persistence and accessibility. Some people use Markdown files in a Git repository. Some use a dedicated notebook in their note-taking app. Some use a wiki. The medium is secondary. What matters is that the documentation is stored in a place you will not lose, in a format you can read without specialized tools, and in a structure that scales as your tool stack grows. A single Markdown file per tool, stored in a version-controlled repository, is a pattern that has proven itself across thousands of practitioners in the dotfiles community. It is simple, portable, and grep-able.
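"Grep-able" is worth seeing concretely. With one Markdown file per tool, a single command answers questions like "where did I record that font choice?" (the file names and search term below are illustrative):

```shell
# A docs/ directory with one Markdown file per tool (contents illustrative)
mkdir -p docs
printf 'Font: JetBrains Mono, 14pt\n' > docs/editor.md
printf 'Theme: Catppuccin\n'          > docs/terminal.md

# Which tool's documentation mentions this font?
grep -rn "JetBrains" docs/
# prints: docs/editor.md:1:Font: JetBrains Mono, 14pt
```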
The maintenance problem
Creating documentation is the easy part. Maintaining it is where most people fail.
The challenge is configuration drift — the gradual divergence between what your documentation says and what your tools actually do. You tweak a setting on Tuesday and do not update the document. You install a new plugin on Thursday and forget to record it. You remove an alias on Saturday because it conflicted with a new tool. Each individual change is small. Over three months, the accumulated drift renders your documentation unreliable. And unreliable documentation is arguably worse than no documentation, because it gives you the false confidence that you can recover your setup when in fact the recovery will be partial and subtly wrong.
The DevOps community solved this problem for server infrastructure by making the documentation the source of truth. In an Infrastructure as Code workflow, you do not configure the server and then document what you did. You write the configuration in a file, and the file configures the server. The documentation and the implementation are the same artifact. Any change to the server starts as a change to the file. Drift is structurally impossible because there is only one source of truth.
You can apply the same principle to your personal setup. Instead of documenting your configurations in a separate file and then manually applying them, store the actual configuration files — your dotfiles, your settings exports, your extension lists — in a version-controlled repository. The repository is both the documentation and the implementation. When you change a setting, you change the file in the repository and apply it. When you set up a new machine, you clone the repository and run an installation script. The documentation cannot drift because it is the configuration.
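A minimal version of this pattern is a dotfiles repository plus a bootstrap script that symlinks each file into place, so the repository copy is the live configuration. A sketch, run here against a scratch directory; the repository layout is an assumption, and for real use you would point the home variable at "$HOME":

```shell
#!/bin/sh
# Sketch of a dotfiles bootstrap: files live in the repository and are
# symlinked into the home directory, so editing the repo edits the live
# configuration and drift is structurally impossible.
set -e
home="$(mktemp -d)"                     # scratch stand-in for $HOME
repo="$home/dotfiles"
mkdir -p "$repo"
echo "alias gs='git status'" > "$repo/bashrc"   # example dotfile

for f in bashrc; do
    ln -sf "$repo/$f" "$home/.$f"       # -f replaces any stale link
    echo "linked .$f -> dotfiles/$f"
done
```

Tools like GNU Stow automate the same symlink-farm idea; the point is that the shell script and the repository together are both the documentation and the installer.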
For tools that do not support file-based configuration, you need a maintenance cadence. A monthly review suffices — five to ten minutes, added to the monthly audit cadence from your information processing habit — in which you open your tool documentation and verify that it still matches reality. Update anything that has drifted. Remove anything you no longer use. Add anything new. This is not glamorous work. It is the cognitive equivalent of changing your oil. But the cost of skipping it compounds silently until the day you need the documentation and discover it describes a system that no longer exists.
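For the file-based parts of the setup, the monthly check itself can be scripted: diff each live file against the repository copy and flag whatever drifted. A sketch with two illustrative files, one in sync and one drifted:

```shell
#!/bin/sh
# Drift check: compare live dotfiles against their repository copies.
home="$(mktemp -d)"                     # scratch stand-in for $HOME
repo="$home/dotfiles"
mkdir -p "$repo"

echo "set -o vi" > "$repo/bashrc"           # repo copy
echo "set -o vi" > "$home/.bashrc"          # live copy, still in sync
echo "colorscheme dark"  > "$repo/vimrc"    # repo copy
echo "colorscheme light" > "$home/.vimrc"   # live copy has drifted

for f in bashrc vimrc; do
    if ! diff -q "$repo/$f" "$home/.$f" >/dev/null 2>&1; then
        echo "DRIFT: .$f differs from repo copy"
    fi
done
# prints: DRIFT: .vimrc differs from repo copy
```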
The bus factor of one
In software teams, the "bus factor" is a grim but useful metric: how many team members would need to be hit by a bus before the project becomes unrecoverable? A project with a bus factor of one — where a single person holds all the critical knowledge — is dangerously fragile. When that person leaves, retires, or is unavailable, the team loses knowledge that may be irreplaceable.
Your personal tool setup has a bus factor of one, and that one person is you. But the threat is not a bus. It is time. The "you" who configured this system six months ago is, in a meaningful cognitive sense, a different person from the "you" who needs to recover it today. You have forgotten things. You have changed contexts. You have overwritten those neural pathways with six months of other information. Without documentation, you are relying on a version of yourself that no longer exists to provide knowledge that only that version possessed.
This is not hypothetical. Ask anyone who has migrated to a new machine without documentation. Ask anyone who has restored from a backup and discovered that the backup captured the files but not the configurations. Ask anyone who has tried to help a colleague set up a similar workflow and realized they cannot articulate the steps — they just "know" how their system works, the way you "know" how to ride a bicycle: procedurally, automatically, without the ability to decompose the knowledge into explicit steps. Documentation forces decomposition. It transforms tacit knowledge — the kind that lives in your fingers and habits — into explicit knowledge — the kind that survives across time, machines, and contexts.
The Third Brain
AI tools transform the practice of tool documentation from a manual authoring task into a collaborative and partially automated one. When you are configuring a tool, you can describe what you are doing to an AI assistant in natural language — "I just changed my terminal to use the Catppuccin color scheme because the default colors were causing eye strain during late-night sessions" — and the AI can format that into structured documentation that matches whatever template you use. The barrier to documenting in the moment drops from "open the file, find the right section, write a properly formatted entry" to "say what you just did."
More powerfully, AI can audit your existing configurations and generate documentation retroactively. Feed it your dotfiles, your settings exports, your extension lists, and ask it to produce a human-readable document explaining what each setting does and how it differs from the default. This does not capture your reasoning — only you know why you chose a specific configuration — but it captures the what, which is the part most people never write down at all. You can then annotate the AI-generated documentation with your reasoning, filling in the why at a fraction of the effort of writing everything from scratch. The AI handles the tedious enumeration. You provide the judgment and context that make the documentation truly useful.
The bridge to offline capability
You now understand why documenting your tool configurations is not administrative overhead but a structural investment in the resilience of your cognitive infrastructure. The documentation makes your setup reproducible, your decisions explicit, and your future self capable of recovering what your present self built.
But documentation assumes you can access it when you need it. And this raises a question that the next lesson addresses directly: what happens when you cannot reach the cloud, the server, the network where your documentation lives? What happens when the tool itself — the one you so carefully documented — requires an internet connection to function, and the internet is not available? The next lesson explores why offline capability is not a convenience feature but a reliability requirement, and why tools that work without internet are more trustworthy for critical work than those that depend on a connection you cannot guarantee.
Your configurations deserve documentation. And that documentation — along with the tools it describes — deserves to be accessible even when the network is not.
Sources:
- Ebbinghaus, H. (1885). Über das Gedächtnis: Untersuchungen zur experimentellen Psychologie [Memory: A Contribution to Experimental Psychology]. Duncker & Humblot.
- Risko, E. F., & Gilbert, S. J. (2016). "Cognitive Offloading." Trends in Cognitive Sciences, 20(9), 676-688.
- Dennett, D. C. (1996). Kinds of Minds: Toward an Understanding of Consciousness. Basic Books.
- Knuth, D. E. (1984). "Literate Programming." The Computer Journal, 27(2), 97-111.
- Gawande, A. (2009). The Checklist Manifesto: How to Get Things Right. Metropolitan Books.
- Ahrens, S. (2017). How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking. CreateSpace.
- Morris, K. (2020). Infrastructure as Code: Dynamic Systems for the Cloud Age (2nd ed.). O'Reilly Media.
- Holman, Z. (2011). "Dotfiles Are Meant to Be Forked." zachholman.com.