Define the specific job the tool must do in one sentence before evaluating — "capture ideas on phone, retrieve on laptop" beats "note-taking app"
Define the specific job you're hiring a tool to do in one concrete sentence before evaluating candidates: vague category labels ("note-taking app") produce feature-based comparison, while specific job definitions ("capture ideas on phone, retrieve on laptop at night") produce fitness-based selection.
Why This Is a Rule
When you evaluate a "note-taking app," you compare features: which has better formatting? More templates? Prettier graphs? This produces a feature-based comparison that selects for the tool with the most impressive feature list — which may not be the tool that does the specific job you need. A tool with 100 features that doesn't sync between phone and laptop fails the actual job (capture on phone, retrieve on laptop) despite winning every feature comparison.
Clayton Christensen's Jobs-to-be-Done framework provides the fix: define the specific job before evaluating candidates. "I need to capture ideas on my phone during commute and retrieve them on my laptop at night for processing" is a job definition. It immediately filters candidates: tools without mobile apps are eliminated regardless of other features. Tools without cross-device sync are eliminated. The remaining candidates are evaluated on fitness for this specific job, not on feature count.
The one-sentence constraint is a compression test: if you can't describe the job in one sentence, you haven't clarified it enough. "I need a tool for notes" is too vague. "I need to capture meeting action items during video calls and convert them into tasks with deadlines in my project management system" is a clear job that eliminates 80% of candidates instantly.
When This Fires
- Before beginning any tool evaluation process
- When tool comparisons feel endless because every tool has different strengths
- When you've been "researching tools" for weeks without converging on a choice
- Complements "Pick a tool with 5 minimum requirements, select the first that meets them, commit for 90 days — satisfice, don't maximize" (5-requirement satisficing) and "Limit tool candidates to 2-3 before evaluating — more options increase decision time and decrease satisfaction regardless of objective quality" (2-3 candidate limit) by supplying the pre-evaluation scoping step
Common Failure Mode
Category-based evaluation: "I need a project management tool" → compare Asana vs. Monday vs. Notion vs. ClickUp across 50 features. Weeks later, you've read every comparison article and can't decide because each tool wins on different features. The actual job — "I need to track 5 active projects with weekly status updates for 3 team members" — would have eliminated half the options immediately.
The Protocol
1. Before evaluating any tool, write one sentence: "I'm hiring a tool to [specific job including context, frequency, and success criteria]."
2. Test the sentence: does it contain enough specificity to eliminate candidates? "Note-taking app" eliminates nothing. "Capture voice memos during walks and auto-transcribe to searchable text within 1 hour" eliminates 90% of note-taking apps.
3. Use the job definition to generate 3-5 requirements ("Pick a tool with 5 minimum requirements, select the first that meets them, commit for 90 days — satisfice, don't maximize"): the capabilities the tool MUST have to do this specific job.
4. Evaluate candidates exclusively against the job definition and requirements. Features not relevant to the job are noise, not advantages.
5. The best tool for the job may not be the "best" tool by feature count. That's correct: you're hiring for fitness, not impressiveness.
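The protocol reduces to a simple filter: derive requirements from the job, then keep only candidates that cover all of them. A minimal Python sketch, with entirely hypothetical tool names and capability tags invented for illustration:

```python
# Hypothetical example of fitness-based selection: candidates are judged
# only on whether they cover the job-derived requirements, never on how
# many extra features they carry. All names and tags below are made up.

JOB = "Capture ideas on my phone during commute; retrieve on laptop at night."

# 3-5 requirements generated from the job definition, not a feature checklist.
REQUIREMENTS = {"mobile_app", "cross_device_sync", "quick_capture"}

CANDIDATES = {
    "ToolA": {"mobile_app", "cross_device_sync", "quick_capture", "templates"},
    "ToolB": {"templates", "graphs", "formatting", "themes"},  # feature-rich, no sync
    "ToolC": {"mobile_app", "quick_capture"},                  # no cross-device sync
}

def fits_job(capabilities: set, requirements: set) -> bool:
    """A candidate fits only if it covers every job-derived requirement."""
    return requirements <= capabilities  # subset test

fits = [name for name, caps in CANDIDATES.items()
        if fits_job(caps, REQUIREMENTS)]
print(fits)
```

Only ToolA survives: ToolB's long feature list counts for nothing because it fails the job, and irrelevant features (templates, themes) never enter the comparison.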