Three filters for AI contradictions: Unconsidered evidence? A verifiable reasoning error? Or just a different conclusion without showing work? Only the first two warrant revision
When an AI recommendation contradicts your careful analysis, apply three filters in sequence: Does it present evidence you hadn't considered? Does it identify a verifiable error in your reasoning? Or does it merely state a different conclusion without showing its work? Only the first two warrant revision.
Why This Is a Rule
When AI contradicts your careful analysis, two failure modes pull in opposite directions. Excessive deference: "The AI is probably right and I'm probably wrong" — abandoning your analysis because the AI's confident output feels more authoritative. Excessive stubbornness: "The AI doesn't understand my situation" — dismissing the contradiction without examination. The three-filter sequence navigates between these by evaluating the substance of the contradiction, not the source.
- Filter 1 (unconsidered evidence): does the AI present factual information or data you hadn't considered? If yes → this is new input that your analysis was missing. Incorporate it and re-evaluate. This is genuine improvement.
- Filter 2 (verifiable reasoning error): does the AI identify a specific flaw in your logic that you can independently verify? "Your analysis assumes X, but X doesn't hold when Y": can you check whether Y actually invalidates X? If the error is verifiable → correct your reasoning.
- Filter 3 (different conclusion without work shown): does the AI simply state a different conclusion without providing new evidence or identifying errors? "I recommend Z instead" without explaining why your analysis is wrong → this is AI confidence, not AI reasoning. Do not revise your careful analysis based on an unsupported AI assertion.
The sequence matters: check for genuine new information first, then for reasoning errors, then classify the remainder as unsupported assertion.
When This Fires
- When AI output contradicts a conclusion you reached through your own careful analysis
- When deciding whether to revise your position based on AI feedback
- When you have completed the defense test ("Could I defend this without the AI? Can I identify where it might be wrong?") and the AI still disagrees
- Complements "AI generates inputs, you synthesize judgment" (outsourcing the integration function is epistemic abdication regardless of input quality) by adding the specific protocol for handling AI-human disagreement
Common Failure Mode
Deferring to AI confidence: "It said something different from my analysis, so maybe I'm wrong." AI confidence is uniform — it states conclusions with the same level of assurance regardless of whether the conclusion is well-founded. Your careful analysis, which considered context, weighed evidence, and applied domain knowledge, should not be abandoned because an AI stated a different conclusion confidently.
The Protocol
(1) When AI contradicts your careful analysis, apply the filters in order:
- Filter 1: does the AI present evidence I didn't consider? Specific facts, data, or sources I missed? If yes → incorporate the new evidence and re-evaluate.
- Filter 2: does the AI identify a verifiable error in my reasoning? A specific assumption that's wrong, a logical step that doesn't follow, a calculation error? If yes → verify independently. If the error is real → correct it.
- Filter 3: does the AI simply state a different conclusion? No new evidence, no identified errors, just "I think Y instead of X"? If so → this is an unsupported assertion. Maintain your analysis.
(2) Only Filters 1 and 2 warrant revision. Filter 3 is AI confidence, not AI reasoning, and your careful analysis outranks it.
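For readers who think in code, here is a minimal sketch of the decision flow in Python. The `Contradiction` record, its field names, and `apply_three_filters` are hypothetical names invented for this illustration, not part of any existing tool; the point is only the ordering: new evidence is checked first, then independently verified reasoning errors, and whatever remains is treated as unsupported assertion.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    INCORPORATE_EVIDENCE = auto()  # Filter 1: new input, re-evaluate your analysis
    CORRECT_REASONING = auto()     # Filter 2: verified flaw, fix your reasoning
    MAINTAIN_ANALYSIS = auto()     # Filter 3: unsupported assertion, hold your position


@dataclass
class Contradiction:
    """Hypothetical record of an AI recommendation that conflicts with your analysis."""
    new_evidence: list       # facts, data, or sources you had not considered
    claimed_errors: list     # flaws the AI says your reasoning contains
    verified_errors: list    # the subset of claimed_errors you checked and confirmed


def apply_three_filters(c: Contradiction) -> Verdict:
    # Filter 1: does the AI present evidence you hadn't considered?
    if c.new_evidence:
        return Verdict.INCORPORATE_EVIDENCE
    # Filter 2: does the AI identify a reasoning error you could independently verify?
    if c.verified_errors:
        return Verdict.CORRECT_REASONING
    # Filter 3: a different conclusion with no new evidence and no verified error
    # is confidence, not reasoning. Maintain your analysis.
    return Verdict.MAINTAIN_ANALYSIS


# Example: the AI claims an error but you could not verify it, and it offers no
# new evidence. That is an unsupported assertion; the analysis stands.
print(apply_three_filters(Contradiction([], ["your assumption X is wrong"], [])))
# -> Verdict.MAINTAIN_ANALYSIS
```

Note that only verified errors, not merely claimed ones, trigger revision in this sketch; that mirrors Filter 2's requirement that you check the flaw independently before correcting your reasoning.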