Principle v1
Evaluate rival experts by assessing the structure and transparency of their arguments, agreement from other experts, appraisals from meta-experts, evidence of interests and biases, and their track records of calibrated prediction.
Why This Is a Principle
Derives from the principle that an agent's performance is bounded by the accuracy of its world model (agent performance bounded by world model accuracy), the Systematic Overconfidence Taxonomy (overconfidence), and Domain-Specific Calibration Development (calibration from feedback). This operationalizes Goldman's framework: it prescribes specific evaluation criteria that follow from axioms about knowledge reliability and calibration. It is both actionable and general.
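The "track records of calibrated prediction" criterion can be made quantitative. A minimal sketch, assuming the Brier score as the scoring rule (the source does not name a specific rule): each expert states probabilities for binary outcomes, and we score them against what actually happened.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (0..1)
    and binary outcomes (0 or 1). Lower is better; a well-calibrated
    expert accumulates a low score over many predictions."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records: same stated probabilities, different outcomes.
expert_a = brier_score([0.9, 0.8, 0.2], [1, 1, 0])  # confident and right -> 0.03
expert_b = brier_score([0.9, 0.8, 0.2], [0, 0, 1])  # confident and wrong -> ~0.697
```

Comparing such scores across a shared set of resolved questions is one concrete way to rank rival experts on calibration, complementing the qualitative criteria above.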