Principle v1
Monitor for concept drift by regularly checking model predictions against current ground truth rather than relying on internal confidence as a validity signal.
Why This Is a Principle
Derives from Brain as Hierarchical Prediction Machine (brain as prediction machine) and The performance of an agent is bounded by the accuracy of its world model (agent performance bounded by world model accuracy). Concept drift in ML demonstrates that models don't self-signal obsolescence: a model's confidence scores can stay high even as the world it was trained on changes underneath it. The principle therefore prescribes active monitoring, following directly from those axioms about prediction and model dependence.
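The prescribed monitoring loop can be sketched as code. This is a minimal illustration, not an implementation from the source: the class name, window size, and tolerance threshold are all hypothetical choices. The idea is simply to compare rolling accuracy against fresh ground truth to a known baseline, rather than trusting the model's own confidence.

```python
from collections import deque


class DriftMonitor:
    """Flag possible concept drift when rolling accuracy against
    current ground truth falls well below a validated baseline.
    Names and thresholds here are illustrative assumptions."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.10):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        # 1 = prediction matched ground truth, 0 = it did not
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, ground_truth):
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def drifted(self):
        # Too little evidence yet: do not raise an alarm.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance


# Usage: a model validated at 90% accuracy starts missing badly.
monitor = DriftMonitor(baseline_accuracy=0.90, window=50)
for _ in range(50):
    monitor.record(prediction=0, ground_truth=1)  # every prediction wrong
print(monitor.drifted())  # True: rolling accuracy is far below baseline
```

Note that the monitor never consults the model's confidence: drift is detected purely from disagreement with ground truth, which is the point of the principle.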