Model Drift Detection

Tier 1 MONITOR

What This Requires

Implement automated drift detection that monitors input distributions (data drift) and output quality (concept drift). Alert when drift exceeds defined thresholds and trigger the retraining workflow. Review drift reports monthly.

Why It Matters

Models degrade over time as real-world data diverges from training data. Undetected drift causes poor predictions and user dissatisfaction. Proactive detection enables timely retraining.

How To Implement

Data Drift Detection

Monitor input feature distributions (mean, standard deviation, percentiles) and compare them to the baseline (the training data distribution). Use statistical tests such as the Kolmogorov-Smirnov test for continuous features and chi-square for categorical features. Alert if the p-value falls below 0.05.
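As a minimal sketch of the comparison above, the two-sample KS test can flag a feature whose current distribution has shifted away from the training baseline. This assumes SciPy is available; the function name and alpha default are illustrative, not prescribed by this control.

```python
import numpy as np
from scipy.stats import ks_2samp


def detect_data_drift(baseline: np.ndarray, current: np.ndarray,
                      alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test on one feature.

    Returns True (drift) when the p-value falls below alpha,
    i.e. the current sample is unlikely to come from the same
    distribution as the training baseline.
    """
    _statistic, p_value = ks_2samp(baseline, current)
    return bool(p_value < alpha)
```

In practice this runs per feature on a schedule (e.g. daily batches of production inputs versus a frozen training sample), with per-feature alerts aggregated before paging anyone.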

Concept Drift Detection

Monitor output quality: prediction accuracy, error rate, and user feedback (thumbs up/down). Compare to the baseline and alert if accuracy drops by more than 5 percentage points or the error rate rises by more than 2 points.
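The threshold check above can be sketched as a small helper that returns the reasons an alert fired, so the same function can drive both alerting and the monthly report. The metric names and default thresholds mirror the text; everything else is a hypothetical shape, not a required interface.

```python
def concept_drift_alert(baseline_accuracy: float, recent_accuracy: float,
                        baseline_error_rate: float, recent_error_rate: float,
                        acc_drop_threshold: float = 0.05,
                        err_rise_threshold: float = 0.02) -> list[str]:
    """Return the list of triggered concept-drift conditions.

    Empty list means no alert; otherwise each entry names a
    threshold breach (accuracy drop >5 pts, error rate rise >2 pts).
    """
    reasons = []
    if baseline_accuracy - recent_accuracy > acc_drop_threshold:
        reasons.append("accuracy_drop")
    if recent_error_rate - baseline_error_rate > err_rise_threshold:
        reasons.append("error_rate_increase")
    return reasons
```

Returning reasons rather than a bare boolean keeps the alert payload explainable, which helps during the monthly false-positive review.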

Alerting & Workflow

Alert the data science team when drift is detected. Trigger the automated retraining workflow (collect new data, retrain, validate, deploy). Document the decision to retrain or to accept the drift.
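A minimal sketch of that alert-and-retrain handoff, with the notifier and retraining pipeline injected as callables so the orchestration stays testable. `DriftEvent`, the callable signatures, and the `"retrained"`/`"drift_accepted"` outcomes are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class DriftEvent:
    model_name: str
    drift_type: str   # "data" or "concept"
    metric: float     # e.g. KS statistic or accuracy delta


def handle_drift(event: DriftEvent,
                 notify: Callable[[str], None],
                 retrain_pipeline: Callable[[str], str],
                 audit_log: list) -> str:
    """Alert the team, trigger retraining, and record the decision.

    retrain_pipeline returns "retrained" or "drift_accepted";
    either way the outcome is appended to the audit log so the
    decision is documented, as the control requires.
    """
    notify(f"Drift detected on {event.model_name}: "
           f"{event.drift_type} (metric={event.metric:.3f})")
    decision = retrain_pipeline(event.model_name)
    audit_log.append({"model": event.model_name,
                      "type": event.drift_type,
                      "decision": decision})
    return decision
```

Dependency injection here is deliberate: the notifier can be PagerDuty or email, and the pipeline can be a real orchestrator or a stub in tests, without changing the audit trail logic.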

Monthly Review

Review drift reports: which models are drifting, retraining history, and false-positive alerts. Tune thresholds to reduce alert noise.

Evidence & Audit

  • Drift detection implementation (code, config)
  • Baseline data distribution documentation
  • Alert configuration and incident history
  • Retraining workflow documentation
  • Monthly drift review reports

Related Controls