Retraining Governance

Tier 2 IMPROVE

What This Requires

Define a governance process for model retraining: trigger criteria (drift, performance degradation, new data), approval requirements, validation testing, and deployment. Require a change ticket and approval for all production retraining.

Why It Matters

Retraining is high-risk. Poor governance can lead to degraded models, biased outputs, or broken systems. A structured process ensures quality and accountability.

How To Implement

Trigger Criteria

Define when retraining is required: drift detected, accuracy drops by more than 5%, new data available (e.g., a quarterly refresh), or a regulatory change requiring updated training data.
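The trigger criteria above can be sketched as an automated check. This is a minimal illustration, not a prescribed implementation: the field names, the drift threshold of 0.2, and the interpretation of ">5%" as a relative accuracy drop are all assumptions to adapt to your environment.

```python
from dataclasses import dataclass

@dataclass
class ModelMetrics:
    """Hypothetical snapshot of monitoring signals for a deployed model."""
    drift_score: float        # e.g., PSI aggregated across key features
    baseline_accuracy: float  # accuracy at last deployment
    current_accuracy: float   # accuracy on recent labeled traffic
    new_data_available: bool  # e.g., quarterly refresh has landed
    regulatory_change: bool   # flagged by compliance

def retraining_triggers(m: ModelMetrics,
                        drift_threshold: float = 0.2,
                        max_accuracy_drop: float = 0.05) -> list[str]:
    """Return the trigger criteria that currently fire for this model."""
    triggers = []
    if m.drift_score > drift_threshold:
        triggers.append("drift")
    # Relative drop: fire when accuracy falls more than 5% below baseline.
    if m.baseline_accuracy - m.current_accuracy > max_accuracy_drop * m.baseline_accuracy:
        triggers.append("accuracy_degradation")
    if m.new_data_available:
        triggers.append("new_data")
    if m.regulatory_change:
        triggers.append("regulatory_change")
    return triggers
```

A non-empty result would open a change ticket rather than start retraining directly, keeping the approval step below in the loop.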

Approval Process

Require a change ticket for retraining. Include: trigger reason, new-data summary, validation plan, and rollback plan. Require approval by the data science lead, plus a security review for high-risk models.
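A minimal sketch of the ticket structure and its approval gate. The field names and approver roles ("ds_lead", "security") are illustrative; map them to your ticketing system's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RetrainingTicket:
    """Illustrative change ticket for a production retraining request."""
    trigger_reason: str    # which criterion fired (e.g., "drift")
    new_data_summary: str  # provenance and size of the new training data
    validation_plan: str   # tests to run before deployment
    rollback_plan: str     # how to revert if validation or canary fails
    high_risk: bool        # whether this model needs a security review
    approvals: set = field(default_factory=set)

def ready_for_retraining(t: RetrainingTicket) -> bool:
    """Gate: all fields filled in and required approvals collected."""
    required = {"ds_lead"}
    if t.high_risk:
        required.add("security")  # security review for high-risk models
    fields_complete = all([t.trigger_reason, t.new_data_summary,
                           t.validation_plan, t.rollback_plan])
    return fields_complete and required <= t.approvals
```

The subset check (`required <= t.approvals`) means extra approvers never block the gate; only missing required ones do.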

Validation Testing

Test the retrained model: accuracy on a holdout set, bias metrics, performance (latency, throughput), and regression testing (known-good inputs still pass). Document results in the ticket.
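The validation step above can be expressed as a pass/fail gate over the measured results. The threshold values and result keys here are placeholders, not recommended targets; set them per model from your validation plan.

```python
def validation_gate(results: dict,
                    min_accuracy: float = 0.85,
                    max_bias_gap: float = 0.05,
                    max_latency_ms: float = 200.0) -> tuple[bool, list[str]]:
    """Check retrained-model validation results against release thresholds.

    Returns (passed, list_of_failed_checks) so the ticket can record
    exactly which criteria blocked deployment.
    """
    failures = []
    if results["holdout_accuracy"] < min_accuracy:
        failures.append("holdout_accuracy")
    if results["bias_gap"] > max_bias_gap:          # e.g., gap across groups
        failures.append("bias_gap")
    if results["p95_latency_ms"] > max_latency_ms:  # performance check
        failures.append("latency")
    if not results["regression_suite_passed"]:      # known-good inputs
        failures.append("regression")
    return (len(failures) == 0, failures)
```

Returning the failed-check names, rather than a bare boolean, gives the ticket the documented evidence this control asks for.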

Deployment

Follow the standard deployment process (canary rollout, rollback plan). Update the model version and metadata (training date, dataset version). Notify stakeholders.
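A minimal sketch of the canary promote/rollback decision and the version metadata recorded at deployment. The error-rate tolerance, function names, and metadata fields are illustrative assumptions.

```python
import json

def canary_decision(canary_error_rate: float,
                    baseline_error_rate: float,
                    tolerance: float = 0.01) -> str:
    """Promote the retrained model only if the canary's error rate stays
    within `tolerance` of the incumbent's; otherwise roll back."""
    if canary_error_rate <= baseline_error_rate + tolerance:
        return "promote"
    return "rollback"

def build_model_record(model_name: str, version: str,
                       dataset_version: str, training_date: str) -> str:
    """Serialize the version metadata this control requires at deployment."""
    record = {
        "model": model_name,
        "version": version,
        "dataset_version": dataset_version,
        "training_date": training_date,
    }
    return json.dumps(record, sort_keys=True)
```

The serialized record is what later turns up in the deployment logs listed under Evidence & Audit.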

Evidence & Audit

  • Retraining governance policy document
  • Change tickets for recent retraining with approvals
  • Validation test results and approval records
  • Deployment logs showing retraining deployments
  • Stakeholder notification records

Related Controls