AI Model Change Documentation
Record what changed between model versions, why the change was made, what testing was performed, and who approved the deployment.
Objective
Maintain an auditable record of AI system evolution that supports accountability, incident investigation, and regulatory examination.
Maturity Levels
Initial
Changes are not documented; the reason for past decisions cannot be reconstructed.
Developing
Major changes are documented informally, but minor changes and the rationale behind them are not captured.
Defined
A standard change record template captures the change description, rationale, test results, approver, and deployment date for all production changes.
Managed
Change documentation is reviewed during governance meetings; quality and completeness are assessed.
Optimizing
Change records are automatically generated from deployment pipeline artifacts and linked to evaluation results.
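At the Optimizing level, the change record is assembled by the deployment pipeline itself rather than filled in by hand. A minimal sketch of that idea, assuming hypothetical pipeline variable names (CHANGE_ID, SYSTEM_NAME, EVAL_REPORT_URL, and so on; map these to whatever your CI/CD system actually exposes):

```python
import json
from datetime import datetime, timezone

def build_change_record(env: dict) -> dict:
    """Assemble a change record from deployment pipeline metadata.

    The env keys used here are illustrative placeholders, not the
    variables of any particular CI/CD product.
    """
    return {
        "change_id": env["CHANGE_ID"],
        "system": env["SYSTEM_NAME"],
        "version_before": env["MODEL_VERSION_BEFORE"],
        "version_after": env["MODEL_VERSION_AFTER"],
        "description": env["CHANGE_DESCRIPTION"],
        "eval_report": env.get("EVAL_REPORT_URL"),  # link to evaluation results
        "approver": env["APPROVER"],
        "deployed_at": datetime.now(timezone.utc).isoformat(timespec="minutes"),
    }

record = build_change_record({
    "CHANGE_ID": "CHG-2026-0147",
    "SYSTEM_NAME": "Credit Risk Scorer",
    "MODEL_VERSION_BEFORE": "system-prompt-v3.1",
    "MODEL_VERSION_AFTER": "system-prompt-v3.2",
    "CHANGE_DESCRIPTION": "Tightened output format constraints",
    "APPROVER": "J. Rivera",
})
print(json.dumps(record, indent=2))
```

Because the record is built from the same artifacts the pipeline deploys, the version-before/version-after fields and the link to evaluation results cannot drift from what actually shipped.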
Evidence Requirements
What an auditor or assessor would expect to see for this control.
- Change log entries for all model updates including description, rationale, version before and after, test summary, and approver
- Updated model cards or technical documentation at each version reflecting current model characteristics
- Traceability records linking each change to the business or regulatory requirement that drove it
- Stakeholder notification records confirming downstream users were informed of material changes before deployment
- Documentation completeness review records confirming all required fields were populated before deployment approval
Implementation Notes
Key steps
- Use a standard template for every model change record: what changed, why, what tests were run, what risks were identified, who approved, and the deployment date.
- Retain change records for at least as long as the model is in production plus your required audit retention period.
- Link change records to incident records: when an incident occurs, the change history should be one of the first things examined.
- For third-party models, document the vendor's stated change description alongside your internal validation results.
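The standard template and the pre-approval completeness review can be enforced in code. A sketch under assumed field names (the dataclass and `missing_fields` helper are hypothetical, not part of any standard library):

```python
from dataclasses import dataclass

# Fields that must be populated before deployment approval.
REQUIRED_FIELDS = (
    "change_id", "system", "change_type", "description",
    "reason", "tests_run", "test_results", "approvers", "deployment_date",
)

@dataclass
class ModelChangeRecord:
    change_id: str = ""
    system: str = ""
    change_type: str = ""
    description: str = ""
    reason: str = ""
    tests_run: str = ""
    test_results: str = ""
    risks_identified: str = ""   # may legitimately be "None"
    approvers: str = ""
    deployment_date: str = ""
    linked_incident: str = ""    # optional: not every change follows an incident
    rollback_target: str = ""

def missing_fields(record: ModelChangeRecord) -> list:
    """Completeness review: return the required fields still blank."""
    return [f for f in REQUIRED_FIELDS if not getattr(record, f).strip()]

draft = ModelChangeRecord(change_id="CHG-2026-0147", system="Credit Risk Scorer")
print(missing_fields(draft))  # lists the required fields still blank
```

Wiring a check like this into the deployment gate turns the "documentation completeness review" evidence item into an automatic pass/fail rather than a manual inspection.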
Example Implementation
An MLOps team shipping a prompt update to a Critical risk-tier AI system
Model Change Record — Risk Scoring System
| Field | Value |
|---|---|
| Change ID | CHG-2026-0147 |
| System | Credit Risk Scorer |
| Change type | Prompt update (system prompt v3.1 → v3.2) |
| Change description | Tightened output format constraints to prevent free-text rationales that failed downstream parsing |
| Reason | Production incident INC-0023 — 0.3% of outputs failed to parse, defaulting to manual review |
| Tests run | Regression suite (500 cases), format validation (1,000 cases), bias check |
| Test results | Regression: 0 regressions; Format pass rate: 100%; Bias check: pass (demographic parity delta < 1%) |
| Risk identified | None (prompt-only change; model weights unchanged) |
| Approvers | J. Rivera (ML Lead) · M. Okafor (Compliance) |
| Deployment date | 2026-04-14T11:30Z |
| Linked incident | INC-0023 |
| Rollback target | system-prompt-v3.1 |
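The incident-linkage step above can be sketched as a simple query: when an incident is opened, pull the recent change history for the affected system first. The in-memory log and `recent_changes` helper below are illustrative assumptions; in practice this would query your change-management system or deployment database.

```python
from datetime import date

# Hypothetical in-memory change log standing in for a real database.
CHANGE_LOG = [
    {"change_id": "CHG-2026-0147", "system": "Credit Risk Scorer",
     "deployed": date(2026, 4, 14), "linked_incident": "INC-0023"},
    {"change_id": "CHG-2026-0102", "system": "Credit Risk Scorer",
     "deployed": date(2026, 2, 3), "linked_incident": None},
]

def recent_changes(system: str, since: date) -> list:
    """First step of an incident investigation: every change deployed
    to the affected system since the given date, newest first."""
    return sorted(
        (c for c in CHANGE_LOG if c["system"] == system and c["deployed"] >= since),
        key=lambda c: c["deployed"],
        reverse=True,
    )

for change in recent_changes("Credit Risk Scorer", date(2026, 1, 1)):
    print(change["change_id"], change["deployed"])
```

The rollback-target field in the record above pairs naturally with this lookup: the most recent change identifies both the likely cause and the version to roll back to.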
Control Details
- Control ID
- CHM-004
- Domain
- Change Management
- Typical owner
- AI Engineering / Compliance
- Implementation effort
- Low effort
- Agent-relevant
- No
