AI Governance Institute

aigovernance.com — Global AI Regulation & Framework Directory



How do we build and maintain an AI model registry?

A model registry is the operational backbone of AI governance: it tracks what models are in production, who owns them, what data they were trained on, what their risk classification is, and when they were last reviewed.

What belongs in a model registry

A model registry is a centralized record of all AI models in production or under active development. At minimum, each entry should capture:

- the model identifier and version
- the business owner and technical owner
- a description of the use case and the decisions the model influences
- the training data sources and the date they were last updated
- the risk tier assigned during the initial assessment
- the regulatory frameworks that apply to the system
- the date of the last governance review

Beyond these baseline fields, mature registries also track model lineage (what earlier versions or base models this system derives from), explainability approach, monitoring thresholds that trigger a review, and incident history. The registry should be queryable so that you can answer questions like "which models process personal health data?" or "which systems are classified as high risk under the EU AI Act?" in under a minute, not over several days.
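For illustration, the two example questions above can be answered with straightforward filters over registry entries. The entries and field names here are hypothetical; a production registry would sit in a database, but the query logic is the same:

```python
# Hypothetical registry entries; field names are illustrative.
registry = [
    {"model_id": "credit-scoring", "risk_tier": "high",
     "data_categories": ["financial"]},
    {"model_id": "triage-assistant", "risk_tier": "high",
     "data_categories": ["personal_health"]},
    {"model_id": "doc-summarizer", "risk_tier": "minimal",
     "data_categories": []},
]

def models_processing(category: str) -> list[str]:
    """Which models process a given data category?"""
    return [m["model_id"] for m in registry
            if category in m["data_categories"]]

def models_by_risk(tier: str) -> list[str]:
    """Which systems carry a given risk classification?"""
    return [m["model_id"] for m in registry
            if m["risk_tier"] == tier]
```

If answering these questions requires anything more than a query like this, the registry is not structured enough.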

Versioning, change management, and retirement

Every model update, whether a full retrain, a fine-tuning pass, or a change to prompts or system instructions for a generative AI system, should create a new registry version and trigger a review of whether the risk classification still holds. Treating model updates as routine software releases and skipping the governance review is a common source of regression incidents, where a model that performed acceptably begins behaving unexpectedly after an update.
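One way to enforce this is to make version creation and the review flag a single operation, so an update cannot be recorded without a risk review becoming pending. A minimal sketch, with hypothetical entry fields and a simple minor-version bump:

```python
from datetime import date

def register_update(registry: dict, model_id: str, change_type: str) -> dict:
    """Record a model change as a new registry version and flag a risk review.

    Hypothetical helper: any change (retrain, fine-tune, prompt edit) bumps
    the version, appends to the change log, and sets risk_review_pending,
    so the update cannot ship with a stale risk classification.
    """
    entry = registry[model_id]
    major, minor, _ = (int(x) for x in entry["version"].split("."))
    entry["version"] = f"{major}.{minor + 1}.0"
    entry["change_log"].append({
        "version": entry["version"],
        "change_type": change_type,        # e.g. "retrain", "prompt-change"
        "date": date.today().isoformat(),
    })
    entry["risk_review_pending"] = True    # release blocked until reviewed
    return entry
```

The key design point is that the review flag is set by the same code path that records the update, so skipping governance requires deliberately bypassing the registry rather than simply forgetting a step.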

Retirement is as important as deployment. Models that are no longer in active use should be formally decommissioned in the registry rather than left in an ambiguous state. Decommissioned models should not retain active credentials or data access rights. The registry entry should document when the system was retired, why, and whether any replacement was deployed. This creates an audit trail that is particularly important when a regulator asks about a system that was previously in use.

Connecting the registry to governance workflows

A model registry disconnected from actual governance decisions is a documentation artifact, not a governance tool. The registry should be integrated with the workflows that matter: deployment approvals, periodic review triggers, incident escalations, and regulatory reporting.

Concretely, no model should enter production without a registry entry and a completed risk assessment. Automated monitoring tools should write performance metrics back to the registry so the record reflects current state, not just the state at deployment. The periodic review schedule should be driven by registry data, with high-risk systems reviewed more frequently than low-risk ones. When an incident occurs, the registry is the first place responders look to understand the system's configuration, ownership, and history. When a regulator asks what AI systems you operate, the registry is the document you produce.