AI Governance Institute

Practical Governance for Enterprise AI


Question 23 of 34

How do we build and maintain an AI model registry?


A model registry is the operational backbone of AI governance: it tracks what models are in production, who owns them, what data they were trained on, what their risk classification is, and when they were last reviewed.

If you only do 3 things, do this:

  1. No model goes to production without a registry entry and a completed risk assessment. Make this a hard gate, not a guideline.
  2. Every model update — including prompt changes for generative AI — creates a new registry version and triggers a review of whether the risk classification still holds.
  3. The registry must be queryable in under a minute. If it takes days to answer "which systems process health data?" you have a spreadsheet, not a registry.
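The first rule is the easiest to automate. A minimal sketch of such a pre-deployment gate, assuming a dict-keyed registry; the field name `risk_assessment_status` is an illustrative assumption, not a standard schema:

```python
def deployment_gate(model_id, registry):
    """Return True only if the model may enter production."""
    entry = registry.get(model_id)
    if entry is None:
        return False  # no registry entry: hard stop, not a warning
    # an assessment that is merely in progress does not pass the gate
    return entry.get("risk_assessment_status") == "completed"

# toy registry used to exercise the gate
registry = {
    "credit-scorer": {"risk_assessment_status": "completed"},
    "chat-triage": {"risk_assessment_status": "in_progress"},
}
```

In practice this check would run inside the CI/CD pipeline so that a missing or incomplete entry fails the deployment job rather than producing a reminder email.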

The Situation

Who this is for: ML engineering, data science, and governance teams responsible for tracking production AI systems

When you need this: When building an AI governance program, or when a regulatory examination or incident reveals that production systems are not fully documented

The Decision

Do we have a single authoritative source of truth for all AI systems in production, and is it connected to actual governance workflows?

The Steps

  1. Define the required fields for each registry entry (minimum: ID, version, owner, use case, training data, risk tier, applicable regulations, last review date)
  2. Inventory all existing production AI systems and create registry entries retroactively
  3. Establish the hard gate: no new model enters production without a completed entry
  4. Implement model versioning: every update creates a new version record with a changelog
  5. Connect the registry to governance workflows: deployment approvals, monitoring alerts, periodic reviews, incident escalation
  6. Build a queryable interface so cross-cutting questions (by data type, risk tier, regulation) can be answered quickly
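Step 1 can be pinned down as a typed schema. The sketch below covers only the minimum fields listed above; names and types are assumptions to adapt to your stack, not a prescribed standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegistryEntry:
    model_id: str
    version: str
    business_owner: str
    technical_owner: str
    use_case: str                      # the decisions this model influences
    training_data_sources: list[str]
    training_data_updated: date
    risk_tier: str                     # e.g. "high", "medium", "low"
    applicable_regulations: list[str]  # e.g. ["EU AI Act", "GDPR"]
    last_review_date: date
```

Making the schema explicit up front keeps retroactive inventory entries (step 2) comparable to new ones, instead of accumulating free-text records that cannot be queried later.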

The Artifacts

  • Model registry data schema (required and optional fields)
  • Registry entry creation checklist (pre-deployment gate)
  • Model versioning policy (what changes trigger a new version + governance review)
  • Registry governance workflow diagram (deployment, monitoring, incident response)
  • Model retirement process (decommissioning steps and registry update)

The Output

A complete, current model registry with entries for all production systems, connected to deployment and monitoring workflows, queryable by key dimensions, and updated on every model change.

What belongs in a model registry

A model registry is a centralized record of all AI models in production or under active development. At minimum, each entry should capture the model identifier and version, the business owner and technical owner, a description of the use case and the decisions it influences, the training data sources and the date they were last updated, the risk tier assigned during the initial assessment, the regulatory frameworks that apply to the system, and the date of the last governance review.

Beyond these baseline fields, mature registries also track model lineage (what earlier versions or base models this system derives from), explainability approach, monitoring thresholds that trigger a review, and incident history. The registry should be queryable so that you can answer questions like "which models process personal health data?" or "which systems are classified as high risk under the EU AI Act?" in under a minute, not over several days.
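The kind of cross-cutting query this implies is simple once the data is structured. A sketch with toy entries; the `data_categories` field is an illustrative assumption:

```python
# toy registry rows; in practice these would come from a database
entries = [
    {"model_id": "claims-triage", "data_categories": ["health"], "risk_tier": "high"},
    {"model_id": "ad-ranker", "data_categories": ["behavioral"], "risk_tier": "low"},
    {"model_id": "intake-chatbot", "data_categories": ["health", "contact"], "risk_tier": "high"},
]

def models_processing(category, registry):
    """Answer 'which models process <category> data?' in one pass."""
    return [e["model_id"] for e in registry if category in e["data_categories"]]

print(models_processing("health", entries))
# -> ['claims-triage', 'intake-chatbot']
```

The point is not the query language (SQL, a comprehension, a dashboard filter all work) but that the answer comes from structured fields, not from reading documents.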

Versioning, change management, and retirement

Every model update, whether a full retrain, a fine-tuning pass, or a change to prompts or system instructions for a generative AI system, should create a new registry version and trigger a review of whether the risk classification still holds. Treating model updates as routine software releases and skipping the governance review is a common source of regression incidents, where a model that performed acceptably begins behaving unexpectedly after an update.
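A sketch of this versioning rule, where any update, including a prompt change, appends a version record and re-opens the risk question; the structure and field names are illustrative assumptions:

```python
from datetime import datetime, timezone

def record_update(entry, change_type, changelog):
    """Append a version record and flag the entry for risk re-review."""
    versions = entry.setdefault("versions", [])
    new_version = {
        "number": len(versions) + 1,
        "change_type": change_type,  # "retrain", "fine_tune", "prompt_change", ...
        "changelog": changelog,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    versions.append(new_version)
    # every update re-opens the classification question; a reviewer
    # closes it, not the deploying engineer
    entry["risk_review_required"] = True
    return new_version
```

Note that the flag is set unconditionally: deciding that a prompt tweak is too small to matter is itself a governance decision, and it belongs in the review, not in the deployment script.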

Retirement is as important as deployment. Models that are no longer in active use should be formally decommissioned in the registry rather than left in an ambiguous state. Decommissioned models should not retain active credentials or data access rights. The registry entry should document when the system was retired, why, and whether any replacement was deployed. This creates an audit trail that is particularly important when a regulator asks about a system that was previously in use.
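A sketch of a decommissioning step that moves a model out of the ambiguous state in one operation, recording status, reason, replacement, and credential revocation together; field names are assumptions:

```python
from datetime import date

def decommission(entry, reason, retired_on, replacement_id=None):
    """Formally retire a model and record the audit trail."""
    entry["status"] = "retired"
    entry["retired_on"] = retired_on
    entry["retired_reason"] = reason
    entry["replacement_model_id"] = replacement_id
    entry["credentials_active"] = False  # no lingering data access rights
```

Bundling credential revocation into the same step avoids the common failure mode where a model is "retired" in documentation but its service account still holds live data access.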

Connecting the registry to governance workflows

A model registry disconnected from actual governance decisions is a documentation artifact, not a governance tool. The registry should be integrated with the workflows that matter: deployment approvals, periodic review triggers, incident escalations, and regulatory reporting.

Concretely, no model should enter production without a registry entry and a completed risk assessment. Automated monitoring tools should write performance metrics back to the registry so the record reflects current state, not just the state at deployment. The periodic review schedule should be driven by registry data, with high-risk systems reviewed more frequently than low-risk ones. When an incident occurs, the registry is the first place responders look to understand the system's configuration, ownership, and history. When a regulator asks what AI systems you operate, the registry is the document you produce.
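A registry-driven review schedule can be sketched as follows; the intervals are illustrative assumptions, not regulatory requirements:

```python
from datetime import date, timedelta

# assumed review cadence by risk tier; tune to your obligations
REVIEW_INTERVAL_DAYS = {"high": 90, "medium": 180, "low": 365}

def next_review_due(last_review, risk_tier):
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[risk_tier])

def overdue_reviews(entries, today):
    """Return the IDs of models whose periodic review is past due."""
    return [
        e["model_id"]
        for e in entries
        if next_review_due(e["last_review_date"], e["risk_tier"]) < today
    ]
```

Running a check like this on a schedule and opening tickets for the overdue list is what turns "periodic review" from a policy statement into a workflow.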

Governance Controls

Operational controls that implement the guidance in this playbook.