AI Governance Institute

aigovernance.com — Global AI Regulation & Framework Directory


Model Lifecycle and Monitoring

Governing AI from development through deployment to retirement. Covers the registries, monitoring systems, explainability standards, and incident response processes that keep AI systems operating safely in production, and that give regulators and auditors the evidence they need.

Key board-level questions

  1. Do we continuously monitor for model drift, bias, and performance degradation?
  2. Are all models registered, versioned, and explainable in production?
  3. Do we have real-time observability and incident response for AI systems?
  4. Can we produce audit-ready evidence for every AI-driven decision?
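The first question above, on continuous drift monitoring, is often answered in practice with a statistical distance check between training-time and production feature distributions. A minimal sketch using the Population Stability Index (PSI) is shown below; the function name, the 10-bin layout, and the 0.2 alert threshold are illustrative conventions from model-risk practice, not requirements of any framework listed on this page.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference (training-time) and a live feature
    distribution. PSI above roughly 0.2 is a common rule-of-thumb
    signal of significant drift."""
    # Bin edges come from the reference distribution's percentiles.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) on empty bins.
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)  # training-time scores
drifted = rng.normal(1.0, 1.0, 10_000)    # production scores, shifted mean
print(population_stability_index(reference, drifted))  # well above the 0.2 threshold
```

Logging the PSI value per feature on a schedule, alongside the model version from the registry, is one way to produce the audit-ready drift evidence the fourth question asks about.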

Regulatory frameworks

US

NIST Artificial Intelligence Risk Management Framework Playbook

Voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0) that provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.

ISO/OECD/UN

ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management System

The first international standard for AI management systems, providing a structured framework for establishing, implementing, maintaining, and continually improving an organization's AI management system (AIMS), with certifiable requirements applicable to any organization that develops, provides, or uses AI-based products and services.

US

NIST AI 600-1 Generative AI Profile

A companion resource to the NIST AI RMF 1.0 that provides structured guidance for managing the unique risks presented by generative AI systems, including large language models and multimodal foundation models.

EU

EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2025)

The EU AI Act's first major compliance deadline took effect on 2 February 2025, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. From that date, the Act's prohibitions on AI systems deemed to pose unacceptable risk also became enforceable: organizations must have ceased any prohibited AI practices and be able to demonstrate adequate staff competency with AI systems.

US

Treasury Department AI Risk Management Framework for Financial Services

The US Treasury Department published this framework in February 2026 to translate NIST AI Risk Management Framework principles into 230 discrete operational control objectives tailored to the financial services sector. It applies to financial institutions subject to Treasury oversight, including banks, asset managers, insurance companies, and payment processors that develop or deploy AI systems. Covered entities are expected to implement controls spanning model lifecycle governance, identity resolution, data governance, and alignment with existing SOC 2 and NIST Cybersecurity Framework requirements.

Playbook guidance