Global AI Regulation & Framework Directory

IMDA Model AI Governance Framework

IMDA AI Gov Framework · Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC), Singapore

A voluntary, principles-based framework issued by Singapore's IMDA and PDPC providing organisations with detailed guidance on deploying AI responsibly, covering human oversight, decision-making accountability, and operational transparency.

Overview

The IMDA Model AI Governance Framework, first published in January 2019 and updated to its second edition in January 2020, is Singapore's primary voluntary reference document for the responsible deployment of AI systems by private sector organisations. Developed collaboratively by the Infocomm Media Development Authority and the Personal Data Protection Commission, the Framework translates high-level ethical AI principles into practical, implementable governance measures.

It is structured around two core principles: that AI decision-making should be explainable, transparent, and fair; and that AI solutions must be human-centric. The Framework addresses four key governance areas: internal governance structures and measures, determining the level of human involvement in AI-augmented decisions, operations management, and stakeholder interaction and communication. It is technology-neutral and sector-agnostic, designed to complement existing sectoral regulations rather than replace them.

A companion Implementation and Self-Assessment Guide (ISAGO) enables organisations to benchmark their practices. The Framework was extended in 2022 through the AI Verify toolkit, a software-based testing framework that operationalises many of its principles for technical validation. Singapore has positioned the Framework as a reference model in ASEAN AI governance harmonisation discussions and has submitted it as a contribution to international standardisation efforts at ISO and IEEE.

Key Requirements

  • Establish internal AI governance structures, including defined accountability for AI model ownership and oversight
  • Conduct and document risk assessments proportional to the probability and severity of harm from AI decisions
  • Determine and document the appropriate level of human oversight for each AI-augmented decision type, calibrated to risk
  • Implement minimum standards for data management, including lineage, quality controls, and bias detection
  • Maintain explainability of AI decisions to affected individuals at a level commensurate with the impact of those decisions
  • Conduct regular reviews and audits of deployed AI models for performance drift, fairness, and continued fitness for purpose
  • Communicate clearly with stakeholders about AI use, including disclosure where AI is involved in consequential decisions
  • Document model provenance, training data characteristics, and known limitations for all production AI systems
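The risk-calibrated oversight requirement above can be sketched in code. The Framework describes three degrees of human involvement (human-in-the-loop, human-over-the-loop, and human-out-of-the-loop) and suggests selecting among them based on the probability and severity of harm. The mapping below is a hypothetical illustration of that idea, not the Framework's prescribed matrix; each organisation calibrates the thresholds itself.

```python
from enum import Enum


class OversightLevel(Enum):
    """The three degrees of human involvement described in the Framework."""
    HUMAN_IN_THE_LOOP = "human-in-the-loop"        # a human approves each decision
    HUMAN_OVER_THE_LOOP = "human-over-the-loop"    # a human monitors and can intervene
    HUMAN_OUT_OF_THE_LOOP = "human-out-of-the-loop"  # fully automated decisions


def recommend_oversight(probability_of_harm: str, severity_of_harm: str) -> OversightLevel:
    """Map a probability/severity risk assessment to an oversight level.

    Both inputs are 'low' or 'high'. This is an illustrative calibration
    only; the Framework leaves the exact mapping to each organisation.
    """
    if probability_of_harm not in ("low", "high") or severity_of_harm not in ("low", "high"):
        raise ValueError("probability_of_harm and severity_of_harm must be 'low' or 'high'")
    if probability_of_harm == "high" and severity_of_harm == "high":
        return OversightLevel.HUMAN_IN_THE_LOOP
    if probability_of_harm == "high" or severity_of_harm == "high":
        return OversightLevel.HUMAN_OVER_THE_LOOP
    return OversightLevel.HUMAN_OUT_OF_THE_LOOP
```

For example, a loan-approval model (high severity of harm) would warrant at least human-over-the-loop monitoring under this sketch, while a low-stakes product-recommendation model could run fully automated.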

Effective Date

2019-01-01

Official source →