AI Governance Institute


IMDA Model AI Governance Framework

Issued by

Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC), Singapore

Status: live · Effective 2019-01-01 · Verified April 2026

A voluntary, principles-based framework issued by Singapore's IMDA and PDPC providing organisations with detailed guidance on deploying AI responsibly, covering human oversight, decision-making accountability, and operational transparency.

Applies To

Private sector organisations operating in Singapore that develop, procure, or deploy AI systems, including multinational enterprises with Singapore operations. Particularly relevant to financial services, healthcare, human resources, and consumer-facing sectors. Adoption is voluntary but increasingly referenced in procurement requirements and regulatory guidance from MAS and MOH.

Overview

The IMDA Model AI Governance Framework, first published in January 2019 and updated to its second edition in January 2020, is Singapore's primary voluntary reference document for the responsible deployment of AI systems by private sector organisations. Developed collaboratively by the Infocomm Media Development Authority and the Personal Data Protection Commission, the Framework translates high-level ethical AI principles into practical, implementable governance measures. It is structured around two core principles: that AI decision-making should be explainable, transparent, and fair; and that AI solutions must be human-centric.

The Framework addresses four key governance areas: internal governance structures and measures, determining the level of human involvement in AI-augmented decisions, operations management, and stakeholder interaction and communication. It is technology-neutral and sector-agnostic, designed to complement existing sectoral regulations rather than replace them.

A companion Implementation and Self-Assessment Guide (ISAGO) enables organisations to benchmark their practices. The Framework was extended in 2022 through the AI Verify toolkit, a software-based testing framework that operationalises many of its principles for technical validation. Singapore has positioned the Framework as a reference model in ASEAN AI governance harmonisation discussions and has submitted it as a contribution to international standardisation efforts at ISO and IEEE.

Key Requirements

  • Establish internal AI governance structures, including defined accountability for AI model ownership and oversight
  • Conduct and document risk assessments proportional to the probability and severity of harm from AI decisions
  • Determine and document the appropriate level of human oversight for each AI-augmented decision type, calibrated to risk
  • Implement minimum standards for data management, including lineage, quality controls, and bias detection
  • Maintain explainability of AI decisions to affected individuals at a level commensurate with the impact of those decisions
  • Conduct regular reviews and audits of deployed AI models for performance drift, fairness, and continued fitness for purpose
  • Communicate clearly with stakeholders about AI use, including disclosure where AI is involved in consequential decisions
  • Document model provenance, training data characteristics, and known limitations for all production AI systems
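To make the risk-calibration idea concrete, the requirement to match human oversight to the probability and severity of harm could be modelled as a simple scoring lookup. This is a minimal sketch: the 1–5 scoring scale, the thresholds, and the class names are illustrative assumptions, not values prescribed by the Framework.

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Hypothetical per-use-case assessment; both dimensions rated 1 (low) to 5 (high)."""
    use_case: str
    probability_of_harm: int
    severity_of_harm: int

    def risk_score(self) -> int:
        # Simple probability x severity product; the Framework leaves the
        # scoring method to the organisation.
        return self.probability_of_harm * self.severity_of_harm

    def oversight_level(self) -> str:
        """Map the score to one of the Framework's three oversight models.
        Thresholds below are illustrative assumptions, not IMDA-mandated values."""
        score = self.risk_score()
        if score >= 15:
            return "human-in-the-loop"   # a human approves each decision
        if score >= 6:
            return "human-on-the-loop"   # a human monitors and can intervene
        return "full automation"         # low risk; post-hoc review only

assessment = RiskAssessment("credit scoring", probability_of_harm=3, severity_of_harm=5)
print(assessment.oversight_level())  # high-risk use case -> human-in-the-loop
```

The point of the sketch is the documented, reproducible mapping: the same score always yields the same minimum oversight level, which is what makes the accountable owner's sign-off auditable.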

What Your Organization Must Do

  • Appoint a named AI governance owner (e.g., Chief Data Officer or Head of Risk) accountable for each production AI system, and document these assignments in a central AI inventory reviewed at least annually.
  • Complete a structured risk assessment for every AI use case before deployment, scoring each on probability and severity of harm, and use the output to set a minimum human oversight level (full automation, human-in-the-loop, or human-on-the-loop) that is formally signed off by the accountable owner.
  • Implement data management controls covering lineage tracking, quality checks, and bias detection for all training and inference datasets, and require written evidence of these controls as a condition of moving any model into production.
  • Prepare plain-language explainability statements for each AI system that makes consequential decisions affecting individuals (credit, employment, healthcare), calibrated to the impact level, and build delivery of these statements into customer-facing processes before go-live.
  • Run the IMDA AI Verify toolkit or an equivalent self-assessment against the Implementation and Self-Assessment Guide (ISAGO) for all material AI deployments, completing the first assessment within 90 days of launch and scheduling re-assessments at least every 12 months or after any significant model update.
  • Review AI-related procurement contracts and vendor due diligence questionnaires to include disclosure obligations aligned with the Framework, ensuring third-party AI suppliers can evidence governance controls equivalent to internal standards before contract execution.
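The first two action items above — a named governance owner per system and a central inventory reviewed at least annually — could be sketched as a small record type with an overdue-review check. Field names and the 365-day window are hypothetical illustrations; the Framework specifies the outcome (named accountability, periodic review), not a schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class InventoryEntry:
    """Hypothetical entry in a central AI inventory; fields are illustrative."""
    system_name: str
    governance_owner: str   # named accountable owner, e.g. "Chief Data Officer"
    oversight_level: str    # e.g. "human-in-the-loop", set by the risk assessment
    last_reviewed: date

    def review_overdue(self, today: date) -> bool:
        # "Reviewed at least annually" modelled as a 365-day window (assumption).
        return today - self.last_reviewed > timedelta(days=365)

entry = InventoryEntry(
    system_name="loan-approval-model",
    governance_owner="Chief Data Officer",
    oversight_level="human-in-the-loop",
    last_reviewed=date(2024, 1, 15),
)
print(entry.review_overdue(date(2025, 6, 1)))  # more than a year since review -> True
```

Keeping the owner and review date on the same record means a single inventory scan surfaces every system whose annual review has lapsed, and who is accountable for it.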

Frequently Asked Questions

Is the IMDA Model AI Governance Framework legally binding on Singapore companies?
No. The Framework is voluntary and principles-based. However, it is increasingly referenced in regulatory guidance from MAS and MOH, and appears in public sector procurement requirements, making adoption effectively expected in regulated industries even without a legal mandate.
Does the IMDA AI Governance Framework apply to multinational companies with Singapore operations?
Yes. Any private sector organisation that develops, procures, or deploys AI systems in Singapore is within scope, including multinationals with local operations. There is no minimum size threshold or sector restriction, though financial services, healthcare, and HR are the highest-priority areas.
How does the IMDA Framework relate to the AI Verify toolkit launched in 2022?
AI Verify is a software-based testing toolkit that operationalises the Framework's principles for technical validation. Organisations can use AI Verify to run structured assessments against the Framework's requirements and generate shareable test reports; it thus functions as the practical compliance layer on top of the governance principles.
What level of human oversight does the IMDA Framework require for automated AI decisions?
The Framework does not mandate a single standard. Organisations must assess each AI use case by probability and severity of harm, then assign and document a calibrated oversight level: full automation, human-in-the-loop, or human-on-the-loop. The higher the risk, the more direct human involvement is expected.
What explainability obligations does the IMDA Framework place on AI systems affecting individuals?
Organisations must provide explanations for AI-driven decisions at a level commensurate with the impact on the affected individual. For high-impact decisions such as credit, employment, or healthcare, plain-language explanations must be prepared in advance and built into customer-facing processes before deployment.
How does Singapore's IMDA Framework compare to the EU AI Act in terms of compliance obligations?
The IMDA Framework is voluntary and risk-calibrated, with no statutory penalties for non-compliance. The EU AI Act is binding law with fines up to 35 million euros or 7% of global turnover. Multinationals subject to both should treat the EU AI Act as the compliance floor and use the IMDA Framework to guide governance practices beyond mandatory requirements.