AI Regulation in Singapore
Singapore has built a principles-based AI governance framework designed to position the country as a trusted hub for responsible AI development in Asia. The Monetary Authority of Singapore's FEAT Principles establish fairness, ethics, accountability, and transparency expectations for financial institutions using AI in customer-facing and risk decisions. The IMDA Model AI Governance Framework (2019, updated 2020) provides sector-agnostic principles for responsible AI deployment applicable to any organisation operating in Singapore.
Singapore's approach is collaborative rather than prescriptive. The Veritas Consortium, led by MAS, has developed technical testing methodologies for assessing AI fairness in financial services — including credit risk, insurance underwriting, and customer segmentation — giving institutions a concrete path to demonstrating FEAT compliance. The AI Verify testing framework, developed by IMDA, provides a structured way to evaluate AI systems against governance principles and generate evidence for stakeholder reporting.
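To make the idea of technical fairness testing concrete, the sketch below computes a simple group-fairness metric (demographic parity difference) over hypothetical credit-approval decisions. This is an illustrative example only: the metric choice, the sample data, and the tolerance threshold are assumptions for demonstration, not the Veritas Consortium's actual test suite or thresholds.

```python
# Illustrative group-fairness check in the spirit of documented,
# testable FEAT compliance. Metric, data, and threshold are
# assumptions for demonstration purposes.

def approval_rate(decisions):
    """Fraction of positive (approval) decisions, e.g. in credit risk."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in approval rates between two demographic groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical model outputs: 1 = approved, 0 = declined.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # approval rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")

# A deployment-specific tolerance would be set by the institution
# as part of its governance documentation; 0.10 here is arbitrary.
THRESHOLD = 0.10
print("within tolerance" if gap <= THRESHOLD else "flag for review")
```

In practice, an institution would run checks like this across multiple metrics and protected attributes and retain the results as evidence of its governance posture, which is the kind of testing-based documentation the section describes.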
The Personal Data Protection Act (PDPA) has been updated to address AI-related data obligations, including guidance on using personal data in model training and the application of data protection requirements to automated decision-making. Singapore's regulatory environment is relatively permissive compared to the EU but increasingly structured, with growing expectations that organizations document their AI governance posture and demonstrate compliance through testing rather than policy alone.
Key themes
1. MAS FEAT Principles — fairness, ethics, accountability, and transparency in financial AI
2. IMDA Model AI Governance Framework for responsible deployment
3. Veritas Consortium technical fairness-testing methodologies
4. PDPA obligations for AI training data and automated decisions
Regulatory frameworks and guidance
IMDA Model AI Governance Framework
A voluntary, principles-based framework issued by Singapore's IMDA and PDPC providing organisations with detailed guidance on deploying AI responsibly, covering human oversight, decision-making accountability, and operational transparency.
Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector
The MAS FEAT Principles establish a voluntary but authoritative framework for financial institutions in Singapore to govern the responsible use of AI and data analytics, structured around four principles: Fairness, Ethics, Accountability, and Transparency.
Singapore Global AI Assurance Sandbox
The Singapore Global AI Assurance Sandbox provides a controlled regulatory environment for organisations to test and validate AI systems against safety, reliability, and accountability standards before or during deployment. It is open to enterprises and AI developers seeking structured assurance pathways under Singaporean regulatory oversight. Participants operate under defined sandbox conditions that temporarily relax certain compliance obligations in exchange for structured testing commitments and reporting.
Singapore Personal Data Protection Act – AI and Automated Decision-Making Amendments
The Personal Data Protection (Amendment) Act 2020 and accompanying PDPC advisory guidelines address the use of personal data in AI and automated decision-making, introducing mandatory data breach notification, expanded accountability obligations, and guidance on responsible AI deployment under Singapore's Model AI Governance Framework.
