AI Verify Testing Framework
Issued by
Infocomm Media Development Authority (IMDA), Government of Singapore
A voluntary AI governance testing framework and toolkit developed by IMDA that enables organisations to demonstrate responsible AI through standardised technical tests and process checks.
Applies To
Organisations in Singapore and internationally that develop or deploy AI systems and opt in to the voluntary framework, with particular relevance to customer-facing and high-risk applications.
Overview
AI Verify is Singapore's AI governance testing framework, first launched by the Infocomm Media Development Authority (IMDA) as a minimum viable product in June 2022 and subsequently expanded through iterative releases and an international pilot. The framework gives organisations a structured methodology to test and document AI systems against a set of internationally recognised AI ethics principles, including transparency, explainability, fairness, safety, and accountability.

AI Verify combines two components: technical testing of AI models, using automated tests integrated into a software toolkit, and process checks that assess the governance and documentation practices surrounding AI development and deployment. Results are compiled into an AI Verify report, which organisations can share with regulators, business partners, or customers as evidence of responsible AI practices.

The framework is aligned with major international AI governance references, including the OECD AI Principles, the EU's Assessment List for Trustworthy AI (ALTAI), and ISO/IEC standards. IMDA has also worked with international partners, including the US National Institute of Standards and Technology (NIST) and the UK Information Commissioner's Office (ICO), to explore interoperability with frameworks such as the NIST AI RMF. AI Verify is voluntary and carries no enforcement consequences, but it is increasingly referenced in public sector procurement and financial sector guidance in Singapore. A foundation model evaluation extension was introduced in 2023 to address risks specific to large language models.
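To make the technical-testing component concrete, the sketch below computes a demographic parity difference, a fairness metric of the kind such toolkits report for classification models. This is a minimal illustration in plain Python, not the AI Verify toolkit's actual API; the function and variable names are hypothetical.

```python
# Minimal sketch of a demographic parity check, the kind of automated
# fairness test an AI Verify-style toolkit runs over model outputs.
# This is NOT the AI Verify toolkit's API; names here are hypothetical.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (e.g. a protected attribute)
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: group "A" gets positive predictions 75% of the time,
# group "B" only 25%, so the gap is 0.5. The organisation's own test
# plan would define the threshold at which this gap is flagged.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```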
Key Requirements
- Voluntary participation; there is no legally mandated obligation for private sector organisations
- Organisations using the framework must test AI systems against eleven AI ethics principles, including fairness, transparency, explainability, robustness, and data governance
- Technical tests must be conducted using the AI Verify software toolkit or an equivalent approved methodology
- Process checks require documented evidence of internal governance practices, including risk management, human oversight, and stakeholder engagement
- AI Verify reports must accurately reflect test results and must not be used in a misleading manner
- The foundation model evaluation module applies additional checks for large language models covering safety, hallucination, and alignment properties (see the sketch after this list)
- Organisations should maintain records of testing artefacts sufficient to support third-party verification if requested
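As a rough illustration of what a hallucination or safety check for a foundation model can look like, the sketch below runs a small prompt set through a model callable and scores each response against expected content. Everything here (the `model_fn` callable, the prompt set, the substring scoring rule) is a hypothetical stand-in, not the AI Verify foundation model module.

```python
# Hypothetical sketch of a minimal LLM evaluation loop: send a fixed
# prompt set to a model, score each response against a reference, and
# report an aggregate pass rate. Not the AI Verify module's actual API.

# Each case pairs a prompt with substrings a grounded answer must contain.
EVAL_CASES = [
    {"prompt": "What year was IMDA's AI Verify MVP launched?",
     "must_contain": ["2022"]},
    {"prompt": "Is AI Verify legally mandatory for private companies?",
     "must_contain": ["voluntary"]},
]

def run_eval(model_fn, cases):
    """model_fn: any callable str -> str (your LLM client wrapper)."""
    results = []
    for case in cases:
        answer = model_fn(case["prompt"]).lower()
        passed = all(s.lower() in answer for s in case["must_contain"])
        results.append({"prompt": case["prompt"], "passed": passed})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate, results

if __name__ == "__main__":
    # Stub model for demonstration; swap in a real client wrapper.
    stub = lambda p: "AI Verify is a voluntary framework launched in 2022."
    rate, detail = run_eval(stub, EVAL_CASES)
    print(f"pass rate: {rate:.0%}")  # 100% with the stub above
```

Substring matching is deliberately crude; in practice the scoring rule is where most of the design effort goes, and records of the prompt set and scores belong with the testing artefacts kept for third-party verification.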
What Your Organization Must Do
- Conduct a scoping exercise to identify which AI systems in your Singapore operations are candidates for AI Verify testing, prioritising customer-facing and high-risk applications
- Assign a technical lead to integrate the AI Verify toolkit into existing model validation and MLOps pipelines to reduce duplication of effort (a CI-gate sketch follows this list)
- Prepare process check documentation by cross-referencing existing AI governance policies against the eleven AI ethics principles assessed by the framework
- Use AI Verify reports as a disclosure artefact in vendor due diligence, regulatory engagement, and public sector tender responses
- Engage with IMDA's AI Verify Foundation for updates to the toolkit, particularly the generative AI and foundation model modules under active development
- Assess interoperability between AI Verify outputs and requirements under the EU AI Act, NIST AI RMF, and ISO/IEC 42001 to maximise reuse across jurisdictions
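One pragmatic way to integrate such tests into an existing MLOps pipeline is to run them as an automated gate and emit a machine-readable artefact that can later feed the AI Verify report. The sketch below uses pytest-style conventions; the threshold, output path, and `parity_gap` helper (a compact version of the earlier fairness sketch) are illustrative assumptions, not toolkit or IMDA requirements.

```python
# Hypothetical CI gate: run a governance check alongside model validation
# and persist the result as a JSON artefact for later report assembly.
# The threshold and paths are illustrative assumptions, not IMDA values.
import json
from collections import defaultdict
from pathlib import Path

FAIRNESS_THRESHOLD = 0.1   # example internal limit from your own test plan
ARTIFACT_DIR = Path("artifacts")

def parity_gap(predictions, groups):
    """Max gap in positive-prediction rates across groups (as sketched earlier)."""
    pos, tot = defaultdict(int), defaultdict(int)
    for p, g in zip(predictions, groups):
        tot[g] += 1
        pos[g] += int(p)
    rates = [pos[g] / tot[g] for g in tot]
    return max(rates) - min(rates)

def test_fairness_gate():
    # In a real pipeline, predictions/groups come from a held-out set.
    preds  = [1, 1, 1, 0, 1, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = parity_gap(preds, groups)
    ARTIFACT_DIR.mkdir(exist_ok=True)
    # Persist a machine-readable artefact so the same evidence backs
    # CI gating, third-party verification, and report assembly.
    (ARTIFACT_DIR / "governance_checks.json").write_text(
        json.dumps({"demographic_parity_gap": gap}, indent=2))
    assert gap <= FAIRNESS_THRESHOLD, f"fairness gate failed: gap={gap:.2f}"
```

Running this in CI makes the fairness check a release blocker rather than a one-off exercise, which is what lets the same evidence be reused across vendor due diligence and regulatory engagement.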
