AI Governance Institute

Practical Governance for Enterprise AI

Veritas Consortium AI Fairness Testing Methodology

Issued by

Monetary Authority of Singapore (MAS), in partnership with the Veritas Consortium of financial institutions

Status: Live · Effective: 2022-01-01 · Framework: Veritas FEAT · Verified: May 2026

A sector-specific AI fairness, ethics, accountability, and transparency (FEAT) assessment methodology developed by MAS and a consortium of financial institutions to operationalise responsible AI in financial services.

Applies To

Large enterprise · AI developer · AI deployer

Overview

The Veritas Consortium was established by the Monetary Authority of Singapore (MAS) in 2019 to translate its Fairness, Ethics, Accountability and Transparency (FEAT) Principles for financial institutions into concrete, testable assessment methodologies. The consortium has included major banks, insurers, and fintech firms operating in Singapore. The Veritas methodology provides financial institutions with a practical toolkit for assessing AI-driven use cases against each of the four FEAT dimensions.

Phase 1 of the project focused on three financial sector use cases: customer marketing, credit risk scoring, and fraud detection. Phase 2 expanded the methodology and produced open-source assessment tools. Phase 3 extended the work to additional use cases and incorporated feedback from pilot deployments. The Veritas framework operationalises FEAT through quantitative fairness metrics, documentation requirements, and governance process checks tailored to the data and model types common in financial services.

MAS has referenced the Veritas methodology in its broader guidance to financial institutions on responsible AI; it is treated as an expected standard of practice rather than a legally binding mandate. Financial institutions subject to MAS oversight should nonetheless treat Veritas alignment as part of their supervisory risk posture.

Key Requirements

  • Financial institutions are expected to assess AI systems against the four FEAT dimensions: Fairness, Ethics, Accountability, and Transparency
  • Fairness assessments must include quantitative metric selection appropriate to the use case, with documented justification for chosen metrics
  • Credit risk scoring, customer marketing, and fraud detection AI systems are specifically addressed with use-case-specific testing protocols
  • Governance documentation must evidence human oversight arrangements, escalation procedures, and model lifecycle management
  • Institutions should use the open-source assessment tools developed by the consortium, or document materially equivalent alternatives
  • Institutions must maintain audit trails of FEAT assessments sufficient to support MAS supervisory review
  • Periodic reassessment of AI systems is expected as models drift or deployment contexts change
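To make the fairness-metric requirement concrete, the sketch below computes two group-fairness metrics that are commonly considered for credit-approval use cases: demographic parity difference (gap in approval rates) and equal opportunity difference (gap in true-positive rates). This is a generic illustration using plain NumPy, not the consortium's own toolkit; the variable names and the toy data are hypothetical, and which metric is appropriate (and with what threshold) must still be justified per use case as the methodology requires.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Approval-rate gap between group 0 and group 1 (1 = approved)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """True-positive-rate gap between group 0 and group 1."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Hypothetical model outputs for eight applicants across two groups
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # ground-truth creditworthiness
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])  # model approval decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-attribute group

print(demographic_parity_diff(y_pred, group))          # prints -0.25
print(equal_opportunity_diff(y_true, y_pred, group))   # prints ~ -0.333
```

Documenting why one metric was chosen over the other (e.g. equal opportunity when false rejections of creditworthy applicants are the primary harm) is itself part of the evidence trail the methodology expects.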

What Your Organization Must Do

  • Conduct a FEAT gap analysis across all production AI models used in customer-facing financial services decisions, starting with credit, marketing, and fraud use cases
  • Integrate the Veritas open-source assessment toolkit into model risk management workflows and document the metric selection rationale for each use case
  • Align FEAT documentation with existing MAS Technology Risk Management Guidelines and Notice requirements to create a unified AI risk file
  • Establish a model reassessment schedule that triggers FEAT reviews upon material model changes, data drift events, or changes in deployment population
  • Brief the model risk committee and board risk committee on Veritas methodology outputs as part of regular AI governance reporting
  • Engage internal audit to incorporate Veritas assessment coverage into the AI audit programme, given MAS supervisory expectations
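A reassessment schedule needs an objective drift trigger. One widely used model-risk heuristic (an assumption here, not something the Veritas methodology mandates) is the Population Stability Index (PSI) over the model's score distribution, with conventional cut-offs of < 0.1 stable, 0.1–0.25 investigate, and > 0.25 significant shift. A minimal sketch:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and the current one.

    Bins are fixed from the baseline so repeated checks are comparable.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) / division by zero in sparse bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic credit scores: validation-time baseline vs a shifted
# production population (hypothetical numbers for illustration)
rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)
current  = rng.normal(560, 50, 10_000)

psi = population_stability_index(baseline, current)
if psi > 0.25:  # conventional "significant shift" threshold
    print(f"PSI {psi:.3f}: trigger FEAT reassessment")
```

In practice such a check would run per segment and per feature as well as on the overall score, and a triggered reassessment would re-run the documented fairness metrics, not just the drift statistic.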