AI Governance Institute


FATF AI Anti-Money Laundering Guidance

Issued by

Financial Action Task Force (FATF)

Effective: 2021-10-01 · Verified: April 2026

FATF guidance on the use of AI and machine learning in anti-money laundering, counter-terrorism financing, and proliferation financing compliance, setting expectations for responsible AI adoption in transaction monitoring, customer due diligence, and suspicious activity detection.

Applies To

Banks, financial institutions, money services businesses, virtual asset service providers (VASPs), insurance companies, real estate agents, and all other entities subject to FATF Recommendations in member jurisdictions. Also relevant to national financial intelligence units (FIUs), AML supervisors, and technology vendors supplying AI-based transaction monitoring, KYC, and customer risk-scoring solutions.

Overview

The Financial Action Task Force (FATF), the global standard-setter for anti-money laundering (AML) and counter-terrorism financing (CFT), published its guidance on the opportunities and challenges of new technologies for AML/CFT in 2021, with specific attention to artificial intelligence and machine learning applications. FATF acknowledges that AI and ML tools offer significant potential for enhancing the detection of financial crime, improving customer risk scoring, and reducing false positives in transaction monitoring systems. At the same time, FATF identifies risks associated with AI adoption in the AML/CFT context, including model bias, lack of explainability in regulatory submissions, data quality issues, and the risk that over-reliance on automated systems could undermine human judgment and accountability.

The guidance is addressed both to private sector reporting entities (including banks, money services businesses, virtual asset service providers, and other obligated entities) and to national financial intelligence units and supervisors. FATF does not have binding legislative authority, but its Recommendations form the basis of AML/CFT legislation in over 200 jurisdictions, and non-compliance with FATF standards can result in a jurisdiction being placed on the FATF grey list or black list, with severe reputational and market-access consequences. The guidance encourages regulators to adopt a risk-based, technology-neutral approach when evaluating AI-based compliance systems, and supports regulatory sandboxes and public-private partnerships as mechanisms for advancing responsible AI innovation in financial crime compliance.

Key Requirements

  • Reporting entities using AI/ML for AML/CFT must be able to demonstrate to supervisors how their models arrive at risk ratings and suspicious activity flags—adequate explainability is required.
  • Model governance frameworks must include regular validation, back-testing, and performance monitoring of AI/ML systems used in financial crime detection.
  • Data used to train AI/ML models must be relevant, of sufficient quality, and free from biases that could result in discriminatory or ineffective outcomes.
  • Human oversight must be maintained; AI/ML outputs should augment rather than replace the judgment of qualified AML compliance officers.
  • Third-party AI vendors must be subject to due diligence, and contractual arrangements must preserve the institution's ability to meet its regulatory obligations.
  • Supervisors should develop the technical capacity to assess and audit AI/ML-based AML systems during examinations.
  • Jurisdictions and financial institutions are encouraged to participate in public-private partnerships to improve financial crime data sharing and AI model development.

What Your Organization Must Do

  • Establish an AI model governance framework covering validation, back-testing, and ongoing performance monitoring for all ML systems used in transaction monitoring, customer risk scoring, and suspicious activity detection, assigning ownership to the Chief Compliance Officer or Head of AML.
  • Conduct explainability assessments for each AI/ML model in production and prepare regulator-ready documentation showing how the model generates risk ratings and suspicious activity flags, updated at least annually or following any significant model change.
  • Audit training data sets for quality, relevance, and bias before deployment and on a scheduled basis (at minimum annually), engaging the data governance team to remediate gaps and document findings for examiner review.
  • Implement a human-in-the-loop control requiring qualified AML compliance officers to review and approve AI-generated outputs before filing suspicious activity reports or applying high-risk customer designations, with clear escalation procedures.
  • Perform third-party due diligence on all AI/ML vendors supplying AML-related tools, ensuring contracts explicitly preserve the institution's ability to access model logic, audit results, and substitute vendors without regulatory disruption, reviewed at onboarding and at each contract renewal.
  • Engage with national regulators and financial intelligence units through available public-private partnership or regulatory sandbox programs to stay aligned with evolving supervisory expectations for AI-based AML systems and reduce the risk of adverse findings during examinations.
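To make the validation and back-testing step more concrete, the sketch below shows two checks a model governance team might run against an AI transaction-monitoring model: alert precision and recall measured against analyst dispositions, and a population stability index (PSI) to flag drift between the score distribution the model was validated on and current production traffic. All function names, inputs, and thresholds here are illustrative assumptions, not metrics or values mandated by FATF.

```python
# Illustrative back-testing checks for an AML alert model.
# Hypothetical sketch: the metrics and the PSI rule of thumb below are
# common industry practice, not FATF-prescribed requirements.
import math

def alert_precision_recall(flagged, confirmed):
    """flagged: alert IDs the model raised; confirmed: IDs analysts
    upheld as genuinely suspicious (e.g. those that led to a SAR filing).
    Returns (precision, recall) of the model's alerts."""
    flagged, confirmed = set(flagged), set(confirmed)
    true_pos = len(flagged & confirmed)
    precision = true_pos / len(flagged) if flagged else 0.0
    recall = true_pos / len(confirmed) if confirmed else 0.0
    return precision, recall

def population_stability_index(baseline, current, bins=10):
    """PSI between two samples of model risk scores in [0, 1].
    A common informal rule of thumb treats PSI > 0.25 as major drift
    warranting model review."""
    def distribution(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int(s * bins), bins - 1)] += 1
        total = len(scores)
        # Smooth empty bins so the log term below stays defined.
        return [max(c / total, 1e-6) for c in counts]
    base, cur = distribution(baseline), distribution(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))
```

A review run might compute these quarterly, record the results in the model's validation file, and escalate to the model owner when precision falls or PSI exceeds the agreed drift threshold, giving examiners the documented performance-monitoring trail the guidance expects.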


Frequently Asked Questions

Is the FATF AI AML Guidance legally binding on banks and financial institutions?
No. FATF guidance is non-binding, but it carries significant indirect authority because FATF Recommendations underpin AML/CFT legislation in over 200 jurisdictions. National supervisors routinely incorporate FATF standards into examination frameworks, making compliance a practical necessity even without direct legal force.
What explainability standard does FATF expect for AI-based transaction monitoring systems?
FATF requires that reporting entities be able to demonstrate to supervisors how their AI and ML models generate risk ratings and suspicious activity flags. Institutions should maintain regulator-ready documentation of model logic, updated at least annually or after any significant model change.
Can a bank rely entirely on AI outputs to file a suspicious activity report under FATF standards?
No. FATF explicitly requires that human oversight be maintained and that AI outputs augment rather than replace the judgment of qualified AML compliance officers. A human-in-the-loop control must be in place before filing suspicious activity reports or assigning high-risk customer designations.
What due diligence obligations apply when a financial institution uses a third-party AI vendor for AML compliance?
Institutions must perform vendor due diligence and ensure contracts preserve access to model logic, audit rights, and the ability to substitute vendors without regulatory disruption. These arrangements should be reviewed at onboarding and at each contract renewal.
How does the FATF AI AML Guidance address model bias and data quality risks?
The guidance requires that training data be relevant, of sufficient quality, and free from biases that could produce discriminatory or ineffective outcomes. Institutions should audit datasets before deployment and on a scheduled basis, documenting findings for examiner review.
What are the consequences for a jurisdiction that fails to align its AML framework with FATF standards on AI?
Jurisdictions that fall out of compliance with FATF standards risk being placed on the FATF grey list or black list, which can trigger severe reputational damage and restricted market access. This creates strong regulatory pressure on national supervisors to incorporate FATF AI guidance into domestic examination practices.