AI in Financial Services
Financial services firms face some of the most prescriptive AI compliance obligations globally. Credit decisioning, fraud detection, AML screening, and customer-facing AI are each subject to overlapping regulatory expectations from prudential supervisors, securities regulators, and consumer protection authorities. This topic covers the frameworks, enforcement actions, and governance practices most relevant to banks, insurers, asset managers, and fintechs.
Key board-level questions
1. Are our AI-driven credit, underwriting, and fraud models explainable to regulators and affected customers?
2. Do we have model risk management governance (an SR 11-7 equivalent) extended to AI systems?
3. How do we demonstrate that our AI tools do not produce discriminatory outcomes in lending or insurance?
4. Are we monitoring AI vendors and third-party models under the same risk framework as proprietary systems?
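The discrimination question is often operationalized with simple outcome-rate tests before any deeper fairness analysis. A minimal sketch of the "four-fifths rule" disparate impact screen for a credit model's approval decisions; the group labels, sample outcomes, and 0.8 threshold are illustrative assumptions, not a formula mandated by any regulator:

```python
# Illustrative four-fifths-rule screen for adverse impact in
# credit approvals. Data and threshold are hypothetical.

def selection_rate(decisions):
    """Fraction of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the
    reference group's; values below ~0.8 are a common red flag."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical approval outcomes (1 = approved, 0 = denied).
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # reference group: 70% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # protected group: 40% approved

ratio = disparate_impact_ratio(group_b, group_a)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40/0.70 -> 0.57
if ratio < 0.8:
    print("below four-fifths threshold: flag for fairness review")
```

A screen like this is only a first-pass detector; a finding below the threshold typically triggers a documented fairness review rather than an automatic conclusion of discrimination.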
Regulatory frameworks
Treasury Department AI Risk Management Framework for Financial Services
The US Treasury Department published this framework in February 2026 to translate NIST AI Risk Management Framework principles into 230 discrete operational control objectives tailored to the financial services sector. It applies to financial institutions subject to Treasury oversight, including banks, asset managers, insurance companies, and payment processors that develop or deploy AI systems. Covered entities are expected to implement controls spanning model lifecycle governance, identity resolution, data governance, and alignment with existing SOC 2 and NIST Cybersecurity Framework requirements.
EU Digital Operational Resilience Act
The EU Digital Operational Resilience Act (DORA), Regulation (EU) 2022/2554, establishes a comprehensive ICT risk management, incident reporting, operational resilience testing, and third-party risk oversight framework for EU financial entities, with direct implications for AI systems deployed in financial services and the technology providers that supply them.
Financial Stability Board AI in Finance
The FSB has published a thematic review and guidance on the use of artificial intelligence and machine learning in financial services, addressing financial stability risks, governance expectations, and supervisory considerations for financial institutions deploying AI/ML.
FATF AI Anti-Money Laundering Guidance
The FATF has issued guidance on the use of AI and machine learning in anti-money laundering, counter-terrorism financing, and proliferation financing compliance, setting expectations for responsible AI adoption in transaction monitoring, customer due diligence, and suspicious activity detection.
SEC AI Governance Guidance
The SEC has issued rules, guidance, and proposed rulemaking addressing AI governance obligations for registered investment advisers, broker-dealers, and public companies, focusing on conflicts of interest in predictive data analytics, AI-related disclosures in securities filings, and examination priorities targeting algorithmic systems.
Playbook guidance
- How do we apply a three lines of defense model to AI risk?
- How do we detect and mitigate algorithmic bias?
- What is our explainability standard for AI decisions?
- What does audit-ready AI documentation look like in practice?
- How do we ensure third-party AI vendors meet our standards?
- What does meaningful human oversight look like for high-risk AI decisions?
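Ongoing monitoring under an SR 11-7-style model risk framework usually includes checks for distribution drift between a model's development data and live inputs. A minimal sketch of the Population Stability Index (PSI), a metric widely used in bank model monitoring; the bin count, review thresholds, and sample scores here are assumptions for illustration:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline score
    distribution (expected) and a recent one (actual).
    Common (assumed) reading: < 0.1 stable, > 0.25 material
    shift warranting model review."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero range

    def bucket_shares(scores):
        counts = [0] * bins
        for s in scores:
            i = min(int((s - lo) / width), bins - 1)
            counts[i] += 1
        # Floor each share to avoid log(0) on empty buckets.
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                      # development scores
recent = [min(i / 100 + 0.15, 0.999) for i in range(100)]     # drifted live scores
value = psi(baseline, recent)
print(f"PSI = {value:.3f}")
```

A drift alert from a check like this does not by itself answer the human-oversight question above, but it is the kind of documented, repeatable evidence that audit-ready AI monitoring typically produces.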
