Financial Stability Board AI in Finance
The FSB's thematic review and guidance on the use of artificial intelligence and machine learning in financial services, addressing financial stability risks, governance expectations, and supervisory considerations for financial institutions deploying AI/ML.
Overview
The Financial Stability Board (FSB) published its report on artificial intelligence and machine learning in financial services in November 2017, with subsequent thematic reviews and follow-up publications, including a 2022 report examining AI governance and risk management frameworks. The FSB, as the international body tasked with monitoring and making recommendations about the global financial system, identified AI and ML adoption in financial services as a potential source of systemic risk warranting coordinated international attention.

The guidance addresses how AI/ML tools are used across financial intermediation, credit underwriting, market trading, regulatory compliance, and risk management. The FSB highlights concerns including model opacity, data dependency, third-party concentration risk stemming from reliance on a small number of cloud and data providers, and the potential for correlated behavior across institutions using similar algorithms.

The FSB does not issue binding regulations, but its recommendations carry significant weight in shaping the policy agendas of member jurisdictions, including the G20 nations. National supervisors and standard-setting bodies routinely reference FSB outputs when developing their own domestic AI governance frameworks for financial institutions. The FSB collaborates closely with the Basel Committee on Banking Supervision (BCBS), the International Organization of Securities Commissions (IOSCO), and the Committee on Payments and Market Infrastructures (CPMI) in developing cross-sectoral guidance.
Key Requirements
- Financial institutions should maintain robust model risk management frameworks that explicitly address AI and ML models, including validation, monitoring, and documentation.
- Governance structures must assign clear accountability for AI/ML decisions at senior management and board level.
- Institutions should assess and mitigate third-party and concentration risks arising from reliance on common cloud service providers and data vendors.
- Explainability and transparency of AI/ML outputs should be commensurate with the materiality and risk of the application.
- Supervisors are encouraged to develop AI-specific examination and oversight capabilities.
- Cross-border regulatory cooperation and information sharing on AI-related risks are recommended.
- Institutions should monitor AI/ML systems for emergent systemic risk patterns, including herding behavior and correlated model outputs.
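The last requirement, monitoring for correlated model outputs, lends itself to a simple illustration. The sketch below is not from the FSB guidance; the function name, data format, and 0.9 threshold are illustrative assumptions. It flags pairs of model output series whose pairwise Pearson correlation exceeds a threshold, which is one crude proxy for the herding and correlated-output patterns the FSB describes:

```python
import numpy as np

def correlated_output_alert(outputs, threshold=0.9):
    """Flag model pairs whose output series are highly correlated.

    outputs: dict mapping model name -> sequence of scores over time
             (all sequences the same length). Hypothetical format.
    Returns a list of (model_a, model_b, correlation) tuples where
    |correlation| >= threshold.
    """
    names = list(outputs)
    series = np.array([outputs[n] for n in names], dtype=float)
    corr = np.corrcoef(series)  # pairwise Pearson correlation matrix
    alerts = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                alerts.append((names[i], names[j], float(corr[i, j])))
    return alerts
```

In practice such a check would run over standardized outputs (e.g. credit scores or trading signals) collected across business lines or, at the supervisory level, across institutions; a simple correlation threshold is only a starting point before more robust techniques such as rolling-window or rank-based measures.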
Effective Date
2022-10-12