Financial Stability Board AI in Finance
Issued by
Financial Stability Board (FSB)
The FSB's thematic review and guidance on the use of artificial intelligence and machine learning in financial services, addressing financial stability risks, governance expectations, and supervisory considerations for financial institutions deploying AI/ML.
Overview
The Financial Stability Board (FSB) published its report on artificial intelligence and machine learning in financial services in November 2017, with subsequent thematic reviews and follow-up publications including a 2022 report examining AI governance and risk management frameworks. The FSB, as the international body tasked with monitoring and making recommendations about the global financial system, identified AI and ML adoption in financial services as a potential source of systemic risk warranting coordinated international attention.

The guidance addresses how AI/ML tools are used across financial intermediation, credit underwriting, market trading, regulatory compliance, and risk management. The FSB highlights concerns including model opacity, data dependency, third-party concentration risk stemming from reliance on a small number of cloud and data providers, and the potential for correlated behavior across institutions using similar algorithms.

The FSB does not issue binding regulations, but its recommendations carry significant weight in shaping the policy agendas of member jurisdictions, including the G20 nations. National supervisors and standard-setting bodies routinely reference FSB outputs when developing their own domestic AI governance frameworks for financial institutions. The FSB collaborates closely with the Basel Committee on Banking Supervision (BCBS), the International Organization of Securities Commissions (IOSCO), and the Committee on Payments and Market Infrastructures (CPMI) in developing cross-sectoral guidance.
Key Requirements
- Financial institutions should maintain robust model risk management frameworks that explicitly address AI and ML models, including validation, monitoring, and documentation.
- Governance structures must assign clear accountability for AI/ML decisions at senior management and board level.
- Institutions should assess and mitigate third-party and concentration risks arising from reliance on common cloud service providers and data vendors.
- Explainability and transparency of AI/ML outputs should be commensurate with the materiality and risk of the application.
- Supervisors are encouraged to develop AI-specific examination and oversight capabilities.
- Cross-border regulatory cooperation and information sharing on AI-related risks are recommended.
- Institutions should monitor AI/ML systems for emergent systemic risk patterns, including herding behavior and correlated model outputs.
What Your Organization Must Do
- Appoint a named senior executive (Chief Risk Officer or equivalent) as the accountable owner for AI/ML governance, and ensure board-level oversight is documented in committee charters and meeting minutes no later than the next governance review cycle.
- Expand the existing model risk management framework to explicitly classify and inventory all AI/ML models, assigning each a materiality tier that determines validation frequency, documentation depth, and explainability requirements.
- Conduct a third-party concentration risk assessment covering all cloud service providers and data vendors supporting AI/ML operations, identifying single points of failure where two or more critical models share the same provider, and develop contingency plans for high-concentration exposures.
- Implement ongoing monitoring controls for correlated model behavior across the institution's AI/ML portfolio, including regular stress testing for herding or synchronized output scenarios that could contribute to systemic risk.
- Align internal AI governance documentation and reporting to FSB terminology and expectations so that outputs are readily usable when domestic regulators in G20 jurisdictions conduct AI-specific examinations referencing FSB guidance.
- Establish a regulatory horizon-scanning process, assigning responsibility to the compliance function, to track how BCBS, IOSCO, and CPMI translate FSB recommendations into binding domestic rules, and update internal policies within 90 days of any relevant domestic rule finalization.
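The model inventory and concentration-risk steps above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the model names, provider names, tier labels, and validation cadences are all hypothetical, and a real inventory would live in the institution's model risk management system.

```python
from collections import defaultdict

# Hypothetical AI/ML model inventory. Each entry records the model's
# materiality tier (which drives validation frequency, documentation depth,
# and explainability requirements) and its third-party dependencies.
MODEL_INVENTORY = [
    {"model": "credit_underwriting_v3",  "tier": "high",   "providers": ["CloudCo", "DataVendorA"]},
    {"model": "fraud_scoring_v1",        "tier": "high",   "providers": ["CloudCo"]},
    {"model": "chat_triage_bot",         "tier": "low",    "providers": ["CloudCo"]},
    {"model": "market_liquidity_model",  "tier": "medium", "providers": ["DataVendorB"]},
]

# Illustrative tier policy: validation cadence in months per materiality tier.
VALIDATION_CADENCE_MONTHS = {"high": 6, "medium": 12, "low": 24}

def concentration_report(inventory, critical_tiers=("high", "medium")):
    """Flag single points of failure: any provider supporting two or more
    critical-tier models, per the concentration-risk assessment step."""
    by_provider = defaultdict(list)
    for entry in inventory:
        if entry["tier"] in critical_tiers:
            for provider in entry["providers"]:
                by_provider[provider].append(entry["model"])
    return {p: models for p, models in by_provider.items() if len(models) >= 2}

if __name__ == "__main__":
    for provider, models in concentration_report(MODEL_INVENTORY).items():
        print(f"Concentration exposure: {provider} supports {models}")
```

Note that the low-tier chatbot sharing a cloud provider does not by itself trigger a flag in this sketch; only shared dependencies among critical-tier models do, which mirrors the materiality-based proportionality the guidance describes.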
Frequently Asked Questions
- Is the FSB AI in Finance guidance legally binding on financial institutions?
- No. The FSB does not issue binding regulations. Its guidance carries significant influence because G20 member jurisdictions and standard-setting bodies such as BCBS and IOSCO routinely use FSB outputs as the basis for developing binding domestic rules, so non-compliance with FSB expectations often signals future regulatory exposure.
- Which types of financial institutions does the FSB AI guidance cover?
- The guidance applies broadly across financial services, including banks, insurance companies, asset managers, broker-dealers, payment service providers, and financial market infrastructure operators. Technology and data vendors supplying AI services to these institutions are also within scope of the third-party and concentration risk considerations.
- What does the FSB require regarding third-party concentration risk in AI deployments?
- Institutions are expected to assess concentration risk arising from shared reliance on a small number of cloud providers and data vendors. Where multiple critical AI/ML models depend on the same provider, institutions should identify those single points of failure and maintain documented contingency plans to address high-concentration exposures.
- How does the FSB AI guidance relate to BCBS and IOSCO requirements?
- The FSB collaborates directly with BCBS, IOSCO, and CPMI to develop cross-sectoral AI guidance. FSB recommendations typically precede and inform binding rules issued by those bodies, so institutions should monitor how each standard-setter translates FSB expectations into enforceable domestic requirements within their sector.
- What level of explainability does the FSB expect for AI and ML models in financial services?
- The FSB takes a proportionality approach: explainability and transparency requirements should be commensurate with the materiality and risk of the specific application. Higher-risk uses such as credit underwriting or systemic risk management warrant greater interpretability standards than lower-stakes operational tools.
- Does the FSB AI guidance address systemic risk from institutions using similar algorithms?
- Yes. The FSB specifically identifies correlated model behavior and herding risk as systemic concerns when multiple institutions deploy similar algorithms. Institutions are expected to monitor their AI/ML portfolios for synchronized outputs and conduct stress testing scenarios that simulate the market-wide impact of correlated model decisions.
