Treasury Department AI Risk Management Framework for Financial Services
TREAS-AIRMF · US Department of the Treasury
The US Treasury Department published this framework in February 2026 to translate NIST AI Risk Management Framework principles into 230 discrete operational control objectives tailored to the financial services sector. It applies to financial institutions subject to Treasury oversight, including banks, asset managers, insurance companies, and payment processors that develop or deploy AI systems. Covered entities are expected to implement controls spanning model lifecycle governance, identity resolution, data governance, and alignment with existing SOC 2 and NIST Cybersecurity Framework requirements.
Overview
The Treasury Department AI Risk Management Framework for Financial Services operationalizes the NIST AI RMF's four core functions (Govern, Map, Measure, Manage) through 230 enumerated control objectives specifically designed for financial services contexts. The framework addresses the full AI model lifecycle, from data sourcing and training through deployment, monitoring, and decommissioning, and establishes expectations for model documentation, validation, and ongoing performance review. A dedicated section on identity resolution addresses AI-driven customer due diligence and fraud detection systems, reflecting heightened concern over model-driven decisions in regulated activities such as credit underwriting and anti-money laundering screening. The framework integrates directly with SOC 2 Type II audit criteria and the NIST Cybersecurity Framework, enabling institutions to consolidate AI risk controls within existing compliance programs rather than building parallel structures. Treasury has indicated that examination staff will reference these control objectives during supervisory reviews, effectively elevating the framework to a supervisory expectation even absent formal rulemaking. Institutions are expected to conduct gap assessments against the 230 control objectives and document remediation timelines as part of their AI risk governance programs.
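The documented linkage between the framework's control objectives and existing SOC 2 and NIST CSF controls can be illustrated with a minimal cross-walk structure. This is a sketch only: the objective ID `DATA-012`, the description, and the function `linked_criteria` are hypothetical, and the SOC 2 criteria and NIST CSF subcategory codes are shown purely as examples of the kind of references an institution might record, not mappings prescribed by Treasury.

```python
# Hypothetical cross-walk linking one framework control objective to
# existing compliance control sets. All IDs and mappings below are
# illustrative, not taken from the framework itself.
CONTROL_CROSSWALK = {
    "DATA-012": {
        "description": "Maintain lineage records for AI training data",
        "soc2_criteria": ["CC6.1", "CC7.2"],   # example SOC 2 trust services criteria
        "nist_csf": ["ID.AM-3", "PR.DS-1"],    # example NIST CSF subcategories
    },
}

def linked_criteria(objective_id: str) -> list[str]:
    """Return all external control references linked to one objective."""
    entry = CONTROL_CROSSWALK[objective_id]
    return entry["soc2_criteria"] + entry["nist_csf"]

print(linked_criteria("DATA-012"))
```

Keeping the cross-walk as data rather than prose lets an institution generate the linkage documentation the framework expects directly from its control inventory.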
Key Requirements
- Implement governance structures covering all 230 operational control objectives across the AI model lifecycle, including design, development, validation, deployment, monitoring, and retirement
- Maintain model documentation and validation records sufficient to demonstrate alignment with NIST AI RMF Govern, Map, Measure, and Manage functions
- Establish data governance controls addressing data provenance, quality, lineage, and access management for all AI training and inference data
- Integrate AI risk controls with existing SOC 2 and NIST Cybersecurity Framework compliance programs and document linkages between control sets
- Apply heightened controls to identity resolution and regulated-decision AI systems, including credit underwriting, fraud detection, and AML screening models
- Conduct and document gap assessments against the 230 control objectives with remediation timelines available for examination by Treasury supervisory staff
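The gap-assessment expectation above can be sketched as a simple tracking structure. Everything here is an assumption for illustration: the `ControlObjective` record, the ID scheme, and `gap_summary` are hypothetical, standing in for however an institution inventories the 230 objectives and their remediation timelines.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record for one of the framework's 230 control objectives.
@dataclass
class ControlObjective:
    objective_id: str                        # illustrative ID scheme, e.g. "GOV-001"
    nist_function: str                       # "Govern", "Map", "Measure", or "Manage"
    implemented: bool
    remediation_due: Optional[date] = None   # expected when not yet implemented

def gap_summary(objectives: list[ControlObjective]) -> dict:
    """Summarize open gaps per NIST AI RMF function and flag missing timelines."""
    gaps = [o for o in objectives if not o.implemented]
    missing_timeline = [o.objective_id for o in gaps if o.remediation_due is None]
    per_function: dict[str, int] = {}
    for o in gaps:
        per_function[o.nist_function] = per_function.get(o.nist_function, 0) + 1
    return {
        "total": len(objectives),
        "gaps": len(gaps),
        "gaps_by_function": per_function,
        "gaps_missing_remediation_date": missing_timeline,
    }

objectives = [
    ControlObjective("GOV-001", "Govern", True),
    ControlObjective("MAP-014", "Map", False, date(2026, 6, 30)),
    ControlObjective("MEA-007", "Measure", False),
]
print(gap_summary(objectives))
```

A summary of this shape gives supervisory staff a per-function view of open gaps and surfaces any gap lacking the documented remediation timeline the framework calls for.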
Who It Affects
Financial institutions subject to Treasury oversight that develop or deploy AI systems, including banks, asset managers, insurance companies, and payment processors.
Effective Date
2026-02-01
