AI Governance Institute

Practical Governance for Enterprise AI

Voluntary · Framework · US · High risk · Limited risk

Treasury Department AI Risk Management Framework for Financial Services

Issued by

US Department of the Treasury

Live · Effective 2026-02-01 · TREAS-AIRMF · Verified April 2026

The US Treasury Department published this framework in February 2026 to translate NIST AI Risk Management Framework principles into 230 discrete operational control objectives tailored to the financial services sector. It applies to financial institutions subject to Treasury oversight, including banks, asset managers, insurance companies, and payment processors that develop or deploy AI systems. Covered entities are expected to implement controls spanning model lifecycle governance, identity resolution, data governance, and alignment with existing SOC 2 and NIST Cybersecurity Framework requirements.

Applies To

Large enterpriseAI developerAI deployer

Overview

The Treasury Department AI Risk Management Framework for Financial Services operationalizes the NIST AI RMF's four core functions (Govern, Map, Measure, Manage) through 230 enumerated control objectives specifically designed for financial services contexts. The framework addresses the full AI model lifecycle, from data sourcing and training through deployment, monitoring, and decommissioning, and establishes expectations for model documentation, validation, and ongoing performance review. A dedicated section on identity resolution addresses AI-driven customer due diligence and fraud detection systems, reflecting heightened concern over model-driven decisions in regulated activities such as credit underwriting and anti-money laundering screening.

The framework integrates directly with SOC 2 Type II audit criteria and the NIST Cybersecurity Framework, enabling institutions to consolidate AI risk controls within existing compliance programs rather than building parallel structures. Treasury has indicated that examination staff will reference these control objectives during supervisory reviews, effectively elevating the framework to a supervisory expectation even absent formal rulemaking. Institutions are expected to conduct gap assessments against the 230 control objectives and document remediation timelines as part of their AI risk governance programs.

Key Requirements

  • Implement governance structures covering all 230 operational control objectives across the AI model lifecycle, including design, development, validation, deployment, monitoring, and retirement
  • Maintain model documentation and validation records sufficient to demonstrate alignment with NIST AI RMF Govern, Map, Measure, and Manage functions
  • Establish data governance controls addressing data provenance, quality, lineage, and access management for all AI training and inference data
  • Integrate AI risk controls with existing SOC 2 and NIST Cybersecurity Framework compliance programs and document linkages between control sets
  • Apply heightened controls to identity resolution and regulated-decision AI systems, including credit underwriting, fraud detection, and AML screening models
  • Conduct and document gap assessments against the 230 control objectives with remediation timelines available for examination by Treasury supervisory staff

What Your Organization Must Do

  • Assign a named AI Risk Owner (Chief Risk Officer or equivalent) by February 1, 2026, to oversee implementation of all 230 control objectives and serve as the primary point of contact for Treasury examination staff.
  • Complete a formal gap assessment against all 230 control objectives within 90 days of the effective date, documenting current-state control coverage, identified gaps, and remediation timelines with milestone dates.
  • Map existing SOC 2 Type II and NIST Cybersecurity Framework controls to corresponding AI RMF control objectives before the first supervisory review cycle, producing a consolidated control linkage register to eliminate duplicative compliance structures.
  • Apply heightened documentation and validation requirements to all identity resolution, credit underwriting, fraud detection, and AML screening models immediately upon the effective date, including independent validation records and performance review schedules.
  • Establish a model inventory with lifecycle tracking from data sourcing through decommissioning, ensuring each entry captures training data provenance, lineage, quality assessments, and access controls sufficient to satisfy examination requests.
  • Schedule recurring model performance reviews at intervals defined in each model's governance plan and ensure remediation timelines from the gap assessment are updated and available to Treasury supervisory staff on demand.

Playbook Guidance

Step-by-step implementation guidance for compliance teams.

Frequently Asked Questions

Which financial institutions are subject to the Treasury AI Risk Management Framework?
The framework applies to banks, asset managers, insurance companies, and payment processors that develop or deploy AI systems and are subject to Treasury oversight. Both large enterprises building proprietary AI and those deploying third-party AI systems are covered.
Is the TREAS-AIRMF legally binding or just a guidance document?
The framework is not a formal rule, but Treasury has stated that examination staff will reference its 230 control objectives during supervisory reviews. This effectively makes compliance a supervisory expectation, meaning gaps can trigger findings during examinations even without formal rulemaking.
What are the 230 control objectives in the TREAS-AIRMF and where do they come from?
The 230 control objectives operationalize the four core functions of the NIST AI RMF (Govern, Map, Measure, Manage) into discrete, audit-ready controls tailored to financial services. They span the full model lifecycle from data sourcing and training through deployment, monitoring, and decommissioning.
How does the TREAS-AIRMF interact with existing SOC 2 and NIST Cybersecurity Framework obligations?
The framework is designed to integrate directly with SOC 2 Type II audit criteria and the NIST Cybersecurity Framework so institutions can consolidate AI risk controls within existing compliance programs. Institutions are expected to produce a control linkage register mapping current controls to the new AI RMF objectives rather than building a parallel structure.
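The control linkage register itself can be as simple as a table keyed by AI RMF objective. The sketch below assumes placeholder objective identifiers ("GV-01", "MS-07"), not Treasury's actual numbering; the SOC 2 and NIST CSF 2.0 identifiers shown are real control references used purely for illustration.

```python
# Hypothetical control linkage register: maps each AI RMF control objective
# to the existing SOC 2 and NIST CSF controls that already cover it.
linkage_register = {
    "GV-01": {                     # illustrative governance objective
        "soc2": ["CC1.2"],         # SOC 2 Trust Services common criteria
        "nist_csf": ["GV.RM-01"],  # NIST CSF 2.0 Govern function
        "gap": False,              # existing controls fully cover it
    },
    "MS-07": {                     # illustrative measurement objective
        "soc2": [],
        "nist_csf": ["ID.RA-01"],
        "gap": True,               # partial coverage: remediation required
    },
}

# Objectives flagged as gaps feed the remediation timeline that the
# gap assessment must document for supervisory staff.
open_gaps = [oid for oid, row in linkage_register.items() if row["gap"]]
print(open_gaps)  # ['MS-07']
```

Keeping the register machine-readable means the same artifact can serve both the SOC 2 auditor and a Treasury examination request without duplicative documentation.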
What heightened requirements apply to credit underwriting and AML screening models under the TREAS-AIRMF?
AI systems used for identity resolution, credit underwriting, fraud detection, and AML screening are subject to stricter documentation, independent validation, and ongoing performance review requirements. These heightened controls must be in place from the February 1, 2026 effective date and must be available to examiners on request.
What should a financial institution do in the first 90 days after the TREAS-AIRMF effective date?
Institutions should complete a formal gap assessment against all 230 control objectives within 90 days of the February 1, 2026 effective date, documenting current control coverage, identified gaps, and remediation timelines. A named AI Risk Owner such as a Chief Risk Officer should also be designated by the effective date to oversee implementation and liaise with Treasury examination staff.
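The 90-day window described above is straightforward to pin down on a calendar. A small sketch, with hypothetical objective IDs and milestone entries:

```python
from datetime import date, timedelta

# The framework's effective date and the 90-day gap-assessment window.
EFFECTIVE_DATE = date(2026, 2, 1)
GAP_ASSESSMENT_DEADLINE = EFFECTIVE_DATE + timedelta(days=90)
print(GAP_ASSESSMENT_DEADLINE)  # 2026-05-02

# Illustrative remediation plan: (objective id, gap, milestone date).
# Objective IDs and gaps are placeholders, not Treasury's numbering.
remediation_plan = [
    ("DG-12", "no lineage records for legacy training sets", date(2026, 7, 1)),
    ("MG-04", "no decommissioning procedure documented", date(2026, 9, 15)),
]

# Milestones falling on or before the assessment deadline would need to be
# closed out before the assessment itself is filed (none in this example).
due_by_assessment = [g for g in remediation_plan if g[2] <= GAP_ASSESSMENT_DEADLINE]
```

The key point is that remediation milestones extend past the 90-day mark; only the assessment and its documented timelines must exist by the deadline.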