aigovernance.com

Global AI Regulation & Framework Directory


OMB Memorandum M-26-04: Increasing Public Trust in AI Through Unbiased AI Principles

M-26-04 · U.S. Office of Management and Budget

OMB Memorandum M-26-04 establishes unbiased AI principles that federal agencies must follow when deploying AI systems that interact with or affect the public. It applies to executive branch agencies procuring, developing, or operating AI tools in federal programs. Agencies are required to implement safeguards against algorithmic bias and to promote transparency and accountability in AI-driven federal decision-making.

Overview

M-26-04 is a directive from the U.S. Office of Management and Budget instructing federal executive branch agencies to adopt a defined set of principles aimed at reducing bias and increasing public trust in government-operated AI systems. The memorandum builds on prior OMB guidance on AI governance and aligns with broader federal AI policy objectives, including responsible AI deployment across high-impact federal programs.

Key provisions address requirements for bias testing, documentation of AI system design choices, and mechanisms for public-facing explainability where AI informs consequential decisions. Agencies are expected to integrate these principles into their AI procurement and operational lifecycle processes.

Compliance is primarily monitored through agency self-reporting and oversight by OMB, with coordination expected from agency Chief AI Officers and Chief Information Officers. The memorandum does not establish independent enforcement penalties but carries the directive authority typical of OMB guidance binding on covered federal entities.

Key Requirements

  • Federal agencies must assess AI systems for potential bias prior to deployment and document findings
  • Agencies must implement bias mitigation measures for AI systems used in consequential public-facing decisions
  • Chief AI Officers or designated officials must attest to compliance with unbiased AI principles on a defined reporting cycle
  • Agencies must maintain transparency documentation describing how AI systems are designed to avoid discriminatory outcomes
  • Procurement of third-party AI tools must include contractual provisions requiring vendors to support bias auditing and testing
  • Agencies must establish or update public-facing disclosures where AI systems materially influence federal benefit, service, or enforcement determinations
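The memorandum requires pre-deployment bias assessment but does not prescribe a specific metric or method. As one illustration of what such an assessment might look like in practice, the sketch below applies the "four-fifths rule" heuristic (a conventional screening threshold from U.S. employment-selection guidance, not a requirement of M-26-04) to hypothetical outcome data; all group labels and outcomes are invented for the example.

```python
# Illustrative sketch only: M-26-04 does not mandate this or any
# particular bias metric. Group names and outcomes are hypothetical.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, approved: bool) pairs.
    Returns each group's approval (selection) rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest rate.
    A ratio below 0.8 is a conventional flag for further review."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical benefit-determination outcomes for two groups
records = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 55 + [("B", False)] * 45

rates = selection_rates(records)        # A: 0.80, B: 0.55
ratios = adverse_impact_ratios(rates)   # A: 1.0, B: 0.6875
flagged = [g for g, r in ratios.items() if r < 0.8]  # ["B"]
```

A check like this would be one input to the documented pre-deployment findings the memo calls for; it does not by itself establish or rule out discriminatory impact.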

Who It Affects

Public sector · AI developer · AI deployer

Effective Date

2026-01-01