OMB Memorandum M-26-04: Increasing Public Trust in AI Through Unbiased AI Principles
Issued by
U.S. Office of Management and Budget
OMB Memorandum M-26-04 establishes unbiased AI principles that federal agencies must follow when deploying AI systems that interact with or affect the public. It applies to executive branch agencies procuring, developing, or operating AI tools in federal programs. Agencies are required to implement safeguards against algorithmic bias and to promote transparency and accountability in AI-driven federal decision-making.
Overview
M-26-04 is a directive from the U.S. Office of Management and Budget instructing federal executive branch agencies to adopt a defined set of principles aimed at reducing bias and increasing public trust in government-operated AI systems. The memorandum builds on prior OMB guidance on AI governance and aligns with broader federal AI policy objectives, including responsible AI deployment across high-impact federal programs. Key provisions address requirements for bias testing, documentation of AI system design choices, and mechanisms for public-facing explainability where AI informs consequential decisions. Agencies are expected to integrate these principles into their AI procurement and operational lifecycle processes.

Compliance is primarily monitored through agency self-reporting and oversight by OMB, with coordination expected from agency Chief AI Officers and Chief Information Officers. The memorandum does not establish independent enforcement penalties but carries the directive authority typical of OMB guidance binding on covered federal entities.
Key Requirements
- Federal agencies must assess AI systems for potential bias prior to deployment and document findings
- Agencies must implement bias mitigation measures for AI systems used in consequential public-facing decisions
- Chief AI Officers or designated officials must attest to compliance with unbiased AI principles on a defined reporting cycle
- Agencies must maintain transparency documentation describing how AI systems are designed to avoid discriminatory outcomes
- Procurement of third-party AI tools must include contractual provisions requiring vendors to support bias auditing and testing
- Agencies must establish or update public-facing disclosures where AI systems materially influence federal benefit, service, or enforcement determinations
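One common way to operationalize the pre-deployment bias assessment above is an adverse impact (four-fifths rule) check on decision outcomes by demographic group. The sketch below is illustrative only: M-26-04 does not prescribe a specific metric, and the function names, threshold, and data are hypothetical examples.

```python
# Illustrative pre-deployment bias check using the conventional
# "four-fifths rule" (adverse impact ratio). All names, thresholds,
# and data are hypothetical, not prescribed by M-26-04.

def selection_rates(outcomes):
    """Per-group favorable-outcome rates.

    outcomes: dict mapping group name -> list of 0/1 decisions
    (1 = favorable, e.g. benefit approved).
    """
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 is the conventional four-fifths flag for
    potential disparate impact, warranting documented review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical decision logs by demographic group
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # rate 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # rate 3/8 = 0.375
}

ratio = adverse_impact_ratio(decisions)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Below 0.8: flag for bias review and document findings.")
```

A check like this would run before deployment, with the computed ratio and any mitigation decisions recorded in the agency's assessment documentation.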
What Your Organization Must Do
- Conduct pre-deployment bias assessments for all AI systems that interact with or affect the public before the January 1, 2026 effective date, and document findings in a format accessible to agency Chief AI Officers and Chief Information Officers.
- Assign the Chief AI Officer or a formally designated official to attest to compliance with unbiased AI principles on the reporting cycle defined by OMB, and establish an internal calendar to ensure attestations are not missed.
- Implement bias mitigation measures for any AI system used in consequential public-facing decisions, prioritizing high-impact programs involving federal benefits, services, or enforcement, and record mitigation actions in the agency AI inventory.
- Update AI procurement contracts and solicitations to include explicit vendor obligations for bias auditing and testing support, ensuring these provisions are present in all new or renewed third-party AI tool agreements from January 1, 2026 onward.
- Create or revise public-facing disclosures to notify affected individuals when AI systems materially influence federal benefit, service, or enforcement determinations, coordinating with agency communications and legal teams to finalize disclosure language before the effective date.
- Maintain transparency documentation describing AI system design choices and bias-avoidance measures, and store this documentation in a centralized repository that can be retrieved for OMB oversight reviews and agency self-reporting submissions.
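The inventory and transparency-documentation actions above imply a structured record per AI system. A minimal sketch of such a record follows; the field names and JSON layout are assumptions for illustration, since M-26-04 does not prescribe a schema.

```python
# A minimal sketch of a transparency-documentation record for an agency
# AI inventory entry. Field names and layout are illustrative assumptions.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AISystemRecord:
    system_name: str
    program: str                      # e.g. a benefits or enforcement program
    consequential: bool               # materially influences determinations?
    bias_assessment_date: str         # ISO date of pre-deployment assessment
    mitigation_actions: list = field(default_factory=list)
    design_notes: str = ""            # bias-avoidance design choices

record = AISystemRecord(
    system_name="Eligibility Screening Model",  # hypothetical system
    program="Example Benefits Program",
    consequential=True,
    bias_assessment_date="2025-11-15",
    mitigation_actions=[
        "reweighted training data",
        "human review of adverse determinations",
    ],
    design_notes="Excludes proxy features correlated with protected classes.",
)

# Serialize for a centralized repository retrievable in OMB oversight
# reviews and self-reporting submissions.
print(json.dumps(asdict(record), indent=2))
```

Keeping these records in one machine-readable store makes the attestation and self-reporting steps a matter of querying the repository rather than reassembling documentation per review.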
Frequently Asked Questions
- Which federal agencies are required to comply with OMB M-26-04?
- M-26-04 applies to all executive branch agencies that procure, develop, or operate AI systems interacting with or affecting the public. This includes agencies running high-impact programs involving federal benefits, services, or enforcement. Independent agencies outside the executive branch are not directly covered by OMB memoranda.
- Does M-26-04 impose penalties on agencies that fail to comply?
- No. M-26-04 does not establish independent enforcement penalties. Compliance is monitored through agency self-reporting and OMB oversight. However, as binding OMB guidance on executive branch entities, non-compliance can carry administrative and reputational consequences within the federal oversight process.
- What role does the Chief AI Officer play under M-26-04?
- The Chief AI Officer, or a formally designated official, must attest to agency compliance with the unbiased AI principles on a reporting cycle defined by OMB. This attestation obligation makes the Chief AI Officer directly accountable for ensuring bias assessments, mitigation measures, and documentation requirements are met.
- What must federal agencies include in AI procurement contracts under M-26-04?
- Agencies must include contractual provisions in third-party AI tool agreements requiring vendors to support bias auditing and testing. These obligations must be present in all new or renewed contracts from the January 1, 2026 effective date onward, making vendor cooperation with bias assessments a procurement requirement.
- How does M-26-04 compare to OMB M-24-10 on AI governance?
- M-26-04 builds on M-24-10, which established broader responsible AI practices across federal agencies. M-26-04 narrows focus specifically to algorithmic bias, transparency, and public trust in AI-driven federal decisions, adding concrete attestation and bias testing requirements that M-24-10 addressed at a higher level.
- When must agencies publish public-facing disclosures about AI use under M-26-04?
- Agencies must create or update public-facing disclosures before the January 1, 2026 effective date for any AI system that materially influences federal benefit, service, or enforcement determinations. The disclosures must notify affected individuals of AI involvement and describe how systems are designed to avoid discriminatory outcomes.
