Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence
Issued by
Executive Office of the President of the United States
Landmark U.S. presidential directive establishing comprehensive federal requirements for the safe development and deployment of AI, including mandatory safety reporting for frontier models, standards development mandates to NIST, and cross-agency coordination obligations.
Overview
Executive Order 14110, signed by President Biden on October 30, 2023, represented the most expansive federal action on artificial intelligence governance in U.S. history at the time of issuance. The order invoked the Defense Production Act to require developers of large-scale AI foundation models, specifically those trained using computing power above defined thresholds, to report safety test results and other critical information to the federal government prior to public deployment. The EO directed the National Institute of Standards and Technology (NIST) to develop guidelines and standards for AI safety evaluations, red-teaming, and watermarking of AI-generated content. It also assigned responsibilities across more than a dozen federal agencies, including the Department of Homeland Security, the Department of Energy, and the Department of Commerce, to assess AI-related risks within their respective domains. The order addressed algorithmic discrimination, privacy risks from AI-enabled data aggregation, AI's implications for critical infrastructure protection, biosecurity, and the federal AI workforce. For enterprises, the EO created immediate obligations for covered model developers and signaled a forthcoming regulatory environment that affected enterprise procurement, vendor due diligence, and internal AI development programs. Although EO 14110 was revoked by President Trump's Executive Order 14179 in January 2025, multiple agency rules and guidance documents initiated under EO 14110 remain in effect or are in advanced stages of rulemaking, and compliance teams should assess which downstream obligations survive the revocation.
Key Requirements
- Developers of dual-use foundation models trained above defined compute thresholds (initially 10^26 FLOPs) must report to the federal government on safety testing results and red-team findings prior to deployment.
- NIST directed to establish the AI Safety Institute and publish guidelines for evaluating, red-teaming, and watermarking AI systems.
- Federal agencies required to designate Chief AI Officers and develop agency-specific AI governance policies aligned with OMB guidance.
- Commerce Department directed to issue guidance on authentication and watermarking of AI-generated synthetic content.
- DHS directed to assess AI risks to critical infrastructure sectors and establish an AI Safety and Security Board.
- HHS, CFPB, FTC, DOJ, and other regulators directed to evaluate existing authorities and issue guidance addressing AI-related harms in their domains.
- Federal contractors and vendors supplying AI tools to the government subject to evolving procurement standards stemming from the EO.
- Privacy and civil liberties protections required as part of federal AI deployment review processes.
What Your Organization Must Do
- Determine immediately whether your organization's foundation models meet or approach the 10^26 FLOPs compute threshold (an illustrative estimation sketch follows this list) and, if so, establish a pre-deployment reporting process to submit safety test results and red-team findings to the federal government before any public release.
- Audit all AI products and services sold or licensed to federal agencies against evolving procurement standards originating from EO 14110, working with your government contracts team to identify contract clauses that remain operative despite the January 2025 revocation of the EO itself.
- Map which downstream agency rules and guidance documents initiated under EO 14110 remain in effect or in active rulemaking, specifically tracking outputs from the NIST AI Safety Institute, OMB, CFPB, FTC, HHS, and DOJ, and assign a responsible compliance owner to monitor each workstream.
- If your organization operates in critical infrastructure sectors, engage your DHS regulatory liaison to assess AI risk requirements flowing from the AI Safety and Security Board mandate and confirm whether sector-specific guidance imposes new obligations on your AI systems.
- For life sciences and biotechnology operations using AI in research with dual-use potential, conduct a biosecurity risk review in coordination with legal and biosafety officers to identify any surviving federal reporting or access-control requirements.
- Ensure vendor due diligence and third-party AI procurement checklists are updated to require disclosure of compute training scale, safety testing documentation, and synthetic content watermarking practices, consistent with Commerce Department guidance, before onboarding new AI suppliers; a sample intake record is sketched after this list.
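
For the compute-threshold screening step above, the following Python sketch shows one way to make the check concrete. It is illustrative only: the 6 × parameters × tokens heuristic for dense-model training compute and the `margin` flag for models approaching the threshold are assumptions introduced here, not anything specified in EO 14110, which simply sets the 10^26-operation reporting trigger.

```python
# Back-of-the-envelope screen against the EO 14110 reporting threshold.
# The ~6 * N * D training-compute heuristic (N = parameters, D = training
# tokens) is an assumed estimate for dense transformer training, not part of the EO.

EO_14110_THRESHOLD_OPS = 1e26  # reporting trigger: 10^26 operations


def estimated_training_ops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 operations per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens


def near_or_above_threshold(n_parameters: float,
                            n_training_tokens: float,
                            margin: float = 0.10) -> bool:
    """Flag models at, above, or within `margin` (10% by default) of the threshold."""
    ops = estimated_training_ops(n_parameters, n_training_tokens)
    return ops >= EO_14110_THRESHOLD_OPS * (1.0 - margin)


if __name__ == "__main__":
    # Hypothetical model: 1T parameters trained on 20T tokens (~1.2e26 ops).
    params, tokens = 1e12, 2e13
    print(f"Estimated training compute: {estimated_training_ops(params, tokens):.2e} ops")
    print("Flag for pre-deployment reporting review:",
          near_or_above_threshold(params, tokens))
```

Before relying on an estimate like this, confirm with counsel how training compute should be measured for your architecture and whether any revised threshold applies.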
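
For the vendor due-diligence item above, a minimal sketch of an intake record follows. The field names and the gap-check logic are hypothetical illustrations of the disclosure items named in that bullet (training-compute scale, safety testing documentation, red-team findings, watermarking practices); they are not drawn from any agency template.

```python
# Hypothetical vendor AI intake record; field names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class VendorAIDueDiligenceRecord:
    vendor_name: str
    model_name: str
    disclosed_training_ops: float | None = None                   # reported training compute
    safety_test_reports: list[str] = field(default_factory=list)  # document references
    red_team_summary_provided: bool = False
    watermarks_synthetic_content: bool = False

    def open_items(self) -> list[str]:
        """List the disclosure gaps a procurement reviewer should follow up on."""
        gaps = []
        if self.disclosed_training_ops is None:
            gaps.append("training compute scale not disclosed")
        if not self.safety_test_reports:
            gaps.append("no safety testing documentation on file")
        if not self.red_team_summary_provided:
            gaps.append("red-team findings summary missing")
        if not self.watermarks_synthetic_content:
            gaps.append("synthetic-content watermarking practices unconfirmed")
        return gaps


if __name__ == "__main__":
    record = VendorAIDueDiligenceRecord(vendor_name="ExampleVendor", model_name="example-model")
    print(record.open_items())
```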
Playbook Guidance
Step-by-step implementation guidance for compliance teams.
Frequently Asked Questions
- Was EO 14110 revoked and do its requirements still apply?
- President Trump revoked EO 14110 via Executive Order 14179 in January 2025. However, multiple agency rules, guidance documents, and rulemaking processes initiated under EO 14110 remain in effect or are ongoing. Compliance teams should audit which downstream obligations survive the revocation rather than treating all requirements as nullified.
- What is the compute threshold that triggers the mandatory safety reporting obligation under EO 14110?
- The order set an initial threshold of 10^26 floating point operations (FLOPs) of training compute. Developers of dual-use foundation models trained at or above that level were required to report safety test results and red-team findings to the federal government before public deployment, invoking the Defense Production Act as legal authority.
- Does EO 14110 apply to private companies or only federal agencies?
- Both. Private developers of large-scale foundation models meeting the compute threshold face pre-deployment reporting obligations. Federal agencies, contractors, and technology vendors supplying AI to the government are also subject to requirements, including Chief AI Officer designations and evolving federal procurement standards.
- How does EO 14110 interact with NIST AI RMF compliance obligations?
- EO 14110 directed NIST to develop AI safety evaluation guidelines and establish the AI Safety Institute, building on the existing NIST AI Risk Management Framework. Organizations already aligned to the AI RMF have a head start, but EO 14110 added specific mandates around red-teaming, watermarking, and safety testing that go beyond the voluntary RMF.
- What obligations does EO 14110 create for life sciences and biotech firms?
- The order specifically addressed biosecurity risks from AI used in dual-use research, directing agencies to assess and mitigate threats. Life sciences and biotechnology firms using AI in research with dual-use potential should conduct a biosecurity risk review and confirm whether surviving federal reporting or access-control requirements apply to their programs.
- Do federal contractor AI obligations from EO 14110 survive the January 2025 revocation?
- Potentially yes. Contract clauses and procurement standards incorporated into active federal contracts before the revocation may remain operative under those agreements. Vendors supplying AI tools to federal agencies should work with their government contracts counsel to identify which specific contractual obligations remain in force.
