aigovernance.com

Global AI Regulation & Framework Directory


Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence

EO 14110 · Executive Office of the President of the United States

Landmark U.S. presidential directive establishing comprehensive federal requirements for the safe development and deployment of AI, including mandatory safety reporting for frontier models, standards development mandates to NIST, and cross-agency coordination obligations.

Overview

Executive Order 14110, signed by President Biden on October 30, 2023, was the most expansive federal action on artificial intelligence governance in U.S. history at the time of issuance. The order invoked the Defense Production Act to require developers of large-scale AI foundation models (specifically, those trained using computing power above defined thresholds) to report safety test results and other critical information to the federal government prior to public deployment. It directed the National Institute of Standards and Technology (NIST) to develop guidelines and standards for AI safety evaluations, red-teaming, and watermarking of AI-generated content.

The EO also assigned responsibilities across more than a dozen federal agencies, including the Department of Homeland Security, the Department of Energy, and the Department of Commerce, to assess AI-related risks within their respective domains. It addressed algorithmic discrimination, privacy risks from AI-enabled data aggregation, AI's implications for critical infrastructure protection, biosecurity, and the federal AI workforce.

For enterprises, the EO created immediate obligations for covered model developers and signaled a forthcoming regulatory environment affecting procurement, vendor due diligence, and internal AI development programs. Although EO 14110 was revoked by President Trump's Executive Order 14179 in January 2025, multiple agency rules and guidance documents initiated under EO 14110 remain in effect or are in advanced stages of rulemaking, and compliance teams should assess which downstream obligations survive the revocation.

Key Requirements

  • Developers of dual-use foundation models trained above defined compute thresholds (initially set at 10^26 integer or floating-point operations) must report to the federal government on safety testing results and red-team findings prior to deployment.
  • NIST directed to establish an AI Safety Institute and publish guidelines for evaluating, red-teaming, and watermarking AI systems.
  • Federal agencies required to designate Chief AI Officers and develop agency-specific AI governance policies aligned with OMB guidance.
  • Commerce Department directed to issue guidance on authentication and watermarking of AI-generated synthetic content.
  • DHS directed to assess AI risks to critical infrastructure sectors and establish an AI Safety and Security Board.
  • HHS, CFPB, FTC, DOJ, and other regulators directed to evaluate existing authorities and issue guidance addressing AI-related harms in their domains.
  • Federal contractors and vendors supplying AI tools to the government subject to evolving procurement standards stemming from the EO.
  • Privacy and civil liberties protections required as part of federal AI deployment review processes.
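The reporting trigger above is a quantitative criterion: total training compute relative to a 10^26-operation threshold. As a rough illustration only, the sketch below estimates training compute with the common 6 × N × D heuristic (compute ≈ 6 × parameters × training tokens for dense transformer training); the heuristic, the function names, and the example figures are illustrative assumptions, not part of the EO.

```python
# Illustrative sketch: does an estimated training run cross the EO 14110
# reporting threshold? The 6*N*D approximation is a widely used heuristic
# for dense transformer training compute, not language from the EO itself.

REPORTING_THRESHOLD_OPS = 1e26  # compute threshold referenced in EO 14110

def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Approximate total training operations via the 6*N*D heuristic."""
    return 6 * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the reporting threshold."""
    return estimated_training_ops(n_params, n_tokens) >= REPORTING_THRESHOLD_OPS

# Hypothetical example: a 70B-parameter model trained on 2T tokens
ops = estimated_training_ops(70e9, 2e12)  # ~8.4e23 operations
print(crosses_threshold(70e9, 2e12))      # well below 1e26
```

A run of roughly 1 trillion parameters on 20 trillion tokens would, under the same heuristic, land at about 1.2 × 10^26 operations and cross the line, which is why the threshold was widely read as targeting only the largest frontier training runs.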

Who It Affects

  • Developers and operators of large-scale foundation models sold or licensed in the United States
  • Federal agencies and departments deploying AI systems
  • Federal contractors and technology vendors supplying AI-enabled products and services to the U.S. government
  • Critical infrastructure operators subject to DHS oversight
  • Life sciences and biotechnology firms using AI in research with dual-use potential
  • Enterprises in regulated sectors receiving follow-on agency guidance stemming from the EO

Effective Date

2023-10-30
