AI Governance Institute

AI governance intelligence, tracked daily


AI Regulation in the United States

The United States takes a sector-by-sector, agency-by-agency approach to AI regulation rather than a unified federal law. The Biden administration's Executive Order 14110 (2023) directed agencies to develop AI standards and prompted NIST's AI Risk Management Framework, but was revoked in January 2025. Today, federal AI oversight is fragmented: the FTC uses consumer protection authority to challenge AI-related deception and bias, the SEC requires disclosure of material AI risks, the FDA regulates AI/ML-based software as a medical device, and OSHA is examining AI in workplace safety.

State-level regulation is accelerating and producing the most concrete compliance obligations. Colorado's AI Act (SB205, effective February 2026) is the first comprehensive state AI law, covering high-risk AI systems that make consequential decisions about consumers in employment, credit, housing, education, and healthcare. California's AI Transparency Act requires disclosure of AI-generated content. Illinois' BIPA has been applied to AI systems processing biometric data. New York City's Local Law 144 mandates bias audits for AI tools used in employment decisions.

The NIST AI Risk Management Framework — structured around four functions: Govern, Map, Measure, and Manage — is the most widely adopted voluntary standard for US organizations and is referenced in federal procurement requirements and sector-specific guidance. For organizations without the structure of an EU compliance obligation, NIST AI RMF alignment provides a practical baseline for demonstrating governance maturity.
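As an illustration of how the four RMF functions might anchor an internal controls register, here is a minimal sketch. The function names (Govern, Map, Measure, Manage) come from the framework itself; the example control descriptions are hypothetical, not drawn from the framework text.

```python
# Illustrative only: one way an organization might track its own
# controls against the four NIST AI RMF functions.
from collections import defaultdict

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

def build_register(controls):
    """Group (function, control) pairs by RMF function, rejecting unknown functions."""
    register = defaultdict(list)
    for function, control in controls:
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {function}")
        register[function].append(control)
    return dict(register)

register = build_register([
    ("Govern", "AI use policy approved by risk committee"),
    ("Map", "Inventory of deployed AI systems and their contexts"),
    ("Measure", "Quarterly bias and performance evaluation"),
    ("Manage", "Incident response plan for model failures"),
])
```

A structure like this makes it straightforward to show an auditor or regulator which RMF function each internal control serves, which is the kind of governance-maturity evidence the paragraph above describes.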

Key themes

  1. No single federal AI law — agency-by-agency and state-by-state compliance
  2. NIST AI RMF as the de facto voluntary standard
  3. Colorado AI Act (SB205) and active state legislation
  4. SEC, FTC, and FDA enforcement using existing authority

Regulatory frameworks and guidance (25)

Regulation

California AI Transparency Act

California's AI Transparency Act (SB 942) requires developers of generative AI systems that reach defined usage thresholds to provide AI detection tools and disclosure mechanisms so users and consumers can identify AI-generated content. It establishes baseline transparency obligations for covered AI providers operating in or targeting California.

Regulation

California Health Care Services AI Act Disclosure Requirements

California law requires healthcare providers that use generative AI to communicate with patients to disclose that the communication was AI-generated and to provide clear instructions for reaching a human representative. The requirement applies to covered health care service providers operating in California. It is intended to ensure patients are not misled about the nature of the entity they are interacting with when seeking health information or services.

Regulation

California Transparency in Frontier AI Act

The California Transparency in Frontier AI Act is a state-level regulation that took effect on January 1, 2026, imposing transparency obligations on developers and deployers of frontier generative AI systems operating in California. It requires covered entities to disclose certain information about how their AI systems function, what data they use, and how outputs are generated. The Act reflects California's approach to filling regulatory gaps ahead of any federal AI transparency framework.

Regulation

Colorado AI Act SB205

Colorado's SB 205 is the first US state statute imposing affirmative obligations on developers and deployers of high-risk AI systems, requiring algorithmic impact assessments, transparency notices, and consumer rights for consequential decisions.

Pending

Commerce Department Evaluation of State AI Laws

The US Department of Commerce is required, within 90 days of the December 11, 2025 Executive Order on National AI policy, to publish an evaluation identifying state AI laws that conflict with federal policy objectives. The evaluation focuses on state laws that compel AI systems to alter truthful outputs or mandate disclosures that may implicate First Amendment protections. Laws identified in the evaluation may be referred to the AI Litigation Task Force for potential federal preemption action.

Framework

Executive Order 14318: Accelerating Federal Permitting of Data Center Infrastructure

Executive Order 14318, signed by President Trump, directs federal agencies to streamline and accelerate permitting processes for data center infrastructure in support of domestic AI development. It applies to federal agencies with authority over land use, environmental review, and energy approvals relevant to data center siting and construction. The order aims to reduce regulatory delays that the administration identified as barriers to AI infrastructure growth in the United States.

Framework

Executive Order 14319: Preventing Woke AI in the Federal Government

Executive Order 14319, signed by President Trump, directs federal agencies to avoid AI systems that incorporate ideological bias or viewpoint-discriminatory outputs in government operations. It applies to federal agencies procuring, deploying, or contracting AI tools for government use. The order aligns with the administration's broader deregulatory posture toward AI and rescinds or modifies prior guidance perceived as imposing politically or ideologically motivated constraints on AI development.

Framework

Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence

Landmark 2023 U.S. presidential directive that established comprehensive federal requirements for the safe development and deployment of AI, including mandatory safety reporting for frontier models, standards development mandates to NIST, and cross-agency coordination obligations. It was revoked in January 2025.

Pending

Federal Communications Commission AI Model Reporting and Disclosure Proceeding

The Federal Communications Commission is initiating a formal proceeding to evaluate whether to adopt a federal reporting and disclosure standard for AI models. The proceeding is triggered by and timed to follow the Commerce Department's evaluation of existing state-level AI laws. If adopted, a resulting federal standard could preempt conflicting state AI disclosure and reporting requirements.

Guideline

FDA AI/ML Software as Medical Device Guidance

FDA's action plan and associated guidance documents establish a regulatory framework for AI/ML-based Software as a Medical Device (SaMD), introducing a total product lifecycle (TPLC) approach, predetermined change control plans, and transparency and monitoring requirements for adaptive AI/ML algorithms used in clinical settings.

Guideline

FTC AI Enforcement Policy

The FTC's AI enforcement posture, articulated through policy statements, guidance documents, and enforcement actions, applies existing consumer protection and competition statutes to AI-related harms including deceptive AI claims, discriminatory automated decisions, and unfair data practices underpinning AI systems.

Pending

H.R.8094 - AI Foundation Model Transparency Act of 2026

Introduced on March 26, 2026, by a bipartisan group of U.S. lawmakers, H.R.8094 would require developers of large AI foundation models to publicly disclose information about training data, model design, known limitations, risks, and evaluation methods. The bill targets developers of large-scale AI models and imposes transparency obligations without directly regulating how those models may be used or deployed. Its stated objective is to enable public scrutiny of foundation model characteristics without placing operational restrictions on AI development.

Regulation

Illinois Biometric Information Privacy Act – AI Provisions

The Illinois Biometric Information Privacy Act (BIPA), 740 ILCS 14, restricts the collection, storage, use, and disclosure of biometric identifiers and information, with direct implications for AI systems that process facial geometry, voiceprints, iris scans, and similar biometric data. BIPA is among the most litigated biometric privacy statutes in the United States.

Framework

NIST AI 600-1 Generative AI Profile

A companion resource to the NIST AI RMF 1.0 that provides structured guidance for managing the unique risks presented by generative AI systems, including large language models and multimodal foundation models.

Framework

NIST Artificial Intelligence Risk Management Framework Playbook

Voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0) that provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.

Regulation

New York City Local Law 144 of 2021 – Automated Employment Decision Tools

Requires employers and employment agencies using automated employment decision tools (AEDTs) in New York City hiring or promotion decisions to conduct annual bias audits, publish audit results, and notify candidates prior to use.
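The bias audits Local Law 144 requires center on a simple computation: each demographic category's selection rate divided by the rate of the most-selected category (the impact ratio). The sketch below shows that calculation; the function name and all counts are invented for illustration and are not taken from the rule text.

```python
# Sketch of the impact-ratio calculation at the core of an AEDT bias
# audit: selection rate per category, normalized by the highest rate.
def impact_ratios(selected, total):
    """selected/total: dicts mapping category -> counts of selected/assessed candidates."""
    rates = {cat: selected[cat] / total[cat] for cat in total}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: group_a selected 40 of 100 (rate 0.40),
# group_b selected 24 of 80 (rate 0.30).
ratios = impact_ratios(
    selected={"group_a": 40, "group_b": 24},
    total={"group_a": 100, "group_b": 80},
)
# group_a -> 1.0 (highest rate); group_b -> 0.30 / 0.40 = 0.75
```

An annual audit report under the law publishes ratios of this kind per category, which is why deployers need selection and assessment counts broken out by the demographic categories the rule covers.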

Guideline

OMB Memorandum M-26-04: Increasing Public Trust in AI Through Unbiased AI Principles

OMB Memorandum M-26-04 establishes unbiased AI principles that federal agencies must follow when deploying AI systems that interact with or affect the public. It applies to executive branch agencies procuring, developing, or operating AI tools in federal programs. Agencies are required to implement safeguards against algorithmic bias and to promote transparency and accountability in AI-driven federal decision-making.

Guideline

SEC AI Governance Guidance

The SEC has issued rules, guidance, and proposed rulemaking addressing AI governance obligations for registered investment advisers, broker-dealers, and public companies, focusing on conflicts of interest in predictive data analytics, AI-related disclosures in securities filings, and examination priorities targeting algorithmic systems.

Regulation

Texas Responsible AI Governance Act (House Bill 149)

The Texas Responsible AI Governance Act (HB 149) is a state-level law enacted in June 2026 that establishes governance requirements for the deployment of artificial intelligence systems within Texas. It applies to entities deploying AI systems that affect Texas residents or are operated within the state. The Act imposes obligations around transparency, accountability, and risk management for covered AI deployments.

Framework

Executive Order: Eliminating State Law Obstruction of National Artificial Intelligence Policy

This Executive Order directs federal agencies to identify and challenge state AI laws that conflict with national AI policy, aiming to establish a unified federal regulatory floor. It establishes an AI Litigation Task Force and requires the Secretary of Commerce to evaluate onerous state AI regulations within 90 days. The FCC and FTC are directed to develop federal reporting standards and policies designed to preempt conflicting state-level requirements.

Framework

Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence

Executive Order 14179 directs federal agencies to develop a new national AI Action Plan prioritizing US dominance in AI development and deployment. It applies to federal agencies and shapes the regulatory environment for private-sector AI developers and deployers operating in the United States. The order explicitly revokes prior AI-related executive orders focused on safety guardrails, reorienting federal AI policy toward deregulation and competitiveness.

Framework

Ensuring a National Policy Framework for Artificial Intelligence

This Executive Order, issued on December 11, 2025, directs federal agencies to establish a unified national AI policy and authorizes preemption of state AI laws that conflict with that policy. It applies to federal agencies, the Federal Trade Commission, and by extension any entity subject to state AI regulations that may be challenged under the order. Key requirements include a Commerce Department evaluation of state AI laws within 90 days, an FTC policy statement on unfair and deceptive AI practices within 90 days, and the creation of an AI Litigation Task Force.

Enforcement

U.S. Federal Court Ruling on Attorney-Client Privilege and AI Chatbot Communications (Rakoff, S.D.N.Y. 2026)

A Manhattan federal judge ruled that communications between a user and an AI chatbot platform, such as Anthropic's Claude, do not qualify for attorney-client privilege protection. The ruling compelled former GWG Holdings CEO Bradley Heppner to produce 31 AI-generated legal documents in a securities fraud proceeding. The decision signals that AI-assisted legal research and drafting conducted via third-party platforms may be subject to compelled disclosure in U.S. federal litigation.

Framework

Treasury Department AI Risk Management Framework for Financial Services

The US Treasury Department published this framework in February 2026 to translate NIST AI Risk Management Framework principles into 230 discrete operational control objectives tailored to the financial services sector. It applies to financial institutions subject to Treasury oversight, including banks, asset managers, insurance companies, and payment processors that develop or deploy AI systems. Covered entities are expected to implement controls spanning model lifecycle governance, identity resolution, data governance, and alignment with existing SOC 2 and NIST Cybersecurity Framework requirements.

Regulation

Utah Artificial Intelligence Policy Act (SB 149)

Utah SB 149 requires businesses and individuals in regulated occupations to disclose when consumers are interacting with an AI system rather than a human. It establishes liability under existing consumer protection laws for entities that fail to make required disclosures or that use AI in a deceptive manner. The law also creates an Office of Artificial Intelligence Policy within the Utah Department of Commerce to oversee AI policy development and coordinate regulatory guidance.