aigovernance.com

Global AI Regulation & Framework Directory

64 entries
Framework · US

America's AI Action Plan

America's AI Action Plan is a White House-directed national strategy to advance United States leadership in artificial intelligence across three pillars: accelerating domestic innovation, building AI infrastructure, and leading in international AI diplomacy and security. It applies broadly to federal agencies and shapes the regulatory environment for private-sector AI developers and deployers operating in the United States. The plan formalizes the Chief Artificial Intelligence Officer Council and rolls back select Biden-era AI oversight requirements to reduce barriers to AI adoption.

White House Office of Science and Technology Policy

Framework · APAC/Australia

Australia AI Ethics Framework

A voluntary national framework establishing eight core ethical principles to guide the responsible design, development, and deployment of AI systems by Australian organisations.

Australian Government Department of Industry, Science and Resources (DISR)

Guideline · Global

Bletchley Declaration on AI Safety

A landmark multilateral political declaration signed at the inaugural AI Safety Summit, in which 28 governments agreed on the existence of frontier AI risks and committed to collaborative international action on AI safety, evaluation, and information sharing.

Twenty-eight signatory governments including the United States, United Kingdom, European Union, China, Australia, Canada, France, Germany, India, Japan, Republic of Korea, and Saudi Arabia; convened at the UK AI Safety Summit, Bletchley Park

Regulation · US

California AI Transparency Act

California's AI Transparency Act (SB 942) requires developers of generative AI systems that reach defined usage thresholds to provide AI detection tools and disclosure mechanisms so users and consumers can identify AI-generated content, establishing baseline transparency obligations for covered AI providers operating in or targeting California.

California State Legislature; administered by the California Attorney General

Regulation · US

California Health Care Services AI Act Disclosure Requirements

California law requires healthcare providers that use generative AI to communicate with patients to disclose that the communication was AI-generated and to provide clear instructions for reaching a human representative. The requirement applies to covered health care service providers operating in California. It is intended to ensure patients are not misled about the nature of the entity they are interacting with when seeking health information or services.

California State Legislature

Regulation · APAC/China

China Algorithm Recommendation Regulations

China's first dedicated regulation governing algorithmic recommendation systems, imposing transparency, user-control, and content-moderation obligations on providers of internet-based recommendation services operating in or targeting users in China.

Cyberspace Administration of China (CAC), Ministry of Industry and Information Technology (MIIT), Ministry of Public Security (MPS), State Administration for Market Regulation (SAMR)

Regulation · APAC/China

China Deep Synthesis Regulations

China's Provisions on the Administration of Deep Synthesis Internet Information Services regulate the development, distribution, and use of deep synthesis technologies (including AI-generated or AI-manipulated text, images, audio, video, and virtual humans) by service providers and users operating in or targeting China. They impose labelling, registration, security-assessment, and content-moderation obligations.

Cyberspace Administration of China (CAC), Ministry of Industry and Information Technology (MIIT), Ministry of Public Security (MPS)

Regulation · APAC/China

Interim Measures for the Management of Generative Artificial Intelligence Services

China's Interim Measures for the Management of Generative AI Services impose mandatory obligations on providers of generative AI products and services to the Chinese public, covering training data governance, content safety, algorithmic transparency, and registration requirements.

Cyberspace Administration of China (CAC), in conjunction with the National Development and Reform Commission (NDRC), Ministry of Education, Ministry of Science and Technology (MOST), Ministry of Industry and Information Technology (MIIT), Ministry of Public Security, and National Radio and Television Administration (NRTA)

Regulation · US

Colorado AI Act SB205

Colorado's SB 205 is the first US state statute imposing affirmative obligations on developers and deployers of high-risk AI systems, requiring algorithmic impact assessments, transparency notices, and consumer rights for consequential decisions.

Colorado General Assembly; signed by Governor Jared Polis

Framework · US

Executive Order 14318: Accelerating Federal Permitting of Data Center Infrastructure

Executive Order 14318, signed by President Trump, directs federal agencies to streamline and accelerate permitting processes for data center infrastructure in support of domestic AI development. It applies to federal agencies with authority over land use, environmental review, and energy approvals relevant to data center siting and construction. The order aims to reduce regulatory delays that the administration identified as barriers to AI infrastructure growth in the United States.

Executive Office of the President of the United States

Framework · US

Executive Order 14319: Preventing Woke AI in the Federal Government

Executive Order 14319, signed by President Trump, directs federal agencies to avoid AI systems that incorporate ideological bias or viewpoint-discriminatory outputs in government operations. It applies to federal agencies procuring, deploying, or contracting AI tools for government use. The order aligns with the administration's broader deregulatory posture toward AI and rescinds or modifies prior guidance perceived as imposing politically or ideologically motivated constraints on AI development.

White House, Executive Office of the President

Regulation · EU

EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2025)

The EU AI Act's first major compliance deadline took effect on 2 February 2025, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. As of that date, the Act's prohibitions on AI systems deemed to pose unacceptable risks also became applicable. Organizations must have ceased any prohibited AI practices and must be able to demonstrate adequate staff competency with AI systems.

European Commission

Pending · EU

EU Parliament Trilogue Negotiations on AI Act Compliance Deadline Extensions

This pending legislative process involves trilogue negotiations between the European Parliament, Council, and Commission aimed at extending key EU AI Act compliance deadlines. Reported targets would push the deadline for high-risk AI systems to December 2027 and the deadline for product-embedded AI to August 2028. The process responds to implementation difficulties encountered by regulated entities following the AI Act's initial phased obligations that took effect in February and August 2025.

European Parliament

Regulation · EU

EU AI Liability Directive

A proposed EU directive that would have adapted civil liability rules to AI systems, making it easier for individuals harmed by AI to obtain compensation through eased evidentiary burdens, including disclosure obligations and rebuttable presumptions of causation. The proposal was formally withdrawn in early 2025 after political agreement could not be reached.

European Parliament and Council of the European Union

Framework · EU

EU AI Office Framework

The institutional and regulatory framework establishing the European AI Office as the central EU body responsible for supervising general-purpose AI models, enforcing the EU AI Act at the supranational level, and coordinating AI governance across EU member states.

European AI Office, European Commission

Framework · EU

EU Code of Practice on Marking and Labelling of AI-Generated Content

The European Commission published a first draft of a voluntary code of practice establishing common standards for marking and labelling AI-generated content across the EU. It targets organisations that produce, distribute, or deploy AI systems capable of generating text, images, audio, or video. The code aims to improve transparency and consumer awareness by requiring identifiable disclosure when content is AI-generated.

European Commission

Regulation · EU

EU Cyber Resilience Act

The EU Cyber Resilience Act establishes mandatory cybersecurity requirements for products with digital elements placed on the EU market, including hardware and software incorporating AI components, covering the entire product lifecycle from design through end-of-life.

European Parliament and Council of the European Union

Regulation · EU

EU Data Act

The EU Data Act establishes harmonised rules on access to and use of data generated by connected products and related services across the EU, addressing both personal and non-personal data. It creates new obligations for data holders to share data with users and third parties, and sets conditions for public sector bodies to access privately held data in exceptional circumstances.

European Parliament and Council of the European Union

Regulation · EU

EU Data Governance Act

The EU Data Governance Act establishes a regulatory framework for data intermediaries, data altruism organisations, and the re-use of public sector data protected by third-party rights. It creates new governance structures to facilitate trusted data sharing across sectors and member states, supporting the broader European Data Strategy.

European Parliament and Council of the European Union

Pending · EU

European Commission Digital Omnibus on AI Regulation Proposal

The European Commission's Digital Omnibus proposal, published in November 2025, seeks to amend the EU AI Act's implementation timeline in response to administrative delays and the absence of harmonized technical standards. It targets all organizations subject to the AI Act, with particular relief provisions for small and medium enterprises. If adopted, it would postpone key high-risk AI obligations, streamline documentation requirements for SMEs, and strengthen the AI Office's supervisory role over general-purpose AI models.

European Commission

Regulation · EU

EU Digital Operational Resilience Act

The EU Digital Operational Resilience Act (DORA), Regulation (EU) 2022/2554, establishes a comprehensive ICT risk management, incident reporting, operational resilience testing, and third-party risk oversight framework for EU financial entities, with direct implications for AI systems deployed in financial services and the technology providers that supply them.

European Parliament and Council of the European Union; supervised jointly by the European Banking Authority (EBA), European Insurance and Occupational Pensions Authority (EIOPA), and European Securities and Markets Authority (ESMA), collectively the European Supervisory Authorities (ESAs)

Regulation · EU

EU Digital Services Act – AI and Algorithmic Accountability Provisions

The Digital Services Act imposes transparency, accountability, and risk-management obligations on online intermediaries with respect to algorithmic recommender systems, targeted advertising, and systemic risks. Obligations scale with platform size, with the most stringent requirements applying to VLOPs and VLOSEs.

European Parliament and Council of the European Union; enforced by Digital Services Coordinators (DSCs) in each Member State and by the European Commission for Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs)

Guideline · EU

EU General-Purpose AI Model Training Data Public Summary Template

The European Commission published a standardized template for providers of general-purpose AI models to use when publicly disclosing summaries of their training data. It supports compliance with the transparency obligations for general-purpose AI models established under the EU AI Act. Providers are expected to follow the template structure when meeting their disclosure requirements under that regulation.

European Commission

Framework · US

Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence

Landmark U.S. presidential directive, issued in October 2023 and revoked in January 2025, that established comprehensive federal requirements for the safe development and deployment of AI, including mandatory safety reporting for frontier models, standards development mandates to NIST, and cross-agency coordination obligations.

Executive Office of the President of the United States

Guideline · Global

FATF AI Anti-Money Laundering Guidance

FATF guidance on the use of AI and machine learning in anti-money laundering, counter-terrorism financing, and proliferation financing compliance, setting expectations for responsible AI adoption in transaction monitoring, customer due diligence, and suspicious activity detection.

Financial Action Task Force (FATF)

Pending · US

Federal Communications Commission AI Model Reporting and Disclosure Proceeding

The Federal Communications Commission is initiating a formal proceeding to evaluate whether to adopt a federal reporting and disclosure standard for AI models. The proceeding is triggered by and timed to follow the Commerce Department's evaluation of existing state-level AI laws. If adopted, a resulting federal standard could preempt conflicting state AI disclosure and reporting requirements.

Federal Communications Commission

Guideline · US

FDA AI/ML Software as Medical Device Guidance

FDA's action plan and associated guidance documents establish a regulatory framework for AI/ML-based Software as a Medical Device (SaMD), introducing a total product lifecycle (TPLC) approach, predetermined change control plans, and transparency and monitoring requirements for adaptive AI/ML algorithms used in clinical settings.

U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health (CDRH)

Guideline · Global

Financial Stability Board AI in Finance

The FSB's thematic review and guidance on the use of artificial intelligence and machine learning in financial services, addressing financial stability risks, governance expectations, and supervisory considerations for financial institutions deploying AI/ML.

Financial Stability Board (FSB)

Guideline · US

FTC AI Enforcement Policy

The FTC's AI enforcement posture, articulated through policy statements, guidance documents, and enforcement actions, applies existing consumer protection and competition statutes to AI-related harms including deceptive AI claims, discriminatory automated decisions, and unfair data practices underpinning AI systems.

Federal Trade Commission (FTC), United States

Guideline · Global

G7 Hiroshima AI Code of Conduct

A voluntary international code of conduct issued under the G7 Hiroshima AI Process, setting out eleven guiding principles and corresponding actions for developers and operators of advanced AI systems, with a particular focus on frontier and general-purpose AI models. The code operates as a soft-law commitment mechanism intended to shape responsible AI development globally.

G7 Hiroshima AI Process (endorsed by G7 Leaders; administered through the G7 Digital and Technology Track)

Pending · US

H.R.8094 - AI Foundation Model Transparency Act of 2026

Introduced on March 26, 2026, by a bipartisan group of U.S. lawmakers, H.R.8094 would require developers of large AI foundation models to publicly disclose information about training data, model design, known limitations, risks, and evaluation methods. The bill targets developers of large-scale AI models and imposes transparency obligations without directly regulating how those models may be used or deployed. Its stated objective is to enable public scrutiny of foundation model characteristics without placing operational restrictions on AI development.

U.S. Congress

Regulation · US

Illinois Biometric Information Privacy Act – AI Provisions

The Illinois Biometric Information Privacy Act (BIPA), 740 ILCS 14, restricts the collection, storage, use, and disclosure of biometric identifiers and information, with direct implications for AI systems that process facial geometry, voiceprints, iris scans, and similar biometric data. BIPA is among the most litigated biometric privacy statutes in the United States.

Illinois General Assembly; administered and enforced through private right of action and the Illinois Attorney General

Framework · APAC/Singapore

IMDA Model AI Governance Framework

A voluntary, principles-based framework issued by Singapore's IMDA and PDPC providing organisations with detailed guidance on deploying AI responsibly, covering human oversight, decision-making accountability, and operational transparency.

Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC), Singapore

Guideline · APAC/India

India AI Governance Framework

MeitY's advisory framework establishes responsible AI principles and interim compliance expectations for platforms deploying AI systems in India, with particular focus on preventing harm, ensuring traceability, and requiring government approval before deploying undertested AI models.

Ministry of Electronics and Information Technology (MeitY), Government of India

Framework · ISO/OECD/UN

ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management System

The first international, certifiable AI management system standard, providing a structured framework for establishing, implementing, maintaining, and continually improving an organization's AI management system (AIMS), with certifiable requirements applicable to any organization that develops, provides, or uses AI-based products and services.

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee ISO/IEC JTC 1, Subcommittee SC 42

Framework · ISO/OECD/UN

ISO/IEC 23894 AI Risk Management

An international standard providing guidance on how organisations can manage AI-specific risks by integrating AI risk management into enterprise-wide risk management frameworks. ISO/IEC 23894 adapts the ISO 31000 risk management vocabulary and process model to the distinctive characteristics of AI systems, including emergent behaviour, data dependency, opacity, and sociotechnical complexity.

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee 1, Subcommittee 42 (JTC 1/SC 42)

Framework · ISO/OECD/UN

ISO/IEC 24028 AI Trustworthiness

ISO/IEC 24028:2020 provides a comprehensive overview of trustworthiness concepts, characteristics, and threats in AI systems, offering a structured technical and organizational framework for assessing and improving the trustworthiness of AI across the system lifecycle.

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee JTC 1, Subcommittee SC 42 (Artificial Intelligence)

Framework · ISO/OECD/UN

ISO/IEC 24029 Robustness of Neural Networks

ISO/IEC 24029 is a multi-part international standard providing formal methods and practical assessment approaches for evaluating the robustness of artificial neural networks (ANNs), addressing susceptibility to adversarial inputs, distributional shift, and other failure modes critical to trustworthy AI deployment.

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee JTC 1, Subcommittee SC 42 (Artificial Intelligence)

Guideline · APAC/Japan

Japan AI Guidelines for Business

METI guidelines providing Japanese businesses with practical AI governance standards aligned with the Hiroshima AI Process and international frameworks, addressing risk management, transparency, accountability, and intellectual property considerations across the AI lifecycle.

Ministry of Economy, Trade and Industry (METI), Japan

Regulation · APAC/Korea

Korea AI Basic Act

South Korea's foundational AI governance statute establishing risk-based obligations for AI developers and deployers, with heightened requirements for high-impact AI systems and a national AI safety infrastructure.

National Assembly of the Republic of Korea; Ministry of Science and ICT (MSIT)

Guideline · APAC/Singapore

Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector

The MAS FEAT Principles establish a voluntary but authoritative framework for financial institutions in Singapore to govern the responsible use of AI and data analytics, structured around four principles: Fairness, Ethics, Accountability, and Transparency.

Monetary Authority of Singapore (MAS)

Guideline · APAC/New Zealand

New Zealand Responsible AI Guidance for Business

The New Zealand Government has published comprehensive guidance to help businesses implement responsible AI practices in commercial contexts. It provides a practical framework covering AI governance structures, risk management processes, and accountability expectations. While non-binding, the guidance reflects government expectations for how enterprises operating in New Zealand should develop and deploy AI systems.

New Zealand Government

Framework · US

NIST AI 600-1 Generative AI Profile

A companion resource to the NIST AI RMF 1.0 that provides structured guidance for managing the unique risks presented by generative AI systems, including large language models and multimodal foundation models.

National Institute of Standards and Technology (NIST), U.S. Department of Commerce

Framework · US

NIST Artificial Intelligence Risk Management Framework Playbook

Voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0) that provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.

National Institute of Standards and Technology (NIST), U.S. Department of Commerce

Regulation · US

New York City Local Law 144 of 2021 – Automated Employment Decision Tools

Requires employers and employment agencies using automated employment decision tools (AEDTs) in New York City hiring or promotion decisions to conduct annual bias audits, publish audit results, and notify candidates prior to use.

New York City Council; administered by the New York City Department of Consumer and Worker Protection (DCWP)

Framework · ISO/OECD/UN

OECD AI Principles

The OECD AI Principles are the first intergovernmental standard on AI, establishing five value-based principles and five recommendations for governments to promote trustworthy AI that respects human rights and democratic values.

Organisation for Economic Co-operation and Development (OECD)

Framework · ISO/OECD/UN

Governing with Artificial Intelligence

This OECD report examines how governments are deploying artificial intelligence in public sector operations, drawing on analysis of real-world use cases across member and partner countries. It finds that 57% of documented government AI applications focus on streamlining public services and 45% on improving administrative decision-making. The report also identifies governance risks including biased training data, insufficient transparency, and institutional overreliance on automated outputs.

Organisation for Economic Co-operation and Development

Guideline · US

OMB Memorandum M-26-04: Increasing Public Trust in AI Through Unbiased AI Principles

OMB Memorandum M-26-04 establishes unbiased AI principles that federal agencies must follow when deploying AI systems that interact with or affect the public. It applies to executive branch agencies procuring, developing, or operating AI tools in federal programs. Agencies are required to implement safeguards against algorithmic bias and to promote transparency and accountability in AI-driven federal decision-making.

U.S. Office of Management and Budget

Guideline · US

SEC AI Governance Guidance

The SEC has issued rules, guidance, and proposed rulemaking addressing AI governance obligations for registered investment advisers, broker-dealers, and public companies, focusing on conflicts of interest in predictive data analytics, AI-related disclosures in securities filings, and examination priorities targeting algorithmic systems.

U.S. Securities and Exchange Commission (SEC)

Framework · Global

Singapore Consensus on Global AI Safety Research Priorities

The Singapore Consensus is an international consensus document establishing a coordinated agenda for AI safety research priorities across jurisdictions and institutions. It emerged from a multilateral summit convened by the Singapore Government and reflects agreement among participating governments and organizations on where collaborative safety research efforts should be directed. The document does not impose binding legal obligations but provides a shared reference framework for national AI safety programs and research funding bodies.

Government of Singapore and International Summit Participants

Framework · APAC/Singapore

Singapore Global AI Assurance Sandbox

The Singapore Global AI Assurance Sandbox provides a controlled regulatory environment for organizations to test and validate AI systems against safety, reliability, and accountability standards before or during deployment. It is open to enterprises and AI developers seeking structured assurance pathways under Singapore regulatory oversight. Participants operate under defined sandbox conditions that temporarily relax certain compliance obligations in exchange for structured testing commitments and reporting.

Singapore Government (Infocomm Media Development Authority)

Regulation · APAC/Singapore

Singapore Personal Data Protection Act – AI and Automated Decision-Making Amendments

The Personal Data Protection (Amendment) Act 2020 and accompanying PDPC advisory guidelines address the use of personal data in AI and automated decision-making, introducing mandatory data breach notification, expanded accountability obligations, and guidance on responsible AI deployment under Singapore's Model AI Governance Framework.

Parliament of Singapore; administered by the Personal Data Protection Commission (PDPC) under the Ministry of Digital Development and Innovation (MDDI)

Regulation · US

Texas Responsible AI Governance Act (House Bill 149)

The Texas Responsible AI Governance Act (HB 149) is a state-level law enacted in June 2025, effective January 1, 2026, that establishes governance requirements for the deployment of artificial intelligence systems within Texas. It applies to entities deploying AI systems that affect Texas residents or are operated within the state. The Act imposes obligations around transparency, accountability, and risk management for covered AI deployments.

Texas Legislature

Guideline · UK

UK AI Growth Lab Regulatory Sandbox - Consultation on Two Models

The UK Department for Science, Innovation and Technology launched a consultation in October 2025 on the AI Growth Lab, a proposed regulatory sandbox enabling companies to test AI innovations under modified regulatory conditions. Two structural models are under consideration: a centrally operated sandbox administered by the government across multiple sectors, and a regulator-operated model in which a designated lead regulator manages each sandbox instance. The initiative is intended to reduce compliance barriers for AI development while maintaining appropriate oversight.

UK Department for Science, Innovation and Technology

Framework · UK

UK AI Opportunities Action Plan

The UK AI Opportunities Action Plan is a government-issued strategic framework published in January 2025 that sets out the Labour government's agenda for accelerating AI adoption and infrastructure investment across the United Kingdom. It applies to public sector bodies, AI developers, and enterprises operating or investing in AI in the UK. Key commitments include the creation of dedicated AI Growth Zones, expansion of compute infrastructure, and the establishment of a National Data Library to facilitate access to public data for AI development.

UK Labour Government (Department for Science, Innovation and Technology)

Pending · UK

UK AI Regulation Framework

The UK AI Regulation Framework is a principles-based, sector-led approach to AI governance that delegates primary regulatory responsibility to existing sector regulators rather than establishing a unified AI-specific regulator. It is currently transitioning toward a more structured legislative footing following the Labour government's AI Opportunities Action Plan published in January 2025.

UK Department for Science, Innovation and Technology (DSIT); AI Safety Institute (AISI); sector regulators including FCA, ICO, CMA, Ofcom, MHRA

Guideline · UK

UK ICO Guidance on Artificial Intelligence and Data Protection

The UK ICO's guidance on AI and data protection establishes how the UK GDPR and Data Protection Act 2018 apply to the design, development, and deployment of AI systems that process personal data.

Information Commissioner's Office (ICO)

Framework · Global

Global Dialogue on AI Governance (UN General Assembly Resolution A/RES/79/325)

The UN General Assembly established the Global Dialogue on AI Governance through Resolution A/RES/79/325 to create an inclusive intergovernmental platform for deliberating on artificial intelligence challenges and priorities. It is open to UN member states and a broad range of stakeholders, including civil society and private sector participants. The submission portal accepts contributions through April 30, 2026, with collected input intended to shape emerging global AI governance priorities.

United Nations General Assembly

Guideline · ISO/OECD/UN

UNESCO Recommendation on the Ethics of Artificial Intelligence

The UNESCO Recommendation on the Ethics of Artificial Intelligence is the first global normative instrument on AI ethics adopted by a UN body, providing a comprehensive ethical framework and specific policy recommendations across eleven thematic areas for all 194 UNESCO Member States.

United Nations Educational, Scientific and Cultural Organization (UNESCO)

Regulation · US

Executive Order: Eliminating State Law Obstruction of National Artificial Intelligence Policy

This Executive Order directs federal agencies to identify and challenge state AI laws that conflict with national AI policy, aiming to establish a unified federal regulatory floor. It establishes an AI Litigation Task Force and requires the Secretary of Commerce to evaluate onerous state AI regulations within 90 days. The FCC and FTC are directed to develop federal reporting standards and policies designed to preempt conflicting state-level requirements.

The White House

Framework · US

Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence

Executive Order 14179 directs federal agencies to develop a new national AI Action Plan prioritizing US dominance in AI development and deployment. It applies to federal agencies and shapes the regulatory environment for private-sector AI developers and deployers operating in the United States. The order explicitly revokes prior AI-related executive orders focused on safety guardrails, reorienting federal AI policy toward deregulation and competitiveness.

Executive Office of the President of the United States

Framework · US

Ensuring a National Policy Framework for Artificial Intelligence

This Executive Order directs federal agencies to establish a unified national AI policy and limit the effect of conflicting state-level AI regulations. It requires the Attorney General to form an AI Litigation Task Force within 30 days and the Secretary of Commerce to evaluate conflicting state AI laws within 90 days. The FCC and FTC are directed to initiate proceedings to set federal standards that may preempt state laws on deceptive AI practices.

Executive Office of the President (White House)

Framework · US

Treasury Department AI Risk Management Framework for Financial Services

The US Treasury Department published this framework in February 2026 to translate NIST AI Risk Management Framework principles into 230 discrete operational control objectives tailored to the financial services sector. It applies to financial institutions subject to Treasury oversight, including banks, asset managers, insurance companies, and payment processors that develop or deploy AI systems. Covered entities are expected to implement controls spanning model lifecycle governance, identity resolution, data governance, and alignment with existing SOC 2 and NIST Cybersecurity Framework requirements.

US Department of the Treasury

Regulation · US

Utah Artificial Intelligence Policy Act (SB 149)

Utah SB 149 requires businesses and individuals in regulated occupations to disclose when consumers are interacting with an AI system rather than a human. It establishes liability under existing consumer protection laws for entities that fail to make required disclosures or that use AI in a deceptive manner. The law also creates an Office of Artificial Intelligence Policy within the Utah Department of Commerce to oversee AI policy development and coordinate regulatory guidance.

Utah State Legislature