AI Governance Institute

Practical Governance for Enterprise AI

20 entries

Must Comply · Regulation · US

California AI Transparency Act

California's AI Transparency Act (SB 942) requires developers of generative AI systems that reach defined usage thresholds to provide AI detection tools and disclosure mechanisms so users and consumers can identify AI-generated content. It establishes baseline transparency obligations for covered AI providers operating in or targeting California.

Must Comply · Regulation · US · High risk

California Health Care Services AI Act Disclosure Requirements

California law requires healthcare providers that use generative AI to communicate with patients to disclose that the communication was AI-generated and to provide clear instructions for reaching a human representative. The requirement applies to covered health care service providers operating in California. It is intended to ensure patients are not misled about the nature of the entity they are interacting with when seeking health information or services.
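In practice, the requirement means every generative-AI patient communication must carry a disclaimer plus instructions for reaching a human. A minimal sketch of that wrapping step in Python; the disclaimer wording, function name, and placement here are illustrative assumptions, and the underlying statute specifies exactly how the disclaimer must appear for written, audio, and video communications:

```python
# Illustrative only: disclaimer text and placement are assumptions,
# not the statutory language.
AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "To speak with a human representative, call {contact}."
)

def wrap_patient_message(body: str, contact: str) -> str:
    """Prepend an AI-generation disclaimer, with human-contact
    instructions, to a generative-AI patient communication."""
    return AI_DISCLAIMER.format(contact=contact) + "\n\n" + body

# Hypothetical usage with placeholder contact details.
message = wrap_patient_message(
    "Your lab results are available in the patient portal.",
    "(555) 010-0000",
)
print(message)
```

A real implementation would apply this at the point where AI-generated drafts leave the system, so no unlabelled AI content can reach a patient.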

Must Comply · Regulation · APAC/China

China Algorithm Recommendation Regulations

China's first dedicated regulation governing algorithmic recommendation systems, imposing transparency, user-control, and content-moderation obligations on providers of internet-based recommendation services operating in or targeting users in China.

Must Comply · Regulation · APAC/China

China Deep Synthesis Regulations

China's Provisions on the Administration of Deep Synthesis Internet Information Services regulate the development, distribution, and use of deep synthesis technologies (including AI-generated or AI-manipulated text, images, audio, video, and virtual humans) by service providers and users operating in or targeting China. They impose labelling, registration, security-assessment, and content-moderation obligations.

Must Comply · Regulation · APAC/China

Interim Measures for the Management of Generative Artificial Intelligence Services

China's Interim Measures for the Management of Generative AI Services impose mandatory obligations on providers of generative AI products and services to the Chinese public, covering training data governance, content safety, algorithmic transparency, and registration requirements.

Must Comply · Regulation · US

Colorado AI Act SB205

Colorado's SB 205 is the first US state statute imposing affirmative obligations on developers and deployers of high-risk AI systems, requiring algorithmic impact assessments, transparency notices, and consumer rights for consequential decisions.

Must Comply · Regulation · EU · Unacceptable risk

EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2025)

The EU AI Act's first major compliance deadline took effect on 2 February 2025, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. From that date, the Act's prohibitions on AI systems deemed to pose unacceptable risks are also enforceable. Organizations must cease operation of any prohibited AI practices and be able to demonstrate adequate staff competency with AI systems.

Must Comply · Regulation · EU

EU AI Liability Directive

A proposed EU directive that would have adapted civil liability rules to AI systems, making it easier for individuals harmed by AI to obtain compensation through disclosure obligations and rebuttable presumptions of causation. The proposal was formally withdrawn in early 2025 after failure to reach political agreement.

Must Comply · Regulation · EU

EU Cyber Resilience Act

The EU Cyber Resilience Act establishes mandatory cybersecurity requirements for products with digital elements placed on the EU market, including hardware and software incorporating AI components, covering the entire product lifecycle from design through end-of-life.

Must Comply · Regulation · EU

EU Data Act

The EU Data Act establishes harmonised rules on access to and use of data generated by connected products and related services across the EU, addressing both personal and non-personal data. It creates new obligations for data holders to share data with users and third parties, and sets conditions for public sector bodies to access privately held data in exceptional circumstances.

Must Comply · Regulation · EU

EU Data Governance Act

The EU Data Governance Act establishes a regulatory framework for data intermediaries, data altruism organisations, and the re-use of public sector data protected by third-party rights. It creates new governance structures to facilitate trusted data sharing across sectors and member states, supporting the broader European Data Strategy.

Must Comply · Regulation · EU

EU Digital Operational Resilience Act

The EU Digital Operational Resilience Act (DORA), Regulation (EU) 2022/2554, establishes a comprehensive ICT risk management, incident reporting, operational resilience testing, and third-party risk oversight framework for EU financial entities, with direct implications for AI systems deployed in financial services and the technology providers that supply them.

Must Comply · Regulation · EU

EU Digital Services Act – AI and Algorithmic Accountability Provisions

The Digital Services Act imposes transparency, accountability, and risk-management obligations on online intermediaries with respect to algorithmic recommender systems, targeted advertising, and systemic risks. Obligations scale with platform size, with the most stringent requirements applying to VLOPs and VLOSEs.

Must Comply · Regulation · US

Illinois Biometric Information Privacy Act – AI Provisions

The Illinois Biometric Information Privacy Act (BIPA), 740 ILCS 14, restricts the collection, storage, use, and disclosure of biometric identifiers and information, with direct implications for AI systems that process facial geometry, voiceprints, iris scans, and similar biometric data. BIPA is among the most litigated biometric privacy statutes in the United States.

Must Comply · Regulation · APAC/Korea

Korea AI Basic Act

South Korea's foundational AI governance statute establishing risk-based obligations for AI developers and deployers, with heightened requirements for high-impact AI systems and a national AI safety infrastructure.

Must Comply · Regulation · US

New York City Local Law 144 of 2021 – Automated Employment Decision Tools

Requires employers and employment agencies using automated employment decision tools (AEDTs) in New York City hiring or promotion decisions to conduct annual independent bias audits, publish a summary of the audit results, and notify candidates before use.
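The bias audit at the heart of Local Law 144 centers on impact ratios: the selection rate for each demographic category divided by the selection rate of the most-selected category. A minimal sketch of that calculation in Python, assuming outcomes arrive as simple (category, selected) pairs; the data shape and names are illustrative, and a real audit follows the categories and methodology set out in the implementing rules:

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Compute LL144-style impact ratios from (category, selected) pairs.

    selection rate = selected / total, per category;
    impact ratio   = category's rate / highest category's rate.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for category, was_selected in outcomes:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    rates = {c: selected[c] / totals[c] for c in totals}
    top = max(rates.values())
    return {c: rates[c] / top for c in rates}

# Illustrative data only, not a real audit population:
# group A selected 40/100, group B selected 20/100.
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 20 + [("B", False)] * 80)
print(impact_ratios(sample))  # {'A': 1.0, 'B': 0.5}
```

The published audit summary must report selection rates and impact ratios per category; a ratio well below 1.0 for a group is the signal the audit is designed to surface.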

Must Comply · Regulation · APAC/Singapore

Singapore Personal Data Protection Act – AI and Automated Decision-Making Amendments

The Personal Data Protection (Amendment) Act 2020 and accompanying PDPC advisory guidelines address the use of personal data in AI and automated decision-making, introducing mandatory data breach notification, expanded accountability obligations, and guidance on responsible AI deployment under Singapore's Model AI Governance Framework.

Must Comply · Regulation · US · High risk

Texas Responsible AI Governance Act (House Bill 149)

The Texas Responsible AI Governance Act (HB 149), enacted in June 2025 and effective 1 January 2026, establishes governance requirements for the deployment of artificial intelligence systems within Texas. It applies to entities deploying AI systems that affect Texas residents or are operated within the state, imposing obligations around transparency, accountability, and risk management for covered AI deployments.

Must Comply · Regulation · US

Executive Order: Eliminating State Law Obstruction of National Artificial Intelligence Policy

This Executive Order directs federal agencies to identify and challenge state AI laws that conflict with national AI policy, aiming to establish a unified federal regulatory floor. It establishes an AI Litigation Task Force and requires the Secretary of Commerce to evaluate onerous state AI regulations within 90 days. The FCC and FTC are directed to develop federal reporting standards and policies designed to preempt conflicting state-level requirements.

Must Comply · Regulation · US · High risk

Utah Artificial Intelligence Policy Act (SB 149)

Utah SB 149 requires businesses to disclose, upon request, when consumers are interacting with generative AI rather than a human, and requires individuals in regulated occupations to disclose this proactively. It establishes liability under existing consumer protection laws for entities that fail to make required disclosures or that use AI in a deceptive manner. The law also creates an Office of Artificial Intelligence Policy within the Utah Department of Commerce to oversee AI policy development and coordinate regulatory guidance.