AI Regulation in the European Union
The European Union has enacted the most comprehensive AI regulatory framework in the world. The EU AI Act — in force since August 2024, with obligations phasing in between February 2025 and August 2027 — introduces risk-based obligations for AI developers and deployers across four tiers: prohibited uses, high-risk systems with strict conformity requirements, limited-risk systems with transparency obligations, and minimal-risk AI with no specific mandates.
Alongside the AI Act, organizations deploying AI in EU markets must navigate a layered stack of intersecting regulations. GDPR governs personal data used to train and operate AI systems. The Digital Operational Resilience Act (DORA) imposes ICT risk-management requirements on financial entities, including those deploying AI. The Data Act, Digital Services Act, and Cyber Resilience Act each add obligations relevant to specific AI use cases. The EU AI Office, established in 2024, is the central authority responsible for supervising general-purpose AI models, including those posing systemic risk.
For compliance teams, the EU's approach means that no single framework is sufficient. A high-risk AI system in healthcare or hiring must satisfy the AI Act's conformity assessment requirements, maintain GDPR-compliant data practices, and meet sector-specific rules simultaneously. Organizations using AI in critical infrastructure, biometrics, or access to essential services face the most demanding compliance burden.
Key themes
1. Risk-tiered obligations — prohibited, high-risk, limited-risk, minimal-risk
2. Prohibited AI practices (social scoring, real-time biometrics in public spaces)
3. General-purpose AI model governance via the EU AI Office
4. Intersection with GDPR, DSA, DORA, and the Data Act
Regulatory frameworks and guidance
EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2025)
The EU AI Act's first major compliance deadline took effect on 2 February 2025, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. As of that date, the Act's prohibitions on AI systems deemed to pose unacceptable risks also became enforceable. Organizations must have ceased operating any prohibited AI practice and be able to demonstrate adequate staff competency with AI systems.
EU Parliament Trilogue Negotiations on AI Act Compliance Deadline Extensions
Pending trilogue negotiations between the European Parliament, Council, and Commission aim to extend key EU AI Act compliance deadlines. Reported targets would push the deadline for high-risk AI systems to December 2027 and the deadline for product-embedded AI to August 2028. The process responds to implementation difficulties encountered by regulated entities following the AI Act's initial phased obligations, which took effect in February and August 2025.
EU AI Liability Directive
A proposed EU directive that would have adapted civil liability rules to apply to AI systems, enabling individuals harmed by AI to obtain compensation by lowering evidentiary burdens through disclosure and causation presumptions. The proposal was formally withdrawn in early 2025 following failure to reach political agreement.
EU AI Office Framework
The institutional and regulatory framework establishing the European AI Office as the central EU body responsible for supervising general-purpose AI models, enforcing the EU AI Act at the supranational level, and coordinating AI governance across EU member states.
EU Code of Practice on Marking and Labelling of AI-Generated Content
The European Commission published a first draft of a voluntary code of practice establishing common standards for marking and labelling AI-generated content across the EU. It targets organisations that produce, distribute, or deploy AI systems capable of generating text, images, audio, or video. The code aims to improve transparency and consumer awareness by requiring identifiable disclosure when content is AI-generated.
EU Cyber Resilience Act
The EU Cyber Resilience Act establishes mandatory cybersecurity requirements for products with digital elements placed on the EU market, including hardware and software incorporating AI components, covering the entire product lifecycle from design through end-of-life.
EU Data Act
The EU Data Act establishes harmonised rules on access to and use of data generated by connected products and related services across the EU, addressing both personal and non-personal data. It creates new obligations for data holders to share data with users and third parties, and sets conditions for public sector bodies to access privately held data in exceptional circumstances.
EU Data Governance Act
The EU Data Governance Act establishes a regulatory framework for data intermediaries, data altruism organisations, and the re-use of public sector data protected by third-party rights. It creates new governance structures to facilitate trusted data sharing across sectors and member states, supporting the broader European Data Strategy.
European Commission Digital Omnibus on AI Regulation Proposal
The European Commission's Digital Omnibus proposal, published in November 2025, seeks to amend the EU AI Act's implementation timeline in response to administrative delays and the absence of harmonized technical standards. It targets all organizations subject to the AI Act, with particular relief provisions for small and medium enterprises. If adopted, it would postpone key high-risk AI obligations, streamline documentation requirements for SMEs, and strengthen the AI Office's supervisory role over general-purpose AI models.
EU Digital Operational Resilience Act
The EU Digital Operational Resilience Act (DORA), Regulation (EU) 2022/2554, establishes a comprehensive ICT risk management, incident reporting, operational resilience testing, and third-party risk oversight framework for EU financial entities, with direct implications for AI systems deployed in financial services and the technology providers that supply them.
EU Digital Services Act – AI and Algorithmic Accountability Provisions
The Digital Services Act imposes transparency, accountability, and risk-management obligations on online intermediaries with respect to algorithmic recommender systems, targeted advertising, and systemic risks. Obligations scale with platform size, with the most stringent requirements applying to very large online platforms (VLOPs) and very large online search engines (VLOSEs).
EU General-Purpose AI Model Training Data Public Summary Template
The European Commission published a standardized template for providers of general-purpose AI models to use when publicly disclosing summaries of their training data. It supports compliance with the transparency obligations for general-purpose AI models established under the EU AI Act. Providers are expected to follow the template structure when meeting their disclosure requirements under that regulation.
