EU AI Liability Directive
Proposed by
European Commission
A proposed EU directive that would have adapted civil liability rules to apply to AI systems, enabling individuals harmed by AI to obtain compensation by lowering evidentiary burdens through disclosure and causation presumptions. The proposal was formally withdrawn in early 2025 following failure to reach political agreement.
Overview
The AI Liability Directive (AILD) was introduced by the European Commission in September 2022 as a companion instrument to the EU AI Act. Where the AI Act established ex ante product safety and conformity requirements, the AILD was designed to address ex post civil liability: specifically, the challenge claimants face in proving fault and causation when harm results from opaque, automated decision-making systems. The proposal contained two central mechanisms. First, national courts would be empowered to order defendants to disclose evidence and documentation about high-risk AI systems. Second, a rebuttable presumption of causation would apply where a defendant failed to comply with a disclosure order, or where the claimant demonstrated that non-compliance with an applicable duty of care made the harm likely.

The directive was intended to apply across member states through minimum harmonisation, leaving national procedural law otherwise intact. Negotiations stalled over disagreements about scope, the definition of fault, and the interaction with the Product Liability Directive (PLD), which was being revised in parallel. In February 2025, the European Commission formally withdrew the proposal as part of a broader regulatory simplification initiative under the Competitiveness Compass.

The PLD revision, which was adopted and extends some coverage to AI-enabled products, partially fills the gap. Enterprises should monitor whether a revised or successor instrument is tabled under the current Commission mandate. Member states may also introduce national AI liability rules in the interim, creating fragmented exposure.
Key Requirements
- Disclosure obligation: Courts could order defendants to disclose relevant evidence about high-risk AI systems where claimants demonstrate a plausible case of harm.
- Rebuttable presumption of causation: Where disclosure is refused or the defendant has breached a relevant duty of care, courts would presume a causal link between the breach and the harm, subject to rebuttal by the defendant.
- Fault-based framework: Liability was to remain fault-based; the directive did not introduce strict liability for AI systems (that question was addressed in the parallel PLD revision).
- Interaction with EU AI Act: The duty of care triggering the presumption was linked to obligations under the EU AI Act, including conformity assessments, logging, and human oversight requirements for high-risk systems.
- Minimum harmonisation: Member states would have retained freedom to adopt more protective national rules.
- Applicability to providers and deployers: Both developers and deploying entities could face liability, consistent with the dual-actor model in the EU AI Act.
What Your Organization Must Do
- Monitor the European Commission's legislative agenda under the current mandate for any successor or revised AI liability instrument, assigning a designated legal or regulatory affairs lead to track developments at least quarterly.
- Audit existing documentation and evidence management practices for high-risk AI systems classified under Annex III of the EU AI Act, ensuring conformity assessment records, logs, and human oversight documentation are complete and retrievable in anticipation of future disclosure obligations.
- Review the adopted Product Liability Directive revision now, as it partially covers AI-enabled products and creates enforceable obligations; engage external EU counsel to assess gaps in your current product liability posture by Q3 2025.
- Map all EU member states where your AI systems are deployed and instruct local counsel to identify any national AI liability rules being developed or enacted, since member-state fragmentation is likely following the AILD withdrawal.
- Brief the Chief Legal Officer and Risk Committee on residual litigation exposure for high-risk AI deployments, noting that claimant-friendly causation arguments may still be advanced under existing national tort frameworks even without the AILD.
- Align ongoing EU AI Act compliance programs with liability risk management by confirming that conformity, logging, and oversight obligations for high-risk systems are met on schedule, since these duties formed the fault trigger under the withdrawn directive and remain relevant to any successor framework.
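The documentation audit step above can be sketched in code as a simple completeness check over a system inventory. This is an illustrative aid only, not part of any regulation: the artifact names and the inventory structure are assumptions, loosely modeled on EU AI Act duties for high-risk systems (conformity assessment, logging, human oversight), and should be adapted to your own records system.

```python
# Hypothetical completeness check for high-risk AI system documentation.
# Artifact names below are illustrative assumptions, not regulatory terms.

REQUIRED_ARTIFACTS = {
    "conformity_assessment",  # conformity assessment record
    "event_logs",             # automatically generated system logs
    "human_oversight_doc",    # human oversight documentation
}

def audit_documentation(inventory):
    """Return {system_id: [missing artifacts]} for systems with gaps.

    `inventory` maps each system identifier to the set of artifact
    names currently on file for that system.
    """
    gaps = {}
    for system_id, artifacts in inventory.items():
        missing = REQUIRED_ARTIFACTS - set(artifacts)
        if missing:
            gaps[system_id] = sorted(missing)
    return gaps

# Example: one complete record, one with gaps (system names are invented).
inventory = {
    "credit-scoring-v2": {"conformity_assessment", "event_logs",
                          "human_oversight_doc"},
    "cv-screening-v1": {"event_logs"},
}
print(audit_documentation(inventory))
# {'cv-screening-v1': ['conformity_assessment', 'human_oversight_doc']}
```

Running such a check periodically, and before responding to any disclosure request, helps ensure that the records which would underpin a defence against a causation presumption are actually retrievable.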
Frequently Asked Questions
- Was the EU AI Liability Directive ever enacted into law?
  No. The European Commission formally withdrew the proposal in February 2025 after negotiations broke down over disagreements on scope, fault definitions, and interaction with the revised Product Liability Directive. It never progressed to adoption and has no binding legal effect.
- Why was the AI Liability Directive withdrawn in 2025?
  The Commission withdrew it as part of a broader regulatory simplification push under the Competitiveness Compass. Political agreement stalled over the directive's scope, its definition of fault, and how it would interact with the simultaneously revised Product Liability Directive.
- Does the revised Product Liability Directive cover AI systems now that the AILD is withdrawn?
  Partially. The adopted Product Liability Directive revision extends coverage to AI-enabled products and software, but it does not replicate the AILD's disclosure orders or causation presumptions for services and standalone AI systems. Gaps remain, particularly for AI deployed as a service.
- Could companies still face AI-related civil liability in the EU without the AILD?
  Yes. Claimants can still pursue claims under existing national tort frameworks, and causation arguments may be advanced using general civil procedure rules. Member states may also enact their own AI liability rules now that the AILD has been withdrawn, creating fragmented exposure across jurisdictions.
- How did the AILD's causation presumption interact with EU AI Act compliance obligations?
  The rebuttable presumption was triggered when a defendant breached a duty of care linked to EU AI Act obligations, such as conformity assessments, logging, and human oversight requirements for high-risk systems. Failure to meet those obligations would have triggered the presumption of causation, which the defendant could then seek to rebut.
- Should compliance teams still prepare documentation as if the AILD were in force?
  Yes, for practical reasons. Conformity records, logs, and oversight documentation for high-risk AI systems remain required under the EU AI Act regardless. A successor directive is possible under the current Commission mandate, and complete documentation reduces litigation exposure under both national tort law and the revised Product Liability Directive.
