EU AI Liability Directive
AILD · European Parliament and Council of the European Union
A proposed EU directive that would have adapted civil liability rules to apply to AI systems, enabling individuals harmed by AI to obtain compensation by lowering evidentiary burdens through disclosure and causation presumptions. The proposal was formally withdrawn in early 2025 following failure to reach political agreement.
Overview
The AI Liability Directive (AILD) was introduced by the European Commission in September 2022 as a companion instrument to the EU AI Act. Where the AI Act established ex ante product safety and conformity requirements, the AILD was designed to address ex post civil liability: specifically, the difficulty claimants face in proving fault and causation when harm results from opaque, automated decision-making systems.

The directive proposed two central mechanisms. First, national courts would be empowered to order defendants to disclose evidence and documentation about high-risk AI systems. Second, a rebuttable presumption of causation would apply where a defendant failed to comply with a disclosure order, or where the claimant demonstrated that non-compliance with an applicable duty of care made the harm likely. The directive was intended to apply across member states through minimum harmonisation, leaving national procedural law otherwise intact.

Negotiations stalled over disagreements on scope, the definition of fault, and the interaction with the Product Liability Directive (PLD), which was being revised in parallel. In February 2025, the European Commission formally withdrew the proposal as part of a broader regulatory simplification initiative under the Competitiveness Compass. The revised PLD, which was adopted and extends some coverage to AI-enabled products, partially fills the resulting gap. Enterprises should monitor whether a revised or successor instrument is tabled under the current Commission mandate; member states may also introduce national AI liability rules in the interim, creating fragmented exposure.
Key Requirements
- Disclosure obligation: Courts could order defendants to disclose relevant evidence about high-risk AI systems where claimants demonstrate a plausible case of harm.
- Rebuttable presumption of causation: Where disclosure is refused or the defendant has breached a relevant duty of care, courts would presume a causal link between the breach and the harm, subject to rebuttal by the defendant.
- Fault-based framework: Liability was to remain fault-based; the directive did not introduce strict liability for AI systems (that question was addressed in the parallel PLD revision).
- Interaction with EU AI Act: The duty of care triggering the presumption was linked to obligations under the EU AI Act, including conformity assessments, logging, and human oversight requirements for high-risk systems.
- Minimum harmonisation: Member states would have retained freedom to adopt more protective national rules.
- Applicability to providers and deployers: Both developers and deploying entities could face liability, consistent with the dual-actor model in the EU AI Act.