The Withdrawal of the AI Liability Directive: A Critical Reflection on AI Liability in the EU

Author: M.N. Duffourc

On September 28, 2022, the European Commission proposed the AI Liability Directive (AILD) as part of its strategy to create a unified regulatory framework for AI technologies in Europe. The Product Liability Directive (PLD), proposed at the same time, has since gained acceptance and was adopted into EU law in October 2024. The AILD suffered a very different fate, culminating in its recent withdrawal.

The AILD’s withdrawal comes amid a wave of political uncertainty surrounding AI governance following the 2024 United States election and the resulting alliance with, and pressure from, big tech. But, truth be told, the directive was always on shaky ground in Europe, and the Commission’s decision to abandon it is not all that surprising.

What was the AI Liability Directive’s goal?

Put simply, the AILD sought to give victims of AI-caused harm the same avenues for redress as victims of any other type of harm. Generally, two types of liability are relevant for victims of AI-caused harm: strict product liability and fault-based liability. The AILD was meant to complement the revised PLD, which dealt with strict product liability, by providing a feasible avenue for victims under fault-based liability law.


What the AI Liability Directive did and didn’t do

The AILD recognised that emerging technologies like AI challenge traditional fault-based liability law, which struggles to accommodate AI’s autonomy, opacity, and unpredictability. The Directive therefore introduced harmonised procedural mechanisms, such as presumptions of causation and rules for obtaining evidence, to make it easier for claimants to prove their cases.

These rules, however, left the core elements of fault-based tort liability (fault, causation, and damages) to the discretion of EU Member States’ national laws. This meant that the Directive did little to harmonise the substantive rules governing liability for AI-caused harm. Without any provisions substantively addressing the concept of fault in cases involving AI, the AILD ignored a fundamental challenge in such cases: identifying fault on the part of a legally responsible party. Without fault, there is no fault-based liability. Consider, for example, black-box medical AI systems, which are increasingly used in healthcare for their accuracy but are notoriously difficult to interpret. When such a system causes harm, it is often unclear who, if anyone, is at fault.

What now?

While the withdrawal of the AILD may seem like a setback, it is important to remember that EU Member States already have well-established fault-based liability systems. These systems, though imperfect, are capable of adapting to the new challenges posed by AI technologies. National courts will be called upon to interpret and apply liability laws in innovative ways.

Fault-based liability is a deeply contextual area of law. It requires careful consideration of local legal traditions, judicial practices, and cultural norms, as well as the application of broad legal principles like fairness, reasonableness, and proportionality. If EU fault-based liability law is to move toward harmonisation on the issue of AI-caused harm, a bottom-up approach rooted in Member State legal systems and practical experience may ultimately prove more effective than top-down regulatory intervention.

Ultimately, the goal should be to strike a balance between providing victims with meaningful redress and fostering innovation in beneficial AI technologies. This will require a nuanced, context-sensitive approach that respects the complexities of liability law while addressing the unique challenges posed by AI. And while the AILD may have fallen short, the journey toward a fair and effective liability framework for AI is far from over.


M.N. Duffourc

Mindy is an Assistant Professor of Private Law and a member of the Law and Tech Lab. Her research focuses on comparative health law and technology.