The Withdrawal of the AI Liability Directive: A Critical Reflection on AI Liability in the EU
On September 28, 2022, the European Commission proposed the AI Liability Directive (AILD) as part of its strategy to create a unified regulatory framework for AI technologies in Europe. The Product Liability Directive (PLD), proposed at the same time, has since garnered acceptance and was adopted into EU law in October 2024. The AILD suffered a very different fate, culminating in its recent withdrawal.
The AILD’s withdrawal comes amid a wave of political uncertainty surrounding AI governance following the 2024 United States election and the resulting alliance with, and pressure from, big tech. But truth be told: the directive was always on shaky ground in Europe, and the Commission’s decision to abandon it is not all that surprising.
What was the AI Liability Directive’s goal?
Put simply, the AILD aimed to give victims of AI-caused harm the same avenues for redress as victims of any other type of harm. Generally, two types of liability are relevant for victims of AI-caused harm: strict product liability and fault-based liability. The AILD was meant to complement the revised PLD, which dealt with strict product liability, by providing a feasible avenue for victims under fault-based liability law.
What the AI Liability Directive did and didn’t do
The AILD recognised that emerging technologies like AI challenge traditional fault-based liability law, which struggles to accommodate AI autonomy, opacity, and unpredictability. The AILD introduced some harmonised procedural mechanisms, such as presumptions of causation and rules for obtaining evidence, to make it easier for claimants to prove their cases. These rules left the core elements of fault-based tort liability—fault, causation, and damages—to the discretion of EU Member States’ national laws. This meant that the directive did little to harmonise the substantive rules governing liability for AI-caused harm. Without any provisions substantively addressing the concept of fault in cases involving AI, the AILD ignored a fundamental challenge in AI cases: identifying fault by a legally responsible party. Without fault, there is no fault-based liability. Consider, for example, black-box medical AI systems, which are increasingly used in healthcare for their accuracy but are notoriously difficult to interpret. When such systems cause harm, it is often unclear who—if anyone—is at fault.
What now?
While the withdrawal of the AILD may seem like a setback, it is important to remember that EU Member States already have well-established fault-based liability systems. These systems, while imperfect, are capable of adapting to new challenges posed by AI technologies. National courts will be called upon to interpret and apply liability laws in innovative ways.
Fault-based liability is a deeply contextual area of law that requires careful consideration of local legal traditions, judicial practices, and cultural norms and the application of broad legal principles like fairness, reasonableness, and proportionality. If EU fault-based liability law is to find a way toward harmonisation on the issue of AI-caused harm, a bottom-up approach rooted in member state legal systems and practical experience may ultimately prove more effective than top-down regulatory interventions.
Ultimately, the goal should be to strike a balance between providing victims with meaningful redress and fostering innovation of beneficial AI technologies. This will require a nuanced, context-sensitive approach that respects the complexities of liability law while addressing the unique challenges posed by AI. And while the AILD may have fallen short, the journey toward a fair and effective liability framework for AI is far from over.
M.N. Duffourc
Mindy is an Assistant Professor of Private Law and a member of the Law and Tech Lab. Her research focuses on comparative health law and technology.