Dissecting Tort Liability for AI-Driven Technologies in Surgery
I had the pleasure of discussing my research at the intersection of law, health, and technology at the M-EPLI Talk on 5 November, organised by Professors Daniel On and Kate O’Reilly. In the talk, I focused on medical liability for the use of AI-driven technology in medicine. Now, in my role as an M-EPLI blogger, I provide below a summary of the talk’s main themes.
AI in Medicine: The Future is Now
I began by introducing the current landscape of AI-driven technologies in healthcare, including technologies already on the market in both the US and EU that improve access to diagnostic medicine, such as AI systems that can diagnose diabetic retinopathy and read chest X-rays without human intervention. I also discussed AI systems that show promise for improving disease prevention, diagnosis, prognosis, and treatment, and highlighted the CLASSICA project, which works towards using AI to improve intraoperative cancer classification for colorectal surgeons.
The Black-Box Paradox: The Potential for More Reliable but Less Transparent AI
Next, I provided an overview of the basic characteristics of the different AI systems that might drive these new technologies, describing “white-box” systems as those that use transparent architecture, like simple decision trees, and “black-box” systems as those that use opaque architecture, like deep-learning algorithms. The crucial difference is that humans can understand the algorithmic reasoning underlying the AI’s output (sometimes referred to as a decision or recommendation) in white-box systems, but not in black-box systems. The complexity of black-box systems, however, is often precisely what makes them more beneficial than white-box systems in some scenarios. An additional challenge of black-box systems in medicine is that human physicians are not always able to independently assess the quality of the AI’s output, particularly where the AI provides (and is designed precisely to provide) a new piece of information that was previously unavailable and could not be independently discovered by the physician. In these cases, although “explainable” AI systems might offer some explanation for the output, it is important for users to recognise that this is a post-hoc explanation and may not reflect the true reasoning underlying the AI’s output.

Challenges to the Current Liability Framework
I emphasised that while AI-driven technology can improve both access to medicine and patient outcomes, it also poses challenges to the current tort liability framework. There are two main avenues for tort liability in the case of an AI-caused injury. First, fault-based tort liability can hold physicians, hospitals, and manufacturers responsible if their behaviour in creating, implementing, and/or using the AI was negligent (i.e. unreasonable under the circumstances). Second, strict product liability can hold manufacturers (and others in the supply chain) responsible if the AI is considered defective. I foreshadowed the difficulty of determining whether an AI system is defective by posing the question: “Is a medical AI system defective just because it is a black box, or just because it is not 100% accurate?” Leaving this for the audience to ponder, I focused the rest of the talk on fault-based liability for physicians who use and rely on AI systems that have entered the market following regulatory review.
The Medical Standard of Care and AI
Using a comparison of medical liability in the United States and Germany, I described the standard of care applicable to physicians as a requirement to treat patients in accordance with the standards generally accepted in the relevant medical field. Falling below this standard means that the physician was negligent and would be liable under fault-based tort law for injuries caused by that negligence. Crucially, the standard of care for physicians is dynamic and flexible, and can consider “many factors, including a doctor's specialty, the resources available, and the advances of the medical profession at the time.”
New technologies impact the standard of care, and although there is no hard and fast rule for how quickly new technologies are integrated into the standard, courts generally recognise that the standard of care does not require the best available technology and that there may be more than one treatment option that meets it. However, if a physician uses technology that is outdated or obsolete, this will likely breach the legal standard of care.
The Future of Medical Liability Law
To conclude, I emphasised that the integration of AI-driven technologies into medical practice will require the current medical liability framework to adapt as new questions mount in response to changing standards of care and AI’s ability to conduct medical activities once reserved for human physicians. The European Union has been proactive in attempting to address these questions, and the recently adopted AI Act and Product Liability Directive are steps in the right direction; other jurisdictions can likely learn important lessons from the EU on this front. Yet questions remain about how best to tackle these liability challenges, with limited legal personhood for AI as one potential approach.
M.N. Duffourc
Mindy is an Assistant Professor of Private Law and a member of the Law and Tech Lab. Her research focuses on comparative health law and technology.
