PhD defence Luca Mira Heising

Supervisors: Prof. Dr. Frank Verhaegen, Prof. Dr. Maria Jacobs, Prof. Dr. Carol Ou

Co-supervisor: Dr. Cecile Wolfs

Keywords: Explainable Artificial Intelligence, Implementation gap, AI implementation, Radiotherapy

 

"From Black Box to White Coat: Bridging the Ai Implementation Gap"

 

Artificial intelligence (AI) is playing an increasingly important role in healthcare, partly due to staff shortages and a growing demand for care. Although AI solutions exist for every step of the radiotherapy workflow, implementation in clinical practice remains challenging. This thesis examines the gap between AI development and clinical implementation, addressing what is required to implement AI carefully and safely in radiotherapy.

A major barrier is the “black box” nature of AI: due to its self-learning capabilities, it is often unclear why a model produces a specific outcome. For this reason, explainable AI (XAI) has received considerable attention in scientific research. XAI methods aim to make AI outcomes more transparent, for example through visualizations such as heatmaps. While promising, these methods also entail risks. Heatmaps can be difficult to interpret and may introduce cognitive bias, potentially leading healthcare professionals to draw incorrect conclusions, an undesirable effect in a high-risk context such as healthcare.

A promising alternative is the use of counterfactual explanations, which clarify AI outcomes through “what-if” scenarios. In this thesis, this approach was examined by systematically varying tumor size in the input to a detection model and analyzing when the model’s output changed. The thesis concludes that current XAI methods do not yet meet the safety requirements of radiotherapy. Nevertheless, they show potential, and further research is needed to make black-box AI methods more transparent.
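For readers curious what such counterfactual “what-if” probing can look like in practice, below is a minimal illustrative sketch. It is not the thesis code: the make_phantom image generator and the detector_score function are hypothetical stand-ins (assumptions) used only to show how systematically increasing a simulated tumor radius reveals the point at which a detector’s output flips.

```python
# Illustrative sketch of counterfactual ("what-if") probing; not the thesis implementation.
# A stand-in detector scores synthetic images containing a circular "tumor"; the tumor
# radius is varied systematically to find where the model's decision changes.

import numpy as np

def make_phantom(radius_px: int, size: int = 64) -> np.ndarray:
    """Create a synthetic image with a bright disc of the given radius (hypothetical input)."""
    y, x = np.ogrid[:size, :size]
    disc = (x - size // 2) ** 2 + (y - size // 2) ** 2 <= radius_px ** 2
    image = np.full((size, size), 0.1)   # background intensity
    image[disc] = 0.9                    # "tumor" intensity
    return image

def detector_score(image: np.ndarray) -> float:
    """Stand-in for a trained detection model (assumption): returns a pseudo-probability
    that simply reacts to the fraction of bright pixels."""
    bright_fraction = float((image > 0.5).mean())
    return 1.0 / (1.0 + np.exp(-40.0 * (bright_fraction - 0.02)))

def counterfactual_radius(threshold: float = 0.5, max_radius: int = 20):
    """Sweep the tumor radius and report the smallest radius the detector flags."""
    for radius in range(max_radius + 1):
        score = detector_score(make_phantom(radius))
        print(f"radius={radius:2d} px -> score={score:.3f}")
        if score >= threshold:
            return radius                # decision boundary found
    return None

if __name__ == "__main__":
    flip = counterfactual_radius()
    print(f"Model output changes at radius ≈ {flip} px" if flip is not None
          else "No decision change within the tested range")
```

The sweep over a single, clinically meaningful input feature is what makes the explanation a counterfactual: it answers “how much smaller could the tumor be before the model no longer detects it?” rather than highlighting pixels, as a heatmap would.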

