Explainable and Reliable Artificial Intelligence (ERAI)
Two of the major challenges for Artificial Intelligence are to provide ‘explanations’ for the recommendations made by intelligent systems and to guarantee their ‘reliability’. Explanations are important because they help the people involved understand why a recommendation was made. Reliability is important when decisions concern people’s safety or profoundly affect their lives.
Recent high-profile successes in Machine Learning have mostly been achieved using deep neural networks, which yield ‘black-box’ input-output mappings that can be challenging to explain to users. Especially in the medical, military and legal fields, black-box machine-learning techniques are unacceptable, since decisions may have a profound impact on people’s lives. As recent news reports have shown, without strict ethical standards, unbiased data collection, and attention to algorithmic bias, AI may amplify the biases that already pervade our society.
Research focus and application
Research within the ERAI theme investigates different ways to make intelligent systems more explainable and more reliable. Some of the research foci are:
- Analyzing whether the input-output relations of a black-box AI system can provide high-level, correlation-based explanations of its behavior (see the permutation-importance sketch after this list).
- Logic-based systems can provide explanations and can reason with (legal) regulations that must be adhered to. Integrating logic-based approaches with machine learning is one possible way to realize explainable artificial intelligence, and is an important challenge for the near future.
- Learning inherently explainable models, rather than an opaque input-output mapping, may improve explainability with little or no loss in the quality of predictions, decisions or recommendations (see the surrogate-model sketch after this list).
- Improving the understanding of deep neural networks, especially with respect to the influence of the input and intermediate layers on the outputs, the flow of information through the network, the detection of changes that can make the system collapse, and the system's tolerance to such changes.
- Using machine learning and causal inference to explain and understand the relationships between variables in high-dimensional, complex data (see the confounder-adjustment sketch after this list).
- Developing model-checking tools for verifying properties of safety-critical engineering systems and medical interventions.
- Making reliable predictions with confidence bounds on the error (see the conformal-prediction sketch after this list).
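One simple way to probe a black box purely through its input-output relations is permutation feature importance: shuffle one input column at a time and measure how much the model's error grows. The sketch below, in plain NumPy, is illustrative only; the function names, toy data, and stand-in model are assumptions, not the group's actual method.

```python
# Minimal sketch of permutation feature importance: probe a black-box model
# purely through its input-output behavior. All names here are illustrative.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, rng=None):
    """Average increase in MSE when each feature column is shuffled.

    predict : callable mapping an (n, d) array to n predictions
    X, y    : validation inputs and targets
    """
    rng = np.random.default_rng(rng)
    baseline = np.mean((predict(X) - y) ** 2)           # baseline error
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])                   # break feature j's link to y
            drops.append(np.mean((predict(X_perm) - y) ** 2) - baseline)
        importances[j] = np.mean(drops)
    return importances

# Toy usage: y depends on feature 0 only, so its importance dominates.
X = np.random.default_rng(0).normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.1 * np.random.default_rng(1).normal(size=500)
black_box = lambda Z: 2.0 * Z[:, 0]                     # stand-in for any model
print(permutation_importance(black_box, X, y))
```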
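One common instance of learning an explainable model is a surrogate: fit a small, readable model to reproduce a black box's predictions, then inspect its rules. A minimal sketch using scikit-learn, assuming a random forest as the black box and a depth-3 tree as the surrogate; both choices are illustrative placeholders.

```python
# Minimal sketch of an explainable surrogate: fit a shallow decision tree to
# mimic a black-box model's predictions, then print its decision rules.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agrees with the black box on {fidelity:.1%} of samples")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The fidelity score makes the trade-off explicit: a shallower tree is easier to read but mimics the black box less faithfully.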
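To illustrate why causal inference matters here: a naive regression can badly misstate a relationship when a confounder drives both variables, while adjusting for that confounder recovers the true effect. A sketch on simulated data; the data-generating process and coefficients below are invented for illustration.

```python
# Minimal sketch of back-door adjustment for a confounder, on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                        # confounder: drives both T and Y
t = 1.5 * z + rng.normal(size=n)              # "treatment"
y = 2.0 * t + 3.0 * z + rng.normal(size=n)    # true causal effect of T is 2.0

# Naive estimate: least squares of Y on T alone (biased upward by Z).
naive = np.linalg.lstsq(np.c_[t, np.ones(n)], y, rcond=None)[0][0]
# Adjusted estimate: include Z as a regressor (back-door adjustment).
adjusted = np.linalg.lstsq(np.c_[t, z, np.ones(n)], y, rcond=None)[0][0]

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}, truth: 2.00")
```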
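One established route to predictions with error bounds is conformal prediction (see the COPA 2018 proceedings under Highlighted publications). Below is a minimal sketch of split conformal regression in plain NumPy; the linear point predictor and synthetic data are illustrative assumptions.

```python
# Minimal sketch of split conformal prediction: calibrate an interval width on
# held-out residuals so that intervals cover y with probability >= 1 - alpha.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=n)

# Split into a proper training set and a calibration set.
X_tr, y_tr = X[: n // 2], y[: n // 2]
X_cal, y_cal = X[n // 2 :], y[n // 2 :]

# Fit any point predictor on the training half (here: least squares).
coef = np.linalg.lstsq(np.c_[X_tr, np.ones(len(X_tr))], y_tr, rcond=None)[0]
predict = lambda Z: np.c_[Z, np.ones(len(Z))] @ coef

# Calibrate: take a conformal quantile of absolute residuals on held-out data.
alpha = 0.1                                   # target 90% coverage
scores = np.abs(y_cal - predict(X_cal))
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# The interval [y_hat - q, y_hat + q] is distribution-free: the coverage
# guarantee needs no assumptions beyond exchangeability of the data.
X_new = np.array([[0.5]])
y_hat = predict(X_new)[0]
print(f"90% interval: [{y_hat - q:.2f}, {y_hat + q:.2f}]")
```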
Reliable Artificial Intelligence in practice

Scope
- Explainable AI
- Reliable AI
- Machine Learning
- Deep Learning
- Causal Inference
- Reasoning and Argumentation
- Software and systems verification
The ERAI website is under construction; ERAI was previously embedded in the Robots, Agents and Interaction (RAI) group.
Go to RAI website
Researchers
Highlighted publications
- Mehrkanoon, S. (2019). Deep neural-kernel blocks. Neural Networks, 116, 46-55. https://doi.org/10.1016/j.neunet.2019.03.011
- Gammerman, A., Vovk, V., Luo, Z., Smirnov, E., & Peeters, R. (Eds.) (2018). Proceedings of the 7th Symposium on Conformal and Probabilistic Prediction and Applications, COPA 2018, 11-13 June 2018, Maastricht, The Netherlands. Proceedings of Machine Learning Research.
- Spanakis, G., Weiss, G., Boh, B., Lemmens, L., & Roefs, A. (2017). Machine learning techniques in eating behavior e-coaching: Balancing between generalization and personalization. Personal and Ubiquitous Computing, 21(4), 645-659. https://doi.org/10.1007/s00779-017-1022-4
- Seiler, C. (2017). Bayesian Statistics in Computational Anatomy. In Statistical Shape and Deformation Analysis (pp. 193-214). Elsevier.
- Houbraken, M., Sun, C., Smirnov, E., & Driessens, K. (2017). Discovering Hidden Course Requirements and Student Competences from Grade Data. In Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization (pp. 147-152). ACM. UMAP '17 https://doi.org/10.1145/3099023.3099034
- Roos, N. (2017). Learning-based diagnosis and repair. In Benelux Artificial Intelligence Conference (BNAIC) (pp. 2-16).
- Collins, P. (2016). Model checking dynamical systems. Nieuw Archief voor Wiskunde, 5/17(3), 214-220.
Software tools
Ariadne - a C++ library for formal verification of cyber-physical systems, using reachability analysis for nonlinear hybrid automata. http://www.ariadne-cps.org/