Meet Prof. Nava Tintarev, full professor of Explainable Artificial Intelligence

Prof. Nava Tintarev joined Maastricht University in October of last year. She focuses on Explainable Artificial Intelligence, which has an important societal component to it: “My take on things is much more user-centered than what you see in a lot of computer science departments.”

You may not know what a recommender system is, but you have almost certainly encountered one. “It’s the type of automated system used by companies like Amazon and Spotify, which looks at your previous profile and tries to suggest something in the future”, says Prof. Tintarev.

As a recently appointed full professor in Explainable Artificial Intelligence, Nava Tintarev is a leading expert in human-computer interaction. Her work focuses on artificially intelligent systems that give advice. ‘Advice’ is to be taken broadly here: beyond recommending what to buy, see or hear next, it can for instance also refer to how search engines prioritize search results for specific users.

Explaining the unexplained


Most artificially intelligent systems are notorious for their black-box nature: they tend to make predictions based on previous data, but won’t reveal how they arrived at their conclusions. Where a person might say ‘I recommend you look into cat supplies, because you searched the internet for pet shelters and cat gifs all of last week’, an algorithm’s reasoning can be much more sophisticated – and entirely opaque.
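To make that contrast concrete, here is a purely illustrative Python sketch of the kind of human-readable explanation a person (or a very simple system) could give; the function, inputs, and wording are hypothetical and are not taken from Prof. Tintarev’s work:

```python
# Purely illustrative: a toy, rule-based explanation in the spirit of the
# "cat supplies" example above; real systems reason over far richer signals.
from collections import Counter

def explain_recommendation(recent_searches, recommended_category):
    """Return a human-readable reason based on the user's recent searches."""
    topic_counts = Counter(recent_searches)
    top_topics = [topic for topic, _ in topic_counts.most_common(2)]
    return (f"I recommend you look into {recommended_category}, "
            f"because you recently searched for {' and '.join(top_topics)}.")

print(explain_recommendation(
    ["pet shelters", "cat gifs", "cat gifs"], "cat supplies"))
# -> I recommend you look into cat supplies,
#    because you recently searched for cat gifs and pet shelters.
```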

That lack of transparency comes at a price.

What if an algorithm is biased, but used for important things like your news consumption – thereby informing your opinions about disputed topics like face masks and vaccinations? And what if a computer’s biases only amplify yours, but you don’t know that it’s happening?

The field of Explainable Artificial Intelligence devises methods to generate insights into why an AI system recommends or takes particular actions. Explanations may for instance be presented in natural language or through intelligent interactive interfaces. Such explanations are widely considered a key requirement for the responsible use of artificial intelligence in society.

Recommender systems

Back to recommender systems. Since they pop up in all sorts of settings, working on them takes on varied contexts as well. “I do a lot of work on recommender systems,” says Prof. Tintarev, “for instance in the music domain. Maybe things like the weather, your mood, and activity level change the kind of music you enjoy. The number of interactions and explanations that you could need in these settings may also differ. You can adapt the recommender system and the explanations it generates to these kinds of things.”
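As a rough illustration of what such context-aware adaptation could look like, here is a minimal Python sketch; the context signals, rules, and scores are invented for the example and do not describe an actual system:

```python
# Toy sketch of context-aware recommendation: relevance scores are re-weighted
# by hypothetical context signals (weather, mood). Not an actual production model.
def rescore(base_scores, context):
    """Adjust per-track scores with simple, hand-written context rules."""
    adjusted = dict(base_scores)
    for track, score in base_scores.items():
        if context.get("weather") == "rainy" and "acoustic" in track:
            adjusted[track] = score * 1.2   # boost calmer music on rainy days
        if context.get("mood") == "energetic" and "dance" in track:
            adjusted[track] = score * 1.3   # boost upbeat music for an energetic mood
    return sorted(adjusted, key=adjusted.get, reverse=True)

print(rescore({"acoustic_ballad": 0.6, "dance_anthem": 0.7},
              {"weather": "rainy", "mood": "energetic"}))
# -> ['dance_anthem', 'acoustic_ballad']
```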

“Right now, my team is working on explaining automated group recommendations. These will often require a compromise. If someone is not getting their first choice, how can we use explanations to keep the group happy and smooth those rough edges?”
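A toy sketch of such a group recommendation, assuming a simple average-rating strategy and hypothetical group members, might look as follows; the explanation text merely illustrates how a compromise could be communicated:

```python
# Minimal sketch of a group recommendation plus an explanation of the
# compromise, using an average ("everyone counts equally") strategy.
# Names, ratings, and wording are hypothetical.
def recommend_for_group(ratings):
    """ratings: {item: {member: score from 1 to 5}}.
    Pick the item with the best average and explain the trade-off."""
    averages = {item: sum(scores.values()) / len(scores)
                for item, scores in ratings.items()}
    choice = max(averages, key=averages.get)
    unhappy = [member for member, score in ratings[choice].items() if score <= 2]
    explanation = (f"We suggest '{choice}' because it has the highest average "
                   f"rating ({averages[choice]:.1f}) across the group.")
    if unhappy:
        explanation += (f" {', '.join(unhappy)} rated it lower, so their top "
                        f"pick could come first next time.")
    return choice, explanation

ratings = {"jazz_playlist": {"Ana": 5, "Ben": 2, "Chloe": 4},
           "rock_playlist": {"Ana": 3, "Ben": 4, "Chloe": 3}}
print(recommend_for_group(ratings)[1])
# -> We suggest 'jazz_playlist' because it has the highest average rating (3.7)
#    across the group. Ben rated it lower, so their top pick could come first
#    next time.
```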

Opening Pandora’s box


But while Prof. Tintarev initially mentions music recommendations, smoothing edges through explanations can also serve important societal goals. Recommender systems have come under scrutiny for the risk of promoting so-called ‘filter bubbles’: online spaces where users are served content that aligns with their viewpoints, whether through YouTube, Facebook, or a different platform.

Prof. Tintarev: “I’m concerned about the fact that we are becoming more polarized. That we aren’t exposed to viewpoints that differ from our own – even when we want to be.”

"The goal is not necessarily to agree with those other viewpoints, but to be aware that they exist. How do we create interaction in a way that helps people make informed decisions? Many of my current projects, also in collaboration with industry, examine online information and diversity of viewpoints. In a project co-funded by IBM, we are for example looking into search results. Does it matter which results are in the list? Or how they are ordered and displayed? How does this influence user behavior and attitudes?” The end goal, she stresses, is a broad awareness of their information consumption.

Not your average computer scientist

“My vision of Explainable Artificial Intelligence is one that is human-understandable”, Prof. Tintarev concludes. “We don’t only need to be able to automatically generate explanations; we also need to see when they help people make better-informed decisions. That requires empirical work, like evaluations with users. It also requires understanding the circumstances: when are explanations needed and useful? What do we need to change, and how should we adapt the explanations?”

“My take on things is much more user-centered than what you see in a lot of computer science departments. It’s half psychology, half computer science.” Needless to say, we’re excited to have Prof. Tintarev on our team.
