ROBUST and reliable: UM part of 10-year AI research programme
Professor of Explainable Artificial Intelligence Nava Tintarev will be a co-investigator and chair for the integration of the humanities and social sciences in ROBUST, a consortium applying for an NWO grant with a total budget of €95 million (€25 million from NWO) to carry out long-term research into reliable artificial intelligence (AI).
“The project explores the potential of artificial intelligence to help tackle major societal problems,” explains Tintarev. “At the same time, it tries to take this research in the right direction. That means making sure AI is trustworthy. In line with the EU’s sustainable development goals, we aim to guarantee accuracy, reliability, repeatability, resilience and safety.”
ROBUST will be a collaboration of 21 knowledge institutes, 23 companies and 10 societal organisations. Created in response to a call by the Netherlands Organisation for Scientific Research (NWO) for 10-year consortium proposals, the programme is coordinated by the University of Amsterdam and will also receive funding from the AiNed National Growth Fund Investment Program.
AI: solutions, talent, impact
The plan is for the research to take place in 17 new ICAI (Innovation Center for Artificial Intelligence) labs, each involving different universities and companies, typically with about five PhD students and several more senior developers. “This is to make sure it doesn’t stop at the research stage but becomes some kind of product or usable concept down the line. This setup is one of the best examples of public-private partnerships that I know of.”
“Since it will run for ten years, ROBUST will attract and develop talent: young people who will make a difference down the line when it comes to trustworthy AI.” However, the programme goes beyond technical solutions, spanning everything from sustainable technology and renewable energy to personalised healthcare and balanced news provision: “This isn’t just technology for technology’s sake – we actively involve social scientists and the humanities to keep us on our toes a bit, to keep us thinking about the broader societal impact.”
Nava Tintarev was appointed Full Professor and Chair of Explainable Artificial Intelligence at UM’s Faculty of Science and Engineering (FSE) in October 2020. She is embedded in FSE’s Department of Data Science and Knowledge Engineering (DKE). She is also a Visiting Professor in the Software Technology department at TU Delft.
90s pop and other societal challenges
Tintarev, who will be ROBUST’s chair for the integration of humanities and social sciences, studied computer science alongside psychology during her undergraduate degree. “I was always thinking about how to make technology that is genuinely useful for people.” Her PhD research was on recommender algorithms of the kind used by services such as Netflix and Spotify. “They figure out your taste based on your past consumption; they’re reasonably sophisticated and can help you discover new content – but they can also put you in a filter bubble. So, if you’re really into 90s pop, it will offer you more 90s pop, which, in turn, might condition your taste.”
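The filter-bubble dynamic Tintarev describes can be sketched with a toy content-based recommender (a hypothetical illustration, not any real service’s algorithm; all track names and genres here are made up): if unseen items are scored purely by how well they match the user’s listening history, the system keeps surfacing more of the same.

```python
# Toy content-based recommender: scores unseen tracks purely by how often
# their genre appears in the user's listening history. Hypothetical data.
from collections import Counter

CATALOGUE = {
    "Wannabe": "90s pop",
    "Barbie Girl": "90s pop",
    "MMMBop": "90s pop",
    "Kind of Blue": "jazz",
    "Nevermind": "grunge",
}

def recommend(history, n=2):
    """Return the n unseen tracks best matching the history's genres."""
    genre_counts = Counter(CATALOGUE[track] for track in history)
    unseen = [t for t in CATALOGUE if t not in history]
    # Rank unseen tracks by how often their genre was already consumed;
    # genres the user has never played score zero and sink to the bottom.
    return sorted(unseen, key=lambda t: -genre_counts[CATALOGUE[t]])[:n]

# A 90s-pop history yields more 90s pop as the top recommendation.
print(recommend(["Wannabe", "Barbie Girl"]))
```

Because the score rewards only similarity to past consumption, nothing ever pushes the jazz or grunge tracks to the top: the bubble is self-reinforcing unless the designer explicitly adds diversity to the ranking.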
That is not to overplay the dangers of 90s pop, of course. But analogous dynamics in news-based decision-making in a democracy are worrisome. “That’s when it hit me: much as I love cool tech, I had to help solve the problem that I help cause.” Computational solutions to fair and balanced news provision aside, one’s assumptions before even going to the drawing board could have massive societal implications. “Which democratic theory do you start from? Should all voices be heard equally or should the majority opinion be the most prominent? Those are all things you have to think about beforehand.”
Show me the workings
Tintarev’s research revolves around explaining the decisions of AI systems. “The data these systems are trained on might not be correctly recorded, or the model might have been trained on data that was historically correct but is now outdated or inadvertently encodes biases. For example, the word ‘doctor’, when translated from one language to another, might change gender – that’s not necessarily malicious or explicit bias; it’s just that historically, there were more male doctors.”
She cites Amazon’s hiring algorithm as another example. “Because it was trained on mostly male successful candidates, it wasn’t identifying women as viable candidates. These systems are sensational at picking up patterns, like the fact that successful candidates tended to have a cluster of hobbies in common, but they can’t necessarily apply common sense to it and ‘realise’ that that’s a correlation based on other factors.”
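The mechanism behind such cases can be shown with a frequency-based toy model (a sketch with invented numbers, not Amazon’s actual system): if past successful hires skew male, a model that scores candidates by similarity to past hires will rank otherwise-identical women lower.

```python
# Toy illustration of bias inherited from training data (hypothetical
# numbers): score candidates by how often their attribute values
# appeared among past successful hires.
from collections import Counter

past_hires = (
    [{"gender": "m", "hobby": "chess"}] * 8   # historically mostly male
    + [{"gender": "f", "hobby": "chess"}] * 2
)

# Count how often each (attribute, value) pair occurs among past hires.
counts = Counter((k, v) for hire in past_hires for k, v in hire.items())

def score(candidate):
    # Higher score = more similar to past hires. Gender dominates the
    # ranking even though it is irrelevant to job performance.
    return sum(counts[(k, v)] for k, v in candidate.items())

print(score({"gender": "m", "hobby": "chess"}))  # 8 + 10 = 18
print(score({"gender": "f", "hobby": "chess"}))  # 2 + 10 = 12
```

The model has no notion of causation: it cannot tell that the gender imbalance is a historical artefact rather than a predictor of success, which is exactly the “pattern without common sense” problem Tintarev describes.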
Not a magical black box
Tintarev tries to provide decision makers with some sort of insight into what is actually happening. “People want more than a magical black box that spits out a result: this is the person to hire, this is the article to read, etc. What is this based on? Can I trust it?” With deep learning, things become even trickier. “If you’re just dumping all the data you have about jobseekers, the algorithm might identify that there’s an ideal shoe size for a successful candidate.”
“We need more transparency, paired with user control. If we understand how decisions come about, AI can become a decision partner. If it discloses its own relative limitations, we can trust or distrust it for the right reasons.” The contracts guaranteeing the accuracy, reliability, repeatability, resilience, and safety of AI systems that ROBUST tries to develop could be a big step towards harnessing the power and managing the challenge that is AI.