Algorithm and Blues
From a POLITICO summit to talk shows, from middle schools to universities, from popular science books to peer-reviewed journals: UM’s Katleen Gabriels tries to inspire conversations about the moral issues surrounding algorithms – crucially, not just among academics.
Why should we care how algorithms work and how they make decisions? “If Google and Facebook are the window through which you see the world, then you need to at least have some idea of how they work, how they curate information. The information you receive is different from that of someone you might disagree with. It is crucial everyone understands that when they’re using social media – and when they’re trying to have conversations.”
In marketing, ‘ethical’ is commonly used as a vague, catch-all attribute that allows you to feel good about liking a brand or product: what you consume there is somehow good, and so are you. In philosophy, the word has a specific meaning: “We make moral decisions every day: what is right and what isn’t. The systematic study of morals is called ethics.”
Moral gossip and crooked wood
Gossip nicely illustrates how moral judgements occupy our thoughts. “He behaved terribly. Did you see what she was wearing? Thus we show that there are rules, that we know them, and that transgressing them is reason for concern.” But, Gabriels points out, morals are messy. Even classics such as ‘Thou shalt not kill’ depend on context – be it self-defence or wearing a uniform. Our judgements might differ – morally and legally.
“Aus so krummem Holze, als woraus der Mensch gemacht ist, kann nichts ganz Gerades gezimmert werden” – out of timber so crooked as that from which man is made, nothing entirely straight can be built. Thus Gabriels quotes rudely named philosopher Immanuel Kant in her Studium Generale lecture on machines and morals. In other words: how can we inconsistent, fallible beings teach a machine to make better moral decisions than we could? “I am teaching data scientists and designers who believe that we can formalise ethics into a decision tree – but it’s extremely difficult.”
Digital technology: a double-edged sword
Twittering presidents, social media-fuelled uprisings, fake news, filter bubbles, personalised healthcare... In this series, UM scientists share their insights into the social consequences of digital technologies.
For whose sake?
While Gabriels advocates a critical, questioning attitude, she’s by no means a Luddite – on the contrary. “Algorithms, which can process vast amounts of data, already assist doctors with diagnosing breast cancer in some Belgian hospitals. One day this technology will be so accurate that it would be immoral not to use it. Again, medical students today should already be taught to think about these technologies.”
AI also plays a role in the health of a democracy, which brings us to another moral minefield. “The business model of social media is that we pay with personal data, i.e. by being exposed to targeted advertisements. That means capturing our attention through clickbait – the more incendiary and outrageous, the better. Of course, we’ve always fallen for that but now it’s much easier to scale and target.”
Fake news makes fiscal sense
Fake news travels fast – fact-checking takes time. Democracy and public discourse seem to have become collateral damage on our watch. “It’s actually quite easy to get out of your filter bubble: just go offline, talk to people, read a serious newspaper that leans away from you politically – unfortunately, we’re often too lazy for that. In 2016, almost half of US citizens used Facebook primarily as a news medium – that has real-world consequences.”
Unfortunately, there’s no quick fix. “The EU is trying to hold tech giants more accountable; in the US, Facebook can now be held legally responsible, so they have a financial incentive to beat fake news – but, in deference to their shareholders, to do so in the most cost-effective way possible.” They recently banned right-wing conspiracy-peddlers QAnon – but also accidentally took out a Belgian satirical magazine.
Critical thinking is the best filter
“The magazine complained and Facebook apologised. It shows how much algorithms struggle with context… It’s unlikely that there’s a technological fix for this.” Which brings us back to education. “Academics need to enter public discussion more. They’ve been warning since the 70s and 80s about privacy issues and cybercrimes – but it only reached the mainstream some years ago with Edward Snowden and then the Cambridge Analytica scandal.”
And so Gabriels tries to build bridges between academia and the public: speaking at conferences, writing books and op-eds, appearing on stage and on TV. “There are only 24 hours in a day and currently I use most of my free time on all of these outreach activities… academia should reform to acknowledge these activities.” Gabriels is on a working group for UM’s Recognition and Reward vision, which aims to reward not only academic publications but also experts sharing their knowledge about big societal problems.
Katleen Gabriels is assistant professor of Philosophy at the Faculty of Arts and Social Sciences. She has written two books, Onlife. How digitalization shapes your life and Rules for Robots. Ethics and Artificial Intelligence. On 26 November, she will give a talk on AI and big data at the Dutch Science Gala.
Putting the tech in technocrat
At a recent AI Summit, organised by the international politics and policy news organisation POLITICO, Gabriels was on a strategic round table titled ‘Europe’s Digital Legacy and Competitiveness: Make or Break.’ “Researchers are a bit spoiled because in academia we have the luxury of studying things in great depth – but our conferences aren’t attended by journalists and politicians.”
“Some politicians really know what they’re talking about, like Belgium’s then Minister of Digital Agenda Philippe De Backer.” Unfortunately, a very Belgian change of government meant that De Backer ended his POLITICO talk demoted to digital enthusiast. “On the whole, though, very few of them have a science and technology background. They are being advised, which is very good – but it might make them vulnerable to people using the AI hype to sell their products – and might make them underestimate the need for regulation.”
Ethics by design
Gabriels cites the Dutch coronavirus tracing app as a positive example: “Citizens and ethicists were involved from the beginning. It made transparent the difficult trade-offs between individual and collective values, like privacy versus public health. These considerations are impossible to calculate in a value-neutral way. And to have a proper discussion, you need an awareness of how the technology works and how ethics works.”
“In its higher education budget, the Netherlands has allocated more money to technical universities, but the humanities are also crucial for AI. We have to start thinking about moral questions before we design and use those technologies. We need interdisciplinarity and diversity at the design stage of technology, but also in policy-making.”
Gabriels at the POLITICO AI Summit