AI-AI Captain
AI will change the world – possibly in a very direct way. UNU-MERIT's Michal Natorski thinks the technology could inadvertently influence the decision-making processes of UN agencies.
“I understood that it’s a completely new area of international relations that isn’t researched at all,” remembers Michal Natorski, who had spent years studying international organisations’ handling of crises such as the Arab Spring and the invasion of Ukraine. Starting from an interest in uncertainty and urgency in international relations, he went on to study AI, in particular how it might be able to prevent or manage crises.
Natorski is looking into the interaction between the regulation and use of AI in international organisations, primarily the UN. “It’s partly about laws influencing the development of the technology, but mostly about the technology’s potential for changing policies.” His example is as mundane as it is plausible. “The UN runs on Microsoft. You already have AI plugins that compose responses to emails. We already have the technology for a scenario in which inboxes communicate with each other without human interference.”
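How easily such a loop could close is simple to sketch. Below is a minimal, purely illustrative Python script of two auto-reply inboxes answering each other; `draft_reply` is a hypothetical stand-in for an AI email plugin, not any real product’s API.

```python
# A minimal sketch of the scenario Natorski describes: two inboxes
# replying to each other with no human in the loop. `draft_reply` is a
# hypothetical placeholder; a real plugin would call a language model.

def draft_reply(message: str) -> str:
    # Stand-in for an AI-generated, context-aware reply.
    return f"Thank you for your message regarding '{message[:40]}'. Noted; we will respond accordingly."

def unattended_exchange(opening: str, rounds: int = 4) -> None:
    """Let two fully automated inboxes answer each other."""
    message = opening
    for turn in range(rounds):
        sender = "Inbox A" if turn % 2 == 0 else "Inbox B"
        message = draft_reply(message)
        print(f"{sender}: {message}")

unattended_exchange("Draft position paper on AI governance for review")
```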
Fully automatised policy-making
Similarly, given a large enough data set, AI can write speeches perfectly suited to the style and convictions of any given politician – but they will always be reiterations of what has been said before. AI detects patterns and generates the most typical speech on any given topic. While this is an excellent tool for communication, Natorski fears it will fall short when it comes to shifting policies – to solving problems caused by the status quo. “You don’t have data for the change that is happening now. If we rely too much on AI, we won’t be able to navigate novel scenarios. It might steer organisations to be less flexible and innovative.”
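The “most typical speech” point can be made concrete. A language model assigns probabilities to possible continuations and, under greedy decoding, always picks the most likely one – so rarer, more novel phrasings never surface. A toy Python sketch, with invented probabilities:

```python
# Toy illustration of why generated text gravitates toward the typical.
# The probabilities below are invented; a real model learns them from data.

next_word_probs = {
    "challenges": 0.40,     # the most common continuation in the training data
    "opportunities": 0.35,
    "uncertainties": 0.20,
    "ruptures": 0.05,       # the novel framing a policy shift might need
}

# Greedy decoding: always take the most probable word.
most_typical = max(next_word_probs, key=next_word_probs.get)
print(f"Our organisation faces unprecedented {most_typical}.")
# Always prints "challenges" - the rare option is never chosen.
```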
While fascinated by the technology, Natorski is not a disciple. “The potential is incredible, but there are clear limits. Israel has the most sophisticated technology in the world for predicting risk, and they failed to predict the 7 October terrorist attacks.” Worse, unwise deployment might create new risks. “The UNDP [United Nations Development Programme] ran a project using AI to detect the destruction of Ukrainian buildings during the Russian war. It was a fantastic idea, but all the information was open access and thus a great resource for the Russian military to see how accurate their missiles were. It also turned out to be illegal in Ukraine because of the war, and the project was eventually discontinued.”
From reality to code and back
All of these are matters of implementation, but AI also has inherent limitations. “The UN is turning to computer and data scientists who are experts in converting input into data, for example converting social media posts into numbers.” Natorski warns that it’s important not to anthropomorphise AI and instead to remember that it doesn’t have an idea of reality; it can only process data sets. “Take facial recognition: you have to understand that it’s not your face, in the sense that we think about it, which AI processes, but your face’s unique features converted into code.”
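What “converting social media posts into numbers” means in practice can be shown in a few lines. The sketch below uses TF-IDF, one standard vectorisation technique; the posts are invented, scikit-learn is assumed to be installed, and the UN’s actual pipelines may well differ.

```python
# Datafication in miniature: text becomes a matrix of numbers, and the
# matrix - not the posts themselves - is all the model ever "sees".

from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "food prices rising again in our district",
    "no electricity for three days now",
    "food aid distribution announced for friday",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(posts)   # one numeric vector per post

print(vectorizer.get_feature_names_out())  # the vocabulary the model works with
print(matrix.toarray().round(2))           # the numbers that stand in for the posts
```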
You can criticise the efficacy of datafication: can one really quantify the simmering resentment of a sizeable group who feel that expressing it would be socially sanctioned? At the same time, while hardly anyone bats an eyelid at sharing all their tagged and easily sortable thoughts and preferences with multinational corporations – imagine the Stasi staffed by gormless volunteers – not everyone is a data-gathering machine. “In the Philippines, the UNDP employed AI to estimate poverty levels, triangulating satellite images, geospatial data and Facebook marketing data. However, it was impossible to detect the social status of those without smartphones, internet connections and Facebook accounts. The justification was cost-efficiency, but many people fell through the net.” Again, AI can only achieve desirable results if enough of the relevant reality is converted into data.
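The Philippines example boils down to a coverage problem that a few lines can demonstrate. The figures and field names below are invented; the point is only that an average computed over digitally visible households says nothing about the invisible ones.

```python
# Sketch of the coverage gap: households without a digital footprint
# simply vanish from the estimate. All numbers are invented.

households = [
    {"poverty_score": 0.2, "digitally_visible": True},
    {"poverty_score": 0.3, "digitally_visible": True},
    {"poverty_score": 0.8, "digitally_visible": False},  # no phone, no Facebook
    {"poverty_score": 0.9, "digitally_visible": False},  # falls through the net
]

visible = [h for h in households if h["digitally_visible"]]

estimate = sum(h["poverty_score"] for h in visible) / len(visible)
actual = sum(h["poverty_score"] for h in households) / len(households)

print(f"Estimated average poverty score (visible only): {estimate:.2f}")  # 0.25
print(f"Actual average poverty score:                   {actual:.2f}")    # 0.55
```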
Geopolitically relevant
There are many other considerations around AI, such as its vast energy consumption and potential conflict around the raw materials required to produce semiconductors. Its growing geopolitical importance is evidenced by the so-called “chip war”, which has seen China prohibited from purchasing the most sophisticated technologies for microchip production, and the US and Europe building more internal capacity for production that had hitherto been outsourced, primarily to Asia.
Natorski is very sceptical about the promises of an AI utopia, primarily because we have been here before, decades ago. “Initially, the internet was hailed as an agora, a breakthrough for democracy. Now, very powerful actors use it as a tool for autocracy.” This example, he says, has served as a warning for policy-makers, who have caught on early to the inherent ambivalence of AI as a tool. “It’s like electricity: you can use it for missiles or reading lamps – but it’s the same general-purpose technology. Because of what happened with the internet, politicians quickly realised that they had to act.”
Urgent need for regulation
“In the worst case, we are all doomed – as many leading American companies try to convince us, arguing that they should be left to self-regulate generative AI. They try to shift attention from the current everyday use of AI in our lives to potential security threats. But I’m more interested in a scenario where we manage to regulate before deploying technology – or at least not too long after, to prevent the worst case.” AI, however, is a protean beast, and any regulation is necessarily running after the fact. “The EU’s AI Act is a first attempt at serious, comprehensive legislation imposing sanctions on companies breaching the rules. Note, though, that most of it was negotiated before OpenAI’s ChatGPT captured all the headlines. They had to make a lot of adaptations later in the legislative process to cover the technical developments.”
Overall, the technology evolves faster than our ability to think through its possible ramifications. “The technology is close to mature, but it still malfunctions. It also relies on using copyrighted content without any acknowledgement. 2023 was the year of OpenAI; 2024 might be the year of AI companies being sued for using other people’s intellectual property to train their models. Right now, there’s basically no regulation.” Natorski doesn’t think there will be a binding global AI treaty; instead, “we might have regional guidelines like the EU AI Act. If we do get global regulations, they will be for specific realms like health or self-driving cars, which will require much more specific rules for standardisation …”
Relevant and realistic
Natorski qualifies that, in 2021, UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence, a non-binding set of values and principles to guide future regulation. However, one can’t help but notice that points such as ‘do no harm’, privacy and a flourishing ecosystem sit uncomfortably with an incredibly energy-intensive technology whose affordances promise a golden age of automatised mass surveillance and drone warfare.
The UN is currently developing or using more than 200 AI tools. Researching their success rate is tricky: would events have played out the same without preventative action based on the prediction? There’s also the unsettling point that a military intervention to prevent a predicted bigger conflict essentially means going to war at the behest of AI. “We are experimenting to understand the limits of this technology. Some tools are dropped, but others become a part of policymaking, like the Food and Agriculture Organization’s Hunger Map or the automatised geospatial detection of infrastructure destruction after floods or earthquakes, as used by UN agencies.”
“Solutions need to fit the culture, and unfortunately, any biases we have in our data are our analogue biases multiplied by digital tools,” Natorski says. “AI tools need to be relevant for people. At the same time, you need a realistic understanding of what the technology can do and how it does it. But we don’t have this understanding precisely due to the nature of AI. It will never be perfect, but we really need to get this right, so we need to be able to scrutinise it.”
Text: Florian Raith