AI's moral architects: neither demi-gods nor code monkeys

Who’s to blame if AI goes wrong? And who’s responsible for making sure it doesn’t have a negative impact in the first place? In her PhD thesis, "A Showing of Hands: Making Visible the Ethical Agency of AI Developers", Tricia Griffin looks past the clichés at the people behind the technology. She argues that we should treat them as professionals who make moral decisions with their code.

So, once the machines rule over us with a literal iron fist, whose fault will it have been? “Firstly, I would go easy on the hype. The fully sentient robot apocalypse is not nigh. We would all do well to remember that there is a big financial incentive for tech CEOs to talk about Artificial General Intelligence like it’s around the corner – if only they had another round of funding,” explains Tricia Griffin, fresh off defending her PhD thesis on the ethical agency of AI developers. “But engineers who have been working in robotics for a long time agree we’re nowhere near close, and when it comes to whose fault it all is, we need to develop a more sophisticated view of things.”

Responsibility and accountability in AI

Griffin distinguishes between those who are responsible (i.e. legally liable and answerable to democratic oversight) and the much larger group of those who should be accountable. For a number of practical reasons, she agrees that legal responsibility ought to sit with the deploying companies or CEOs. However, Griffin argues that since AI developers are moral agents, they should be expected to account for the choices they make.

“We should not be too cynical here,” says Griffin, who has interviewed more than 40 high-level AI developers for her PhD. “We have to distinguish the cliché of a group of powerful, arrogant CEOs from those doing the actual programming. While the latter do compartmentalise, in my experience, they realise that they have ethical agency and they face dilemmas around that.”

Tricia Griffin

Tricia Griffin is a lecturer in Global Health, Biomedical Ethics, and AI Ethics at the Faculty of Health, Medicine & Life Sciences. She completed her PhD in the Faculty of Science and Engineering. Learn more and watch her defence.

Do a single developer’s morals matter?

Here we run up against the problem of many hands, or the difficulty in placing blame in settings where multiple individuals contribute to an outcome. This is inevitably the case in AI development, where potentially hundreds of practitioners contribute modules to a product. That is not to absolve the individual: “It is always individual developers who execute the work. However, there is no real accountability for the choices they make, and since society doesn’t take them seriously as moral actors, they don’t have to take themselves seriously either.” 

Since the project of AI and automation is predicated on surveillance and massive data gathering, developers are in an inherently manipulative relationship with the public, she argues. And with the amount of money in play, the industry is full of perverse incentives that encourage manipulation in the service of maximising profit. “It’s in companies’ interest to treat developers as ‘code monkeys’, replaceable cogs that do the job without thinking about the bigger picture or asking ethical questions that might eat into profit margins or the race to gather data.”

Every coder makes difficult ethical choices

Her research left Griffin full of admiration for developers’ abilities, but also with sympathy for the difficult dilemmas they have to face, often on their own and ill-prepared. “Developers can either quit or resort to malicious compliance, essentially doing a bad job or claiming an unethical request isn’t technically feasible. Of course, they can also become whistleblowers, but that often means forfeiting any future career, and with that a lot of money and prestige.”

Griffin recalls the example of a developer who had to balance fairness and safety and tried to aim for a compromise without understanding the implications of the trade-off. “Many of the practitioners I interviewed were aware of the ethical dimensions of what they were being asked to do but didn’t feel they had the tools, or access to a strong community of support, to make good decisions. I think academia has to do a better job here; ethics courses are often tacked on to curricula and taught in too theoretical a fashion. At the same time, we need work environments that encourage ethics conversations rather than keeping developers in the dark.”

Is the public defenceless against AI companies?

This raises the bigger societal question of why AI developers don’t fall into the same category as heavily regulated and codified professions such as lawyers and physicians, given that their clients – an increasingly defenceless public – are just as dependent on their obscure craft. “Developers should be aware of how vulnerable the people whose data they are harvesting are. And how vulnerable the people are who their systems are deployed on. It’s a grave imbalance of power.”

Griffin cites the childcare benefits scandal that brought down Mark Rutte’s last government in the Netherlands. “You could have your kid taken away because a judge relied on what an algorithm said instead of the actual evidence. But someone programmed that algorithm and set the parameters such that single mothers and people with foreign-sounding names were deemed more likely to be fraudsters. And we never even considered including the developers in our conversations about this scandal.”

Are moral developers powerless against the system?

While in high demand, AI developers as a group have little political power. Talking about unionising feels almost frivolous for a profession that already enjoys the traditional fruits of collective bargaining: excellent wages, benefits, and working conditions. Organising around the ethical implications of their labour, however, is a new frontier. “In the absence of political organisation and collective bargaining, it becomes ethics shopping for companies – in the end, pretty much everyone has a price – and the plausible deniability of only having contributed a small piece of the system makes it even easier to just go along.”

Once AI developers understand themselves as moral agents in a sense that transcends their role at work, they still need a community of peers who can support each other. “At conferences, when developers present their models, it’s all about technical scrutiny – the ethical aspect carries no prestige in the profession. Developers are using their technical imagination to build these systems; they can also use their ethical imagination to consider potential nefarious consequences of their models and how they can be mitigated.”

How to get good at moral coding?

Griffin also thinks students of computing should learn to reflect and grow together when it comes to practical ethics. “Developers need to understand that ethics isn’t just common sense. To become a virtuoso at teasing right and wrong apart, you need to put in the work, like with everything else. If you decide you want to specialise in fairness, you quickly learn that there are half a dozen definitions that can’t all be mathematically satisfied at the same time. Learning how to be an expert in fairness requires a community that cares about getting this right.”
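That incompatibility is easy to see even in a toy example. The Python sketch below is not drawn from Griffin’s research; the groups, numbers and decision rule are invented purely to illustrate how a predictor that satisfies one common fairness definition (equal opportunity, i.e. equal true-positive rates) can still violate another (demographic parity, i.e. equal approval rates) when the two groups have different base rates.

```python
# Toy illustration only: invented groups, numbers and decision rule.
# Shows that equal opportunity and demographic parity can conflict
# whenever base rates differ between groups.

def rates(y_true, y_pred):
    """Return (approval rate, true-positive rate) for one group."""
    approval_rate = sum(y_pred) / len(y_pred)
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    true_positive_rate = sum(p for _, p in positives) / len(positives)
    return approval_rate, true_positive_rate

# Hypothetical outcomes: 1 = qualified / approved, 0 = otherwise.
# Group A has a higher base rate of qualified applicants than group B.
group_a_true = [1, 1, 1, 1, 0, 0]   # 4 of 6 qualified
group_b_true = [1, 1, 0, 0, 0, 0]   # 2 of 6 qualified

# A predictor that approves exactly the qualified people in each group:
group_a_pred = [1, 1, 1, 1, 0, 0]
group_b_pred = [1, 1, 0, 0, 0, 0]

sel_a, tpr_a = rates(group_a_true, group_a_pred)
sel_b, tpr_b = rates(group_b_true, group_b_pred)

print(f"Equal opportunity gap (TPR): {abs(tpr_a - tpr_b):.2f}")  # 0.00 -> satisfied
print(f"Demographic parity gap:      {abs(sel_a - sel_b):.2f}")  # 0.33 -> violated
```

Running the sketch prints a zero equal-opportunity gap but a 0.33 demographic-parity gap: closing the second gap would mean either approving unqualified applicants in one group or rejecting qualified ones in the other, which is exactly the kind of trade-off Griffin says developers are asked to resolve.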

Overall, Griffin does see some positive changes happening. “With AI now ubiquitous, people are becoming more aware of issues around privacy, and they are starting to explore ways to push back. I think you can be very enthusiastic about this technology and still be against the gross, dehumanising, capitalistic extraction of our data. But it will take a movement, and AI developers have to be a big part of that.”

Text: Florian Raith
