Who Decides What Stays Online? Valentina Golunova on Free Speech and Algorithms
How does the use of algorithms for the detection of illegal or harmful content on online platforms affect the freedom of expression of EU citizens? This burning question occupied Valentina Golunova during her time as a PhD candidate. In 2024, she successfully defended her thesis and obtained her PhD at the Faculty of Law of Maastricht University.

Did she always aspire to a career in academia? When she was younger, Valentina had other options in mind: ‘As a kid I wanted to become a vet or a professional musician, because I like animals and play the violin,’ she explains. ‘But I have also always liked learning foreign languages, and I was interested in politics and the work of the United Nations, so I started a bachelor’s programme in international relations in Saint Petersburg.’ Valentina soon realised, however, that she wanted to contribute to protecting human rights and international peace in a different way than through negotiation. She wanted to work on the legal framework itself: to structure it better and to ensure the justice and oversight that would help maintain peace in the long term.
Valentina decided to switch to law school for her bachelor’s degree. There, her academic ambition took definite shape during her participation in the Philip C. Jessup International Law Moot Court Competition. Although still a first-year student, she eagerly took on the challenge of preparing written memorials and delivering oral pleadings. ‘Since I was still a first-year student when I applied for Jessup, I was told to read a 600-page book over the summer to make the team,’ she recalls. ‘Somehow this prospect did not really scare me. I actually felt excited about reading that thing over the summer.’ That enthusiasm marked a turning point. ‘That’s when I first thought: oh, perhaps it wouldn’t be such a bad idea to pursue a career in academia.’ She then moved to the Netherlands to obtain her master’s in International and European Law at Tilburg University, where she developed a strong interest in law and technology.
From curiosity to PhD thesis
Growing up in Siberia, Valentina was no stranger to the transformative power of the internet. ‘All the information, all the connections with the outside world I had came from online,’ she says. ‘I have also always been curious about technology and how it shapes society. When I was in kindergarten, we had no landline phone and no computer at home. By the time I went to high school, I had a smartphone and a laptop. All of this happened within less than ten years.’ Without these developments, especially the internet, Valentina says she would have been far less confident about the opportunities available to her in life. ‘The internet, and reading about other people’s experiences, gave me the possibility to broaden my horizons and to learn about the possibility of moving and entering a prestigious university in Saint Petersburg,’ she explains. ‘But then of course, there’s also a dark side to the internet and all the things that come with it. I wanted to unpack it and understand how we can reap the benefits of the internet while mitigating some of the concerns that come with it.’
For her PhD thesis at Maastricht University, Valentina delved into the issues of content moderation and freedom of expression. Before the COVID-19 pandemic, humans still took the lead in moderating content. Day in, day out, people had to look at problematic, harmful posts and decide very quickly whether they could stay online or should be removed. When the lockdowns came, human moderators could no longer do their job: reviewing sensitive content from home raised privacy concerns. From that point on, most content was moderated by algorithms. On top of that, the EU began pushing platforms to scale up their use of AI, to keep the internet a safe environment while also protecting freedom of expression online. But is it even possible to achieve both aims?

Triggered, flagged, wrongfully removed
‘I was curious about the consequences of this shift. After all, algorithms can’t be compared to humans. They lack the contextual awareness and empathy that humans rely on to make the right decisions about which information and comments should be allowed or removed,’ Valentina says. ‘One of the consequences is that freedom of expression online is affected when content is wrongfully removed by algorithms.’ She offers some illustrative examples. ‘There was a case where someone posted a photo of onions on Facebook, and the platform removed it because the algorithm flagged it as nudity, apparently interpreting the onions as exposed buttocks. Or a YouTube chess channel that was taken down because the algorithm thought that references to black and white pieces were about racial conflict.’ These may sound amusing, Valentina admits, but the consequences aren’t always so harmless.
Indeed, the stakes are much higher in conflict zones or politically sensitive contexts. Valentina highlights the case of the Syrian civil war, where human rights activists uploaded visual evidence of war crimes to social media platforms, hoping to preserve it for future international criminal proceedings. ‘The content often depicted violence, which triggered automatic removals by the platform’s algorithms. But the purpose wasn’t to glorify violence, it was to document atrocities and hold perpetrators accountable. Those takedowns may have jeopardised crucial evidence.’
But algorithms are not only triggered by visual content. The use of certain forms of language can also lead to removal. ‘That is a problem, for example, for African American dialects,’ Valentina explains. ‘They include words that in an ordinary context would be considered unacceptable swear words, but that certain communities use extensively in their everyday speech when they talk to one another online.’ Similarly, political speech is often formulated in an inflammatory or provocative way, which algorithms can misinterpret as incitement to violence or offence. ‘But it is actually possible to use this terminology and this language to condemn authoritarian practices and promote civil liberties and democracy,’ she says. ‘Of course, algorithms cannot really distinguish between those purposes.’ Valentina also highlights how women and LGBTQ+ individuals are affected. ‘Women engage in counter-speech when they’re confronted with abuse or harassment online. They use controversial language because they’re frustrated and want to give a strong response to make it stop. And then their content ends up being removed. The same happens with LGBTQ+ content, especially images and videos. There are lots of empirical studies on Instagram showing that posts by members of the LGBTQ+ community are removed because algorithms are biased against sexuality and any kind of unconventional way of presenting bodies.’
DSA difficulties
These risks of wrongful removal also highlight why legislation like the Digital Services Act (DSA) matters. ‘The DSA is an algorithm-neutral regulation,’ Valentina explains. ‘It doesn’t explicitly promote the use of algorithms in the way that, for example, sector-specific legislation on copyright or terrorism does.’ At the same time, the DSA introduces important mechanisms to protect fundamental rights, including freedom of expression. For the first time, all platforms, regardless of their size, are required to enforce their terms and conditions with due regard to fundamental rights. In theory, this also means that when platforms use algorithms to moderate content, those algorithms should respect these basic rights. In practice, however, the DSA faces several challenges. ‘Many provisions are still open to interpretation by the platforms themselves,’ says Valentina. ‘The initial risk assessment reports published under the DSA were very vague and superficial, offering little insight into how platforms plan to resolve tensions between algorithmic moderation and freedom of expression.’ Enforcement is another concern: while the European Commission oversees the very large platforms, national regulators are responsible for the others. In Poland, however, a Digital Services Coordinator has still not been appointed. This shows how difficult it is to make EU-level rules work in practice.
A story still unfolding
Looking at where we are now, it’s clear the landscape has shifted dramatically since Valentina started her research. ‘Five years ago, platforms promised us a new era of algorithmic content moderation,’ she recalls. ‘But today, many of those efforts are being scaled back. They’re dismantling the systems they put in place, partly due to political shifts and growing pressure to prioritise free speech absolutism.’ For example, Facebook has reportedly disabled its disinformation detection algorithms in the US, a development that, alongside others, signals more widespread backsliding on moderation efforts. Paradoxically, this rollback doesn’t solve the problem. ‘Just removing algorithms won’t help protect freedom of expression,’ Valentina emphasises. Without thoughtful alternatives, we risk ending up in a space with fewer safeguards, not more. What’s needed is a genuine rethinking of the relationship between algorithms and human oversight. That conversation, unfortunately, is still not really happening. ‘So if the content moderation backsliding continues, there won’t be any guardrails anymore, amplifying the chilling effect on the voices of marginalised groups.’ It’s a story still unfolding, and, as Valentina puts it, ‘there’s a lot of work to do.’