Social media sanctions – the new procedural justice?


One view on social media communication is that platforms should remove content deemed inappropriate or disturbing, suspend users who repeatedly violate the Community Guidelines, and do so in a consistent and coherent manner. A contrasting view is that users should be able to share whatever they want, with platforms acting solely as transmission agents that neither assess nor restrict content. But where exactly do platforms stand on content moderation, and what do their sanctions reveal about it?

The Importance of Community Guidelines
First, to understand why a particular behaviour on social media has to be sanctioned at all, and on which grounds, it might be helpful to take a step back and look at the importance of community guidelines and content moderation in general. But what are community guidelines, and what does content moderation look like? Broadly speaking, community guidelines are the rules set by individual social media outlets to manage the content and behaviour their users display on them. Every major social media platform, such as Twitter, YouTube, TikTok, and Instagram, has them. This is especially vital given new and potentially threatening online phenomena such as trolling, fake news, online scams and hate speech. Due to an increasing societal call for action regarding social media outlets and their responsibility towards both their users and society as a whole, many outlets have to upgrade their community guidelines constantly to adapt them to current needs. A prominent example of such a recent upgrade is, of course, Twitter’s Election Integrity Policy, enacted before the 2020 US federal election in order to prevent the spread of misinformation aimed at distorting the democratic process. The ongoing COVID-19 pandemic has also boosted the need for updated community guidelines, since conspiracy theorists rely heavily on social media to spread potentially dangerous misinformation about the disease and the current vaccinations. YouTube, for example, integrated an entirely new section on COVID-19 information into its community guidelines.

Comparing the different social media outlets’ community guidelines, one can see that they tend to categorise prohibited online behaviour using a similar structure. The standard categories, almost universally present, include spam and deceptive practices, sensitive content, violent or dangerous content, and regulated goods and illegal activities.

Still, the question is legitimate: why do we need community guidelines in the first place? Use your imagination and think of social media platforms as actual communities of people (the users). The community guidelines serve as the law that allows a peaceful and fruitful cohabitation. In a system based on a functioning symbiosis between coherent community guidelines and effective content moderation, every party benefits. Social media outlets are increasingly forced to implement good content moderation policies in order to protect their reputation. Just think of the backlash against Facebook’s lax content moderation during the 2016 US presidential election and how the company was repeatedly criticised for allowing the spread of falsehoods. Businesses, too, should look out for proper content moderation on their own online presences across the various social media outlets; otherwise, a company’s reputation could suffer as well, for example, if people only find negative or even disturbing comments under certain posts. Lastly, and arguably most of all, users profit from such measures by being able to navigate a friendlier and more secure online environment.

Different Sanctions on Social Media
As mentioned above, social networks rely on Terms of Service and Community Guidelines to maintain a safe and open environment. To ensure compliance with their terms and maintain social order, platforms have equipped themselves with certain coercive instruments. These allow platforms to actively interfere in interactions between users for preventive or punitive purposes: they can restrict, amend or remove content, either for a certain period or permanently, to regulate behaviour. What constitutes desirable or undesirable behaviour is, however, left to the platforms to decide. In general, platforms will seek to foster a sense of respect, safety and trust, but there is no common approach to achieving this objective, and punitive measures may vary from one platform to another. This is because social network platforms offer fundamentally different services and have divergent responsibilities, and their enforcement actions vary accordingly.

In its Terms of Use, Instagram states that enforcement actions may be taken whenever content violates the Terms of Use and Community Guidelines or if, when assessing the content, the platform considers that it poses a risk to the community and its services or could result in legal exposure for the company. Over the last couple of years, Instagram’s unclear and inconsistently enforced policies have been the subject of growing frustration, and not without reason. The platform’s general enforcement policies are somewhat ambiguous and make it difficult to determine the spectrum of sanctions that can be applied in case of non-compliance. From the information available on its website, Instagram can either remove or block content or information shared on its platform, or refuse to provide the service to the user in part or in whole. As Instagram is a subsidiary of Facebook Inc., a decision to terminate or disable a user’s access extends to all Facebook services and companies.

According to its recently updated terms, Instagram can also disable accounts with a certain percentage of violating content or remove accounts that accumulate a number of violations within a given window of time. However, it is unclear how Instagram assesses a violation and what factors it takes into consideration when deciding which enforcement measure is most appropriate. Numerous complaints have compelled Instagram to introduce a new notification process through which it assists users and helps them understand whether their account is at risk of being disabled. In some cases where the platform removes content, it will also notify the user and explain the procedure for requesting a review. Such a request will not be permitted if the user has seriously or repeatedly breached the terms, or if a review would expose Instagram or a third party to legal liability, harm the community or compromise the integrity or functioning of its services, systems or products.

TikTok, on the other hand, has far more innovative and intelligible enforcement mechanisms. The platform can first remove content that violates its Community Guidelines. For content that could be considered upsetting or that features shocking material, TikTok may reduce visibility or discoverability, including by redirecting search results or limiting distribution. The individuals concerned will be notified of these decisions and may appeal if they believe no violation has occurred. TikTok can also suspend or ban accounts involved in severe or repeated violations and report the accounts to the relevant legal authorities when warranted. Similarly to Instagram, TikTok announced in October last year that it would put into practice a new notification system to provide users with more clarity on content removals and thereby enhance transparency and reduce misunderstandings about what content is allowed on the platform.

In contrast with other platforms, Twitch provides a clear and straightforward overview of its enforcement actions, although in essence the mechanisms are similar to those of other social networks. The live-streaming platform can issue enforcement actions for any violation of its Terms of Service and Community Guidelines, which are regularly updated as the Twitch community and service evolve. These guidelines require respect for all applicable national laws and prohibit acts and threats of violence, hateful conduct, sexual content and self-destructive behaviour. A number of factors are considered when assessing violations, including intent and context, the potential harm to the community and legal obligations. Depending on the nature and seriousness of the breach, Twitch may take different enforcement measures, ranging from a warning to a temporary suspension and, for more serious offences, an indefinite suspension.

A warning is a courtesy notice issued for minor violations. In certain instances, Twitch may also remove the content associated with the violation and implement a probationary period during which the activity of the user concerned is monitored to ensure no further offence is committed. Repeating a violation the user has already been warned for, or committing a similar violation, can result in a suspension. A suspension means that the user can no longer access or use Twitch services, including watching streams, broadcasting or using the chat function. Compared to other social networks, Twitch enforces two forms of suspension: a temporary suspension and an indefinite suspension. The first ranges from one to 30 days; once the suspension is complete, the user can use the services once again, but a record of the violations is kept, and multiple temporary suspensions can lead to an indefinite suspension. For the most serious offences, Twitch can also immediately and indefinitely suspend an account with no opportunity to appeal. Although radical, bans are not uncommon on the platform.

A recent and rather controversial case is that of the gaming streamer Beahm. Also known under the online alias Dr. Disrespect, Beahm was first temporarily suspended from Twitch in 2019 after he entered public bathrooms during a livestream at the Electronic Entertainment Expo, which violated Twitch’s privacy rules and Californian privacy laws. In June 2020, Beahm was permanently banned for reasons that remain speculative. Unlike other social networks, however, Twitch may take measures for abuses that occur on other platforms: any hateful or harassing conduct directed towards Twitch users can contribute to a suspension on the streaming site.

Criticism on the Measures Taken
Naturally, sanctioning social media users always leads to intensive debate and criticism from all sides of the aisle. The most striking criticism nowadays is, of course, that sanctioning social media users, for example by deplatforming them, might infringe upon their freedom of speech. This is especially controversial when it comes to sanctioning political world leaders, such as Donald Trump or Jair Bolsonaro. Although the topic is fascinating and subject to rapid development, this blog post will not explore it further here, as our blog has already discussed the subject in detail.

As with political leaders, ordinary users who espouse extreme political views, for example conspiracy theories or far-right ideologies, are repeatedly sanctioned for violating community guidelines. Ken Jebsen, for instance, a former German television host turned conspiracy theorist, was recently permanently banned from YouTube for spreading conspiracy theories and falsehoods around the COVID-19 pandemic, leading to a public outcry from the far right. Although these bans are often welcomed by most users and are a logical consequence of repeatedly uploading controversial content, some researchers argue that banning people with extreme political views only furthers their radicalisation, as they are no longer able to enter into proper discussions and thus receive no objections to their opinions.

In general, as certain influencers and streamers have a large fan base, a public backlash for sanctioning them is only a natural consequence. Especially on Twitch, where streamers are banned temporarily quite frequently, such a backlash is more than typical. There is even a Twitter profile that automatically updates its followers whenever a streamer gets banned. There are, of course, more subtle sanctions than an ordinary social media ban, such as shadow-banning on Instagram, around which there is often a lot of confusion and increasingly more criticism.

Another criticism of the sanctions imposed by the platforms is their lack of legitimacy. Social networks have complete control over their platforms and can, if content is deemed inappropriate or found to contain hate speech or nudity, simply remove the content or restrict access to their sites. This consequently restricts the right to free speech enshrined in most constitutional instruments, which entitles individuals to free expression without intrusion. These competences are similar to those of state authorities, which assess a violation and impose the appropriate consequence. While the latest events have demonstrated that many platforms have accepted the view that they should take responsibility for content, monitor it more closely and impose sanctions when users fail to comply with the Community Guidelines, some question whether private companies lack legitimacy for engaging in rulemaking and enforcement and whether there should be a limit to the discretion of these private companies when adopting enforcement actions.

Most legal systems control their citizens in the sense of exercising powers and maintaining law and order, and they trace the legitimacy of these powers to their sovereignty. They are structured around a set of affirmative rights and duties, which often require a high level of scrutiny. Moreover, in a conflict between rights or interests, each is balanced so that the enjoyment of one right does not result in the deprivation of another. In a legal setting, this balance is defined by laws or by the interpretation of these laws by judges and may, in case of dissent, be subject to appeal. Yet this is not the case for social media platforms. The power of these platforms stems merely from the control they exercise over user accounts and content, and their rules and sanctions are not adopted following a debate between elected representatives who serve the interest of their constituents but are defined through the standards and policies that social media companies create. Platforms can thus not claim democratic legitimacy and should not be able to enforce coercive actions that may restrict the enjoyment of fundamental freedoms. Nevertheless, while applying notions such as legitimacy to social media platforms may seem self-evident, private entities are not state authorities and social media users are not citizens.

It is clear that the various social media outlets’ different paths are not all strictly coherent and are, at times, misleading if not potentially dangerous. However, it also holds true that, so far, no utopian path has been found that would be above reproach. We have seen that one of the reasons for this is that every social media outlet has its own specific focus to deal with, as some rely more on video and others on photo content. Nevertheless, this should not discourage social media outlets from living up to their responsibilities and striving for the safest online environment they can provide to their users – even if it is fair to predict that the criticism will not disappear entirely.

 Written by Lucas Hieronimus and Florian Bachmann - more blogs on Law Blogs Maastricht