Minors and social media – how are the most vulnerable protected?

Could you estimate how much time of your day you spend on social media? The answer would most likely be something along the lines of ‘a lot’ or ‘I’m always connected, so I get notifications all the time anyway’, but an actual estimate is very difficult to give. For most people, however, it is indeed a great deal of time. Given how many hours we spend on various social media platforms, it cannot be denied that the content we are exposed to each day has an enormous impact on us. While for many people the good may outweigh the bad, others risk being severely harmed by their use of social media and the content they encounter there.

Teenagers’ mental health in particular can suffer from the use of social media, most likely because of the stage of development of a young person’s brain and its vulnerability. Nowadays, 95% of teenagers have a smartphone and are thereby almost continuously connected and kept up to date; it has become nearly impossible to differentiate between time spent on- and offline. Research suggests that the use of social media is correlated with depression and other mental health issues observed in teenagers. Unfortunately, just how severely social media can influence a young mind often only comes to light once it is already too late. One fatal example, which sparked a considerable public debate in the United Kingdom, was the suicide of 14-year-old Molly Russell. After she took her own life, her grieving family turned to the public, stating that they believed Instagram played a role in the tragic turn her life took. Her father advocated for a change to the content that can be viewed online and pleaded for the platforms’ responsibility to protect their users in situations similar to Molly’s. His advocacy also resulted in the United Kingdom’s government applying pressure to various social media companies to remove harmful content. In this regard, health secretary Matt Hancock indicated that a failure on behalf of the platforms to do so could lead to stricter regulation of platforms in the United Kingdom. Instagram’s response to the public outcry was to remove a large variety of content that could be connected to self-harm and to broaden the scope of the material it hides from its users, in the hope of offering better protection.

It is widely accepted that minors, in particular, have to be safeguarded from certain content that circulates online. How this protection of children is achieved through the community guidelines of various social media platforms will be outlined in the following sections.

Generally, what most, if not all, social media outlets have in common is that they specifically focus on the protection of minors. Here, however, we will only discuss Instagram, YouTube and TikTok, as these outlets arguably represent the main platforms for creative content produced by underage users.

Sadly, although it comes as no surprise, the various phenomena harming children online are not new. For this reason, social media and video-sharing platforms have, throughout the years, developed policies to protect children. As a first step, it is fair to say that all these platforms follow a similar structure regarding the protection of minors through their community guidelines, while of course adapting the rules to platform-specific needs.

Instagram follows the community guidelines set by its parent company Facebook and has added minor-specific requirements alongside them to offer additional protection. The platform has also implemented a ‘Tips for Parents’ section within its Privacy and Safety Center. Likewise, YouTube derives its community guidelines from its parent company Google; these principally follow the same structure as Instagram’s, while setting a few additional requirements relating to video content in particular. Similar to Instagram, YouTube has implemented a ‘Child Safety Policy’. These policies and tips aim at protecting viewers, especially minors, from any content that could be emotionally or physically harmful. To prevent harmful content from being posted on their platforms, both Instagram and YouTube prohibit certain kinds of content. YouTube, for example, prohibits the following content from being posted on its platform:

  •  Content sexualizing minors
  •  Content containing harmful or dangerous acts involving minors
  •  Content that may inflict emotional distress on minors
  •  Content that could be perceived as misleading family content
  •  Cyberbullying and harassment involving minors

Additionally, YouTube can restrict content to a certain age group, thereby excluding minors from viewing content that might be considered harmful for them while still allowing older viewers to see it. To view such content, an individual has to sign in to their YouTube account to prove they are 18 or older. To help discern which content may be harmful, YouTube uses an automated system that aims to detect content that may violate the platform’s policies. Similarly, Instagram imposes prohibitions across several sections of its guidelines: child sexual exploitation and nudity; bullying and harassment; and human exploitation.

Turning to TikTok, although one might argue that the platform’s protection measures lead to the same practical results as those of the other outlets, TikTok plays a unique role in protecting minors. This is, of course, due to the constant criticism that TikTok faces on this subject. To provide yet another fatal example: Italy is currently and actively trying to regulate and restrict TikTok following the tragic death of a ten-year-old Italian girl, allegedly caused by an accident in the wake of a viral TikTok challenge. For these reasons, TikTok pays particular attention to the protection of its underage users.

But let us take a step back and look more generally at the content restrictions aimed at protecting minors on these platforms. Naturally, one of the most pressing issues, and the most specific limitation, is that no content is allowed that in any way sexually exploits or endangers children. If such content is detected on any of these websites and is deemed potentially criminal (e.g. child pornography), it is immediately deleted. The case is then referred to the National Center for Missing and Exploited Children (NCMEC). What might be interesting to note here is that content in violation of these policies does not necessarily have to be uploaded with malicious intent. For example, Facebook/Instagram explicitly urges parents not to upload content showing their nude children, for instance in the bathtub, as these pictures might be reused or misappropriated by third parties.

Understandably, the policies do not only encompass nudity or sexual content involving minors, but any content in which a minor is shown in a sexual context. Such situations would, for example, include photos of children in the presence of aroused adults, children in sexualised costumes, or children in a sexual fetish context. Another recent and concerning trend on YouTube, for example, consists of seemingly harmless videos that are edited in a certain way to pander to various fetishes (we will not refer to specific accounts or content to avoid giving such material a stage). It is difficult for content moderators to deal with such content, as some might argue that the videos themselves do not depict anything in violation of minor-protection policies. Future developments in content moderation will show how such issues, which will probably only increase, are addressed.

A problem relating to the sexualization of minors that has proven to be a challenging issue for TikTok, in particular, is “grooming.” In its community guidelines, TikTok describes grooming as a situation in which “an adult builds an emotional relationship with a minor to gain the minor’s trust for the purposes of future or ongoing sexual contact, sexual abuse, trafficking, or other exploitation.” To counteract this alarming problem, TikTok updated its guidelines this year and restricted underage users’ access to specific functions such as direct messaging or hosting a live stream. In addition to these measures, TikTok bans every user who has already been convicted of crimes against children.

Further, content that is generally restricted on all social media outlets relates, for example, to online bullying and harassment. Precisely for this purpose, Instagram has introduced special protection measures for users between the ages of 13 and 18. In addition, social media outlets revolving around video content, such as TikTok or YouTube, also ban any content that depicts dangerous acts by minors or that aims at encouraging children to perform such acts, for instance through challenges or dares.

The importance of protecting minors has also generated attention at the EU level, where the processing of a child’s personal data requires parental consent below a certain age. While the GDPR sets the general age of digital consent at 16, it also allows Member States to set their own age of digital consent anywhere between 13 and 16 years old. To that effect, children below the applicable age of digital consent who wish to join a social media platform require parental consent to do so. This does not, however, preclude social media platforms from restricting the creation of an account for younger children altogether. Platforms such as YouTube, for example, require all users to be at least 13 years old, since no data collection may occur from a younger user.

One may therefore wonder whether there is any possibility for a 10-year-old to have a YouTube account. If there is parental consent, then yes, a 10-year-old can create a YouTube account. However, users under the age of 13 should only use YouTube Kids, which is not only an easy platform to navigate but also a place where content is closely moderated for the viewers’ safety.

Beyond these legal protections, several initiatives and projects have been set up by the European Commission in the last decades in order to provide children with the necessary tools to use the Internet safely and responsibly. The European Strategy for a Better Internet for Children, which is coordinated by INHOPE and Insafe, is one such example. The main tasks of this strategy can be categorised into four pillars:

  • To increase the amount of existing child-friendly content on the Internet
  • To raise awareness of digital literacy and online safety
  • To create a safe and protected environment online for children
  • To combat the sexual exploitation of children and other abusive content involving children.

On the one hand, INHOPE was funded by the Commission and aims at supporting “the network of hotlines in combating online Child Sexual Abuse Material (CSAM)”, thereby placing an emphasis on the aforementioned fourth pillar. Since its creation in 1999, INHOPE has developed a strong global network and to this day operates 47 hotlines.

On the other hand, Insafe aims at promoting global awareness of safe internet use for all and wishes to empower users, such as children, with the necessary knowledge and skills to “stay safe online.”

As a response to minors’ increased use of the Internet, these organisations have also launched additional initiatives and projects. Every year, they celebrate Safer Internet Day (SID), which aims at promoting “a safer and more responsible use of online technology by children and young people across the world.”

To conclude, while both social media and video-sharing platforms have restrictions and policies in place to protect children, the extent of that protection can be contested. Various content on platforms such as YouTube and Instagram involving minors can, in certain settings, be perceived as sexual. One controversial example of the trend mentioned above shows a mother with her legs spread, peeling a cucumber while sitting next to her toddler. One may find this video cute, as the mother is spending time with her child; however, one might also perceive the content as sexual or sensual. This is only one of countless examples found on social media platforms, which begs the question: have social media platforms done enough to effectively protect children from harmful content?

 Written by Luca Teres Loytved, Florian Bachmann & Marie Cochet 

More blogs on Law Blogs Maastricht