Social Media and Mental Health: Is Intervention Viable?

It’s widely accepted that social media can harm the mental health of its users, particularly children and teenagers. In a recent development in tackling this serious issue, the Royal College of Psychiatrists has called on social media platforms to intervene more directly: platforms would contact the parents of children who have been accessing material related to self-harm, suicide, or eating disorders. The request raises questions about its potential efficacy and about where responsibility lies for safeguarding children and teenagers online.

Contacting a vulnerable user’s family is a practice that all social media platforms should adopt. As companies delivering a service, platforms should make the wellbeing and safeguarding of their users a priority. In many cases, intervention may be the last option for ensuring a user’s safety. Parents who have lost children to social media-related mental health issues commonly report being unaware of the existence or extent of their struggles. Intervening and involving the family of an affected individual could bridge the gap between the online world and the real one. For many, the reminder of a loving and supportive world beyond their phones and online profiles could be lifesaving.

However, intervention isn’t the sole solution to the issue of social media harm. It wouldn’t reduce the number of users whose mental health suffers from their social media use, and users may be wary of providing family contact details. The focus should shift to the sources of these issues, so that intervention never becomes a necessity.

The sheer prevalence of material related to self-harm, suicide, and eating disorders makes it difficult to tackle. Searching Twitter using certain keywords relating to eating disorders revealed myriad accounts glorifying unhealthy weight goals, some with upwards of one thousand followers.

Could targeting the content that leads to mental health concerns be effective? There is precedent: Instagram has banned misleading and unhealthy weight-loss adverts promoted by influencers such as Katie Price. It can be argued that such posts act as gateways to more harmful content, so censorship of influencer promotions could help disrupt the glamorisation of unrealistic standards and unhealthy lifestyle choices.

However, the notion that such content should be avoided altogether could have the reverse of the intended effect. Exposure to potentially harmful content might be a necessary evil, since it can serve as a tool for education. Normalising discussion of this content allows for education, exposure, and debate. If users are able to make their own informed decisions about the content they consume, they may be less likely to venture down the slippery slope towards harmful material.

Knowing that certain content is prohibited might entice younger users to actively seek it out, and when they find it they are in a vulnerable and impressionable position. Moreover, they would be unlikely to talk to friends and family about what they see if it is banned or not normalised. The user becomes isolated as their interest in the content deepens, until they are inevitably exposed to something harmful. Content of this nature should be regulated so that it is not misleading, but it should remain available alongside more open discussion of its dangers and realities.

Regardless of which methods prove effective, the responsibility for maintaining a safe online environment falls on everyone. Social media platforms should moderate and regulate the content they host. Governments should legislate for policies aimed at educating young people about mental health and online material. Individuals should report harmful content and reach out to friends and loved ones who may be struggling.

In the UK and Ireland, Samaritans can be contacted on 116 123 or by email at jo@samaritans.org or jo@samaritans.ie.

Rayan Striebel

Image: Public Domain Pictures.