Big Brother on the Internet? Watchdogs and Social Media

The Department for Digital, Culture, Media and Sport has published a white paper advocating an independent internet watchdog to keep the giants of social media in check and to oversee content across the wider internet. Companies such as Facebook, Twitter and YouTube have agreed that more must be done to protect users, limit the impact of harmful content, and ultimately remove that content completely.

But this raises an issue in itself: who decides the definition of harmful? Facebook’s system at the moment involves paying contracted companies to employ workers who manually review reported content. A lengthy report from The Verge detailed the awful impact this has on those individuals. Many develop PTSD and become unable to function normally in or out of work. Imagine if, day after day, your time was spent viewing material ranging from edgy jokes that had caused a few people offence to images and videos of brutal murders, terrorist decapitations and child abuse. According to the report, this is the reality for 15,000 content reviewers around the world. The goal of a regulatory watchdog is a noble one, yet the stark reality of how this current method of content control works is another matter entirely.

There seem to be two possible options.

First, thousands more workers could be employed to sift through these sites according to strict legislation. Yet, with the rate of content growth forever rising, it seems like fighting while retreating. If Facebook and the like had an easy fix for their content issues, they would most likely have employed it by now. It is simply not realistic to expect people to keep up with the internet even now, and who’s to say what it will look like in another decade if nothing radical is done.

The other problem with this option is that it is left up to these employees to judge whether content falls into the illegal categories, and up to them what action to take. The government report itself acknowledges that there are ‘harms with a less clear definition’, such as disinformation, violent content, extremist content and even trolling. The individual bias of humans cannot be totally removed, and if judges and lawmakers can’t always agree on how exactly to define these categories, then how can we expect the employees, who currently get only four weeks of training, to make those decisions? All an independent watchdog based around human labour would do is add another level of scrutiny to the process without solving the core problem.

This seems to leave only the second option: a complex algorithm could be used to search through content and remove anything found to be problematic under the guidelines given. If a big enough database of content were supplied to a machine learning algorithm, it would be possible for such an algorithm to independently scrub the internet of unlawful content. At the moment, however, existing algorithms are simply not up to scratch. Earlier this year, it was revealed that paedophile rings were exchanging links to child pornography through YouTube. The algorithm had some kind of loophole that prevented the illicit material from being flagged, but more than that, it was actively helping individuals to find such content more easily.
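To make the idea concrete, here is a minimal sketch of the kind of machine-learning classifier such a system might be built around, using scikit-learn. The training examples, labels and confidence threshold are entirely hypothetical, and a real moderation pipeline would also have to handle images, video, many languages and deliberately evasive users, none of which a toy text model like this addresses.

```python
# A toy content classifier: it learns from labelled examples, then flags
# new posts whose predicted probability of being harmful is too high.
# The example texts, labels and threshold below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled dataset (1 = harmful, 0 = acceptable).
train_texts = [
    "friendly discussion about football results",
    "recipe for a chocolate cake",
    "graphic threat of violence against a named person",
    "instructions for carrying out an attack",
]
train_labels = [0, 0, 1, 1]

# Bag-of-words features feeding a simple logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def review(post: str, threshold: float = 0.8) -> str:
    """Flag a post for removal if the model is confident it is harmful."""
    p_harmful = model.predict_proba([post])[0][1]
    return "remove" if p_harmful >= threshold else "keep"

print(review("lovely cake recipe, thanks for sharing"))
```

Even in this stripped-down form, the hard questions reappear immediately: someone still has to decide what counts as a "harmful" training example, and where to set the threshold between over-blocking and under-blocking.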

By learning the watch patterns of users viewing the illegal material, and recognising that those viewers would click through a similar series of videos, the YouTube algorithm actually ended up recommending further illegal content to them, facilitating the spread of links to child abuse across the site. Less extreme but nonetheless problematic is how easy it is to fall down the rabbit hole of disinformation and conspiracy theories, as the algorithm doesn’t care what you watch, only that you keep watching; whatever has been calculated to be the most effective way to keep you on the site is spat out at you, largely disregarding the quality, content or truth of the videos.
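The mechanism is easier to see in miniature. Below is a hypothetical, stripped-down “users who watched this also watched” recommender built purely from co-viewing counts (the video names and viewing histories are invented for illustration). Nothing in it ever examines what the videos actually contain, which is precisely why it amplifies whatever its viewers happen to cluster around.

```python
# A toy "watched together" recommender: it only counts which videos
# appear in the same users' histories, never what the videos contain.
# The viewing histories below are hypothetical.
from collections import Counter
from itertools import combinations

watch_histories = [
    ["cat_clips", "cooking_tips"],
    ["conspiracy_intro", "conspiracy_deep_dive", "cat_clips"],
    ["conspiracy_intro", "conspiracy_deep_dive", "flat_earth_101"],
    ["conspiracy_intro", "flat_earth_101"],
]

# Count how often each pair of videos is watched by the same user.
co_watch = Counter()
for history in watch_histories:
    for a, b in combinations(set(history), 2):
        co_watch[(a, b)] += 1
        co_watch[(b, a)] += 1

def recommend(video: str, n: int = 2) -> list[str]:
    """Return the n videos most often co-watched with `video`."""
    scores = {b: count for (a, b), count in co_watch.items() if a == video}
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Watching one conspiracy video pulls in more of the same.
print(recommend("conspiracy_intro"))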

Perhaps even more challenging than the logistics of responding to harmful online content, though, is the most basic question of freedom. Critics say that the plans pave the way to regulating speech and ideas that might not be illegal but could very easily be judged as “harmful”. According to the freedom of speech campaign group Article 19, implementing the government’s plan would “inevitably require them to proactively monitor their networks and take a restrictive approach to content removal”. If sites like Facebook adopted a better-safe-than-sorry attitude in order to avoid the risk of repercussions, it would “create an environment that encourages the censorship of legitimate expression”.

It is true that the internet, and social media sites in particular, are in need of some serious cleaning. But the questions of who decides what should be swept away, and what balance should be struck between freedom and preventing harm, are far harder to settle.

Michael Keating