Trolls Will Be Trolls


Abstaining from social media is not the way to confront its viciousness

No doubt, British politics has been vigorous and lively since the referendum on EU membership in 2016. Inevitably, debate on the range of issues that has sprung up since has been plentiful and intense. Today, social media has become one of the most prominent platforms on which that discussion takes place.

But journalist Owen Jones’s decision to quit social media highlights the vicious side that these online environments, rather unfortunately, often harbour, particularly when they host controversial and divisive topics. Many of the comments aimed at Mr Jones referred to his political stances and his recent disapproval of Labour Party leader Jeremy Corbyn. In his final post on Facebook before his departure, he recalled how people described him as “a cunt” and told him to “go fuck [himself].”

The openness and flexibility of the internet mean that those who engage in debate online do so with little accountability or responsibility for what they say. It is very easy to create a profile on Facebook, Twitter, YouTube or any other social media platform that does not depict one’s actual self; users can portray themselves as someone they are not. Some will comically dress up their digital profiles as Batman, for example. Others will simply build a profile of a completely fictitious person. The effect is that whatever they post, however callous or irrelevant, need not be attached to or associated with them as it would be in the real world. In the virtual world, this absence of accountability can encourage the most vitriolic of comments. Not all users resort to this, but the structure of the internet and of social media platforms means that such hateful speech has become increasingly common.

Even so, there are two main reasons why cracking down on these internet ‘trolls’ is not a good idea. The first is that it would be completely impractical. The German government has recently proposed new laws that impose fines on social networking sites which fail to remove hate speech and other undesirable content within 24 hours of it being posted. The idea is to give companies like Facebook and Twitter an incentive to find better ways of removing illegal content, rather than relying heavily on users to report it when they see it. But companies rely on users to do much of the policing precisely because it is very difficult to keep up with the vast amount of content generated every day. According to Statista, a statistics company, Facebook had reached 1.86 billion monthly active users by the end of 2016. The volume of photos, videos, messages and other content circulating on the platform is therefore tremendous. Obliging a vastly smaller group of people working at Facebook to shoulder the burden of monitoring everything that happens daily is quite unreasonable, and betrays a lack of understanding of how the internet really works.

Still, advances in AI may help. Google has been working on a machine-learning tool called Perspective to crack down on hate speech on the internet. Admittedly, in its early stages a large number of hateful comments still slip under its radar, so it will be a while yet before it can operate effectively. Once such technology is perfected, however, it could be a great aid to social media companies in cleaning up the internet’s streets. But a second problem persists.

Trying to crack down on so-called hate speech opens up a can of worms. Determining what exactly is hate speech, or whether a particular comment is intended to be hateful, is more difficult than some may believe. Of course, some statements are easier to assess than others. Personal attacks, for instance, are hardly ever relevant to a debate and are mostly used to distract from poor arguments. However, not every critical statement can be characterised as such. Some opinions may be prickly in nature, but that does not necessarily mean they should be discounted. They can still add value to a debate, and can still be conveyed with genuine belief behind them. Developing a framework or guideline for what constitutes hate speech would thus be a difficult and probably counterproductive process. There will always be a temptation to widen the guidelines to censor a broader range of views and opinions. The unintended consequence of the laws proposed in Germany, therefore, is that they may do more to stifle good debate than to encourage it.

In the end, what Mr Jones did was completely understandable. The personal attacks aimed at him are hardly justifiable. But by abstaining, he has only empowered the trolls that forced him out, for it means their maliciousness is winning. This should not be the case. Instead, they should either be confronted with reasoned responses or ignored completely. Ultimately, healthy debate should be about allowing good ideas, backed by evidence and relevant facts and stripped as far as possible of unneeded emotionalism, to flourish and overpower those that are not. Leaving the arena can never achieve this. Furthermore, social media will be the future of political debate (Mr Trump’s activity on Twitter is evidence of this) whether people like Mr Jones like it or not. So learn to embrace it, or risk becoming irrelevant.
