Twitter tackles user abuse


On the 15th of November, Twitter announced new measures to tackle abusive behaviour and harassment on its site. The update allows users to block tweets in their mentions containing specific words they have chosen to ‘mute’. The company has also introduced a new reporting category to make it easier for users to report offensive comments, and has re-trained hundreds of its moderation staff to better understand what to look for when analysing reported content.

The microblogging site has faced criticism in recent years over its poor handling of online abuse. One stand-out case is that of Ghostbusters actress and comedian Leslie Jones. After the release of the Ghostbusters remake, she suffered a torrent of racist and misogynistic abuse from trolls, led by Breitbart editor Milo Yiannopoulos, who has since been banned from the site. Jones complained that Twitter’s moderators did little to block or punish the people who harassed her.

Last year, Twitter formally banned hateful conduct on its site, but this resulted in little improvement. Its staff believe the new feature will allow users to monitor more effectively what they see on their own accounts and timelines. This ‘self-moderation’ is an interesting move by a company that has been cautious of appearing to limit freedom of expression. By not enforcing a blanket ban on words or phrases that may be considered hate speech, Twitter allows users to protect themselves from offensive material without limiting the expression of others.

Many users and journalists, however, disagree with this idea, claiming that the act of muting a particular word is, in itself, a form of censorship. They believe that in order to have serious debate about hate speech and its effect on society, we need to expose ourselves to it and discuss it freely.

But is Twitter the right platform on which to encourage debate? With its 140-character limit, the microblogging site arguably prevents users from fully expressing what they mean. Moreover, muting a particular word on one account does not forbid another user from tweeting it, and so cannot reasonably be considered censorship.

So is this all too little, too late for Twitter? With over 317 million users, moderation of every tweet is impossible. The bad press surrounding cases of abuse on the site has also proved a deterrent, with many celebrities deactivating their accounts. The lack of hands-on moderation has fostered an ‘anything goes’ culture, in which people feel safe attacking other users from behind a veil of anonymity. Twitter hopes to change this, stating that the new measures will create a “culture of collective support” on the site.

I spoke to student and Twitter user Chris about the matter:

Chris: I honestly think that Twitter is dying out, so I think they should have implemented something like this ages ago. It doesn’t make sense to do it now when fewer people are using it. It is actually a really good idea, but you have been able to block and mute accounts since the beginning, so this feature should have come with that as well.