Twitter is testing a feature designed to crack down on trolling and abuse. The social media platform’s new Safety Mode will flag accounts which post hateful remarks, or which pepper other accounts with uninvited replies.
Although one person’s banter is another person’s abuse, Twitter has long been something of a minefield, and there have been instances of blatant hate speech. Trolling (the act of leaving insulting messages on the internet in order to annoy someone) has been a consistent problem – and now those who indulge in it will find their account blocked for a period of seven days.
Once a user enables Safety Mode, it works automatically. The feature can be accessed through Settings, and when in use it assesses a Tweet’s content and the relationship between the Tweet’s author and the account replying to it. Accounts which the user follows, or often interacts with, won’t be subject to the autoblock feature.
Katy Minshall, Head of Twitter UK Public Policy, told the BBC: “While we have made strides in giving people greater control over their safety experience, there is always more to be done. Safety Mode allows you to automatically reduce disruptive interactions, which in turn improves the health of the public conversation.”
Though the Twitter feature is welcome, and will go some way towards addressing the recurring problems of abuse, trolling and bullying, another part of the solution is becoming mindful enough to moderate our own online behaviour. Just as you would read over an email to your boss to check for possible unintended connotations or offence, it’s always worth doing the same before Tweeting – or, indeed, posting on any form of social media or website.
Twitter’s Safety Mode looks like it will be a valuable feature for ensuring the online safety of participants, but any algorithm dealing with human nuance is likely to throw up some interesting accidental issues.
The history of the internet is littered with examples of automatic ‘policing’ of behaviour causing ‘false positives’. A fairly extreme example from the early days of online life became known as ‘The Scunthorpe Problem’ – a term which is now used to cover all instances of the internet ‘misunderstanding’ language.
The small northern UK town was regularly excluded from websites, e-mails, forum posts and search engine results by spam filters because its name contains a string of letters which appears to have an obscene meaning. Other examples include Penistone in South Yorkshire, Lightwater in Surrey, Clitheroe in Lancashire and Shitterton in Dorset.
The problem for these places (and for other words which contain apparently offensive letter-strings) continues to this day, because the effectiveness of an obscenity filter depends on its ability to understand a word in context. It will be interesting to see how Twitter’s Safety Mode copes, going forward…
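To illustrate why the problem persists, here is a minimal sketch (in Python, using a hypothetical one-entry blocklist) of how a naive substring filter produces a Scunthorpe-style false positive, and how even a simple word-boundary check avoids that particular trap – while still having no real understanding of context:

```python
import re

# Hypothetical blocklist entry; real filters use much larger lists.
BLOCKED = ["penis"]

def naive_filter(text: str) -> bool:
    """Flags text if a blocked string appears anywhere, even inside another word."""
    lower = text.lower()
    return any(bad in lower for bad in BLOCKED)

def word_boundary_filter(text: str) -> bool:
    """Only flags a blocked string when it stands alone as a whole word."""
    lower = text.lower()
    return any(re.search(rf"\b{re.escape(bad)}\b", lower) for bad in BLOCKED)

print(naive_filter("A day trip to Penistone"))          # True – false positive
print(word_boundary_filter("A day trip to Penistone"))  # False – place name passes
```

The word-boundary version fixes the place-name case, but it is still purely mechanical: it cannot tell an insult from a quotation or a medical term, which is exactly the kind of nuance any autoblocking system has to grapple with.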