They might as well cross their fingers and hope people stop being awful
In an age where online harassment has quickly become commonplace, social media platforms have to adapt if they want to foster safe spaces. Take Twitter, for example, where rape and death threats run right alongside cat videos and #NationalIceCreamDay haikus. In recent weeks, Twitter came under fire for deftly weeding out copyright infringements while appearing to do very little about hate speech, death threats, and vicious harassment.
While The Verge noted that the issue is more complicated than using one algorithm to police two distinct problems, users' frustration is understandable. Online harassment seems to be getting worse, and there's no better evidence of that than the racist, sexist onslaught Leslie Jones faced following the release of Ghostbusters.
Even though the level of malice was well documented and publicized, giving Twitter moderators ample time to suspend accounts, tweets as disturbing as these are still live on the site.
"@YaYaAshley @Lesdoggg Yeah hang in there girl! https://t.co/DuztAlUUp0" — N8 (@N8)
If Twitter isn't protecting high-profile celebrities with those coveted blue checkmarks beside their names, it's hard to believe it's doing anything at all for the average user. You could even go as far as to say the company is colluding with the abusers themselves by allowing such behavior. British MP Jess Phillips certainly thinks so and wrote about her harrowing experience on the platform for The Telegraph. After receiving thousands of rape threats, she reported some of the abusers. Twitter responded (as it does in most similar cases) that it had "reviewed the content and determined that it was not in violation of the Twitter rules," leaving Phillips to manually block abusers and sort through rape threats by herself.
The Inside Amy Schumer sketch "New Twitter Button" skewers this problem expertly: in the parody, Twitter releases an "I'm going to rape and kill you" button to help abusers save time, suggesting the platform is not only allowing cruelty but actively supporting the abusers.
Credit: Comedy Central/Inside Amy Schumer
So what does Twitter plan to do about all of this? In a blog post published Thursday, the company said it'd make its anti-harassment tools available to all users, a luxury previously afforded only to verified users—aka celebrities like Leslie Jones. In addition, Twitter announced it plans to expand the reach of its quality filter, which, as the company explained, strives to "improve the quality of Tweets you see by using a variety of signals, such as account origin and behavior."
As you might have expected, Twitter users are already tweeting up a storm about the vague new updates.
"Twitter introducing a 'quality filter'. It has been a pleasure tweeting with you. https://t.co/Ouj1077kmS" — Minovsky (@Minovsky)
"A better Twitter quality filter: don't go on Twitter" — David Burge
"'Quality filter' 'low quality content' Just call it what it is: censorship. #Twitter #QualityFilter" — Ethan Ralph
Let's be honest: these troll-blocking features aren't new or even all that innovative. To add insult to injury, Twitter has yet to explain how you actually go about filtering your feed and notifications, offering only the flimsy promise that there will be "updates in the future."
This is all the more perplexing given that over the past six months Twitter has removed roughly 235,000 accounts promoting terrorism. In a separate blog post published Thursday, Twitter announced it began suspending accounts showing support of violent extremism in 2015, racking up a total of 360,000 suspensions since then. As the company stated in the post, "We strongly condemn these acts and remain committed to eliminating the promotion of violence or terrorism on our platform." If that is the case, then surely it can't be that hard to apply the same strategy to users promoting rape and murder, can it?
It'll be up to users to decide whether they can put up with the abuse and keep supporting a company that is clearly unconcerned with violent misogyny and hate speech. And if not, it might be time to move on to a different platform altogether.