A month ago Eric Schmidt, the executive chairman of Alphabet (the recently created holding company behind Google*), authored a New York Times op-ed in which he floated the idea of creating “spell-checkers, but for hate and harassment.” In Schmidt’s view, a theoretical tool like this, based on automatic algorithms, would allow us to easily and efficiently scrub the internet of hateful and harmful language. But could something like a “spell-check for hate” really ever work?


Given the world’s hazy, diverse definitions of hate speech, many have criticized Schmidt, claiming that he’s basically proposing an arbitrary and fraught censorship regime—one that could easily be misused by authoritarians to narrow and control the internet (something Schmidt has worried over himself in previous writings). Yet even Schmidt’s critics can understand where he’s coming from. The internet is full of bile, which, even in good times, can restrict people’s speech and sense of freedom online; in bad times, the internet’s capacity for hate and abuse can amplify chronic fear and violence. To those desperate to stem this tide, the idea of a spell-checker for animus might sound attractive. They might argue that you could calibrate it in such a way as to avoid misuse and achieve optimal freedom (despite having no real clue what such a tool would actually look like or how it would operate).

Yet even setting aside the censorship issue, such a spell-checker would never work. Like all such tools, it'd be easily thwarted most of the time; the rest of the time it would merely mask persistent hate. Rather than impose top-down censorship in an alchemical bid to transform the world with fancy (but surface-level) tools, we ought to spend more time strengthening the internet's already fairly developed anti-hate safeguards and altering the cultures that perpetuate hatred—tasks that other techies have already started to embrace.

The notion that we already have some tools for checking hate online might surprise some. After all, the internet seems like a total free-for-all in which people send around terrifying memes and videos and say whatever they damn well please. But just about every site out there (including Google) offers some system for flagging offensive or illegal content and actually makes an effort to remove it. Many people don't engage with these sometimes invisible flag-and-takedown systems, and many would argue that reactive monitoring isn't enough. By and large, the tech world actually sees and agrees with that point. Many companies use proactive scrubbing services, although they're fairly secretive about them. Maybe that's because these services employ about 100,000 people, mostly poorly paid and in the developing world, who sit around all day reading the content on sites to find and rapidly eliminate nasty material—some even contracting (and receiving little treatment for) stress disorders because of all the foul, perverse, disdainful shit they see and protect the rest of us from. So as bad as the internet seems right now, efforts are already being made to protect us from the worst of its true contents. And we have the power to send clear signals about, and make cases against, the remainder, which can be evaluated carefully by actual humans.

Augmenting or replacing these services with an algorithm may sound like an attractive option. It’d be faster, for one thing, in responding to flags as well as proactively monitoring content. And it’d save thousands of people the psychological pain of sifting through the worst of humanity’s nastiness. But anyone who’s used even the most advanced spell-check and autocorrect services knows how inefficient and blunt they can be—how subtle deviations from a mechanical and predictable norm can throw them out of whack. Transfer that trend onto amorphous, slang-ridden speech and you can easily predict the potential for either overzealous or anemic protections.
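To make that bluntness concrete, here's a minimal sketch of the kind of exact-match blocklist such a filter ultimately boils down to. The terms and messages are made-up placeholders, not any real system's list; the point is only to show the two failure modes at once.

```python
# A naive, keyword-based "hate filter" sketch -- the blunt mechanism a
# spell-checker-style tool amounts to. BLOCKLIST is an illustrative
# placeholder, not any real moderation list.

BLOCKLIST = {"vermin", "subhuman"}

def flag(message: str) -> bool:
    """Flag a message if any token exactly matches the blocklist."""
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)

# Exact matches are caught...
print(flag("they are vermin"))         # True
# ...but a trivial misspelling slips through (under-blocking)...
print(flag("they are v3rmin"))         # False
# ...while a perfectly innocent use is caught anyway (over-blocking).
print(flag("rats are garden vermin"))  # True
```

The last two lines are the whole argument in miniature: slang and light obfuscation defeat the filter, while context-blind matching punishes benign speech.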

Some might argue that we can build a better system—that (especially learning-enabled) algorithms are getting close to understanding and parsing organic human communication. That may be so, but Schmidt's idea remains vague speculation, with no specifics as to how we'd carry it out or at which end it would operate—with service providers, browsers, etc. And even if we take the most techno-optimistic view of such a system's abilities, we shouldn't underestimate humanity's ability to circumvent censorship tools. Consider China, which has perhaps 2 million people actively working to censor the web, employing their own spell-checker-style systems as well as a host of more human, subtle, and self-imposed tactics to clamp down on undesirable speech. Every time the censors tighten controls or outlaw a word, those wishing to circumvent regulations simply create a new evasion program or imbue a new word with the same meaning and force as the old, direct terminology. It's a futile game of cat and mouse.
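The cat-and-mouse dynamic can be sketched in a few lines. Everything here is a made-up placeholder: each time the censor bans the current term, the community coins a successor, so the blocklist grows without ever converging.

```python
# Illustrative sketch of the censorship cat-and-mouse described above.
# All terms are hypothetical placeholders, not real slang.
import itertools

blocklist = set()

def coin_replacement(term: str, generation: int) -> str:
    """Speakers coin a fresh variant the moment the old one is banned."""
    return f"{term}_{generation}"

term = "oldword"
for generation in itertools.count(1):
    blocklist.add(term)                        # censor outlaws the current term
    term = coin_replacement(term, generation)  # community routes around it
    if generation == 5:                        # stop the demo; in reality there is no end
        break

print(len(blocklist))       # 5 terms banned so far...
print(term in blocklist)    # False -- a sixth is already in circulation
```

However many generations you run, the invariant holds: the newest coinage is never on the list, because the list can only ever describe the language of the previous round.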

You might think that kind of ingenuity is a feat of free speech fighting for its space to breathe. But evasion and innovation against censorship aren’t always righteous. Bile is a liquid. It seeps. Just like democratic language, hateful language will always find a way through the cracks. Some people will make it their mission to be offensive no matter the restrictions you put on them, just because they can, even if they don’t consider themselves hateful at heart. They do this because they want to push boundaries and exercise radically free speech that includes distasteful discourse. Hate, like Weebles, wobbles but doesn’t fall down—unless you rip the social ground out from under it.

The real answer to hate and bile online isn’t the cosmetic, easily circumvented change that a spell-checker system can bring about. Rather than the top-down engineering fix, you have to take a bottom-up approach, changing people’s behaviors and values. You do that not through individual conversion or censure, but by changing the moral pH of the water in which netizens swim, slowly and systematically altering the norms of communities, teaching the values that engender a distaste for violating those norms, and driving out objectionable behavior, language, and systems.

As I wind up saying all too often, systematic behavioral change is hard to effect. But some sites have shown that they're willing to make the effort. Take Reddit, which in 2015 started to face the fact that it had long been regarded as a bastion of vile hatred, driving some people away and deterring new users from coming to the site. Throughout the spring and summer, despite massive protests, Reddit succeeded in codifying new rules against harassment and incitement to hate or violence. It managed to eliminate some of its most vile communities and to cordon off the remainder, making sure that no one would stumble into objectionable (but not fully illegal) language without fair warning and express consent to see it. Seepage occurred—new communities were formed, and others migrated to Voat, a Reddit clone with no restrictions on hateful speech. But to Reddit's 173 million users, the exodus highlighted how silly and small the few hundred thousand hateful users were, signaled the site's commitment to changing its norms, and showed (when Voat crashed because of the influx of traffic) how untenable hate speech becomes when pushed to the edges of the internet.

Beyond the digital space, we're also seeing deliberate shifts in the way we speak, think, and interact as a society. (Just look at the codification and mainstreaming of concepts like "fat-shaming" and "microaggressions" in the last year alone.) While it often seems like people say things online regardless of lived social consequences and prevailing attitudes (often thanks to anonymity), it's possible that these real-life social changes will slowly erode the space for hate speech to land online. This, combined with existing flagging and preemptive scrubbing systems and the growing commitment of internet giants to combat hate speech, bodes well for the slow but sustainable evolution of a safer digital space.

Admittedly, for those most hurt or aggrieved by online hate, slow social evolution doesn't really cut it. But if we want changes to be more than cosmetic, they will take time. Still, as we wait for those changes to seep in, we can incentivize websites to pursue Reddit-like transformations or to strengthen their existing scrubbing programs. And we are: After accusing Facebook of dragging its feet in addressing hate speech reports this summer, Germany managed to secure agreements last month with all the major search and social media companies to evaluate and delete hate speech within 24 hours of detection. Given the discretion this involves, the work really can't be done with spell-checkers. And because it is necessary, all of the companies involved have demonstrated their willingness and ability to hire and train special review teams—a more robust solution.

Fortunately (given all the flaws visible in the concept), we're unlikely to actually see a Schmidt-style spell-checker system anytime soon. Many suspect that the op-ed was just Schmidt's personal view, not a Google/Alphabet project in waiting. Many more suspect that he might not actually be invested in such a system personally but instead used the op-ed to signal his and his industry's willingness to play ball with politicians and activists concerned about terrorism and hate spreading online in the wake of countless incidents of both in 2015. As in the case of Reddit's willingness to transform itself, that signal (if it is one) is worth something. More precisely, it's worth a lot more than the proposed spell-checker system itself. All sorts of shortcomings can bedevil a system like that. But a strong signal of intention is an important step on the path toward the systematic social change—online and in real life—that can actually help to cut hate from our discourse.

*Which, come on, is still Google.
