
Could Eric Schmidt’s ‘Spell-Check for Hate’ Ever Really Work?

A top-down engineering solution can’t defeat internet bile; fighting hate requires systematic behavioral change.

Image via Flickr user lvar

A month ago, Eric Schmidt, the executive chairman of Alphabet (the recently created holding company behind Google*), authored a New York Times op-ed in which he floated the idea of creating “spell-checkers, but for hate and harassment.” In Schmidt’s view, a theoretical tool like this, based on automatic algorithms, would allow us to easily and efficiently scrub the internet of hateful and harmful language. But could something like a “spell-check for hate” really ever work?


Given the world’s hazy, diverse definitions of hate speech, many have criticized Schmidt, claiming that he’s basically proposing an arbitrary and fraught censorship regime—one that could easily be misused by authoritarians to narrow and control the internet (something Schmidt has worried over himself in previous writings). Yet even Schmidt’s critics can understand where he’s coming from. The internet is full of bile, which, even in good times, can restrict people’s speech and sense of freedom online; in bad times, the internet’s capacity for hate and abuse can amplify chronic fear and violence. To those desperate to stem this tide, the idea of a spell-checker for animus might sound attractive. They might argue that you could calibrate it in such a way as to avoid misuse and achieve optimal freedom (despite having no real clue what such a tool would actually look like or how it would operate).

Yet even setting aside the censorship issue, such a spell-checker would never work. Like all such tools, it’d be easily thwarted most of the time; the rest of the time it would just mask persistent hate. Rather than impose top-down censorship in an alchemical bid to transform the world with fancy (but surface-level) tools, we ought to spend more time strengthening the internet’s already fairly developed anti-hate safeguards and altering the cultures that perpetuate hatred—tasks that other techies have already started to embrace.

The notion that we already have some tools for checking hate online might surprise some. After all, the internet seems like a total free-for-all in which people send around terrifying memes and videos and say whatever they damn well please. But just about every site out there (including Google) offers some system for flagging offensive or illegal content and actually does make an effort to remove it. Many people never engage with these sometimes invisible flag-and-takedown systems, and many would argue that reactive monitoring isn’t enough. By and large, the tech world sees and agrees with that point. Many companies use proactive scrubbing services, although they’re fairly secretive about them. Maybe that’s because these services employ about 100,000 people, mostly poorly paid and in the developing world, who sit around all day reading the content on sites to find and rapidly eliminate nasty material—some even contracting (and receiving little treatment for) stress disorders because of all the foul, perverse, disdainful shit they see and protect the rest of us from. So as bad as the internet seems right now, efforts are already being made to shield us from the worst of its contents. And we have the power to flag the remainder and make a case against it, leaving the judgment calls to actual humans.
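
For readers who’ve never looked behind the flag button, here is a minimal sketch of the flag-and-takedown loop described above (all names and thresholds are hypothetical; real pipelines add prioritization, appeals, and audit trails):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    flags: int = 0

# Hypothetical flag-and-takedown loop: users flag, humans decide.
review_queue = deque()
FLAG_THRESHOLD = 3  # invented: enough reports to warrant human review

def flag(post):
    """A user reports a post; enough reports queue it for a moderator."""
    post.flags += 1
    if post.flags == FLAG_THRESHOLD:
        review_queue.append(post)

def moderate(is_objectionable):
    """A human reviewer works through the queue and rules on each post."""
    while review_queue:
        post = review_queue.popleft()
        verdict = "remove" if is_objectionable(post) else "keep"
        print(f"post {post.post_id}: {verdict}")
```

The point of the structure is the last step: the removal decision is a human judgment call, not a pattern match.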

Perils of spell-check. Image via Flickr user Bryan Mason

Augmenting or replacing these services with an algorithm may sound like an attractive option. It’d be faster, for one thing, in responding to flags as well as proactively monitoring content. And it’d save thousands of people the psychological pain of sifting through the worst of humanity’s nastiness. But anyone who’s used even the most advanced spell-check and autocorrect services knows how inefficient and blunt they can be—how subtle deviations from a mechanical and predictable norm can throw them out of whack. Transfer that trend onto amorphous, slang-ridden speech and you can easily predict the potential for either overzealous or anemic protections.
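
To see just how blunt such a filter is in practice, consider a minimal sketch of the naive keyword matching any “spell-check for hate” would have to improve on (the blocklist and test phrases are invented for illustration):

```python
import re

# A hypothetical blocklist; any real system's would be vastly larger,
# but the failure modes below don't shrink with scale.
BLOCKLIST = {"hate", "scum"}

def is_flagged(text):
    """Flag text if any blocklisted word appears as a whole word."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLOCKLIST for word in words)

# Overzealous: an innocuous use of a listed word gets caught.
print(is_flagged("I hate waiting for the bus"))  # True (false positive)

# Anemic: trivial obfuscation sails straight through.
print(is_flagged("you're all sc*um"))            # False (false negative)
print(is_flagged("h8 finds a way"))              # False (false negative)
```

Real systems layer on normalization, context models, and machine learning, but the same two failure modes tend to resurface on subtler inputs.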

Some might argue that we can build a better system—that algorithms (especially learning-enabled ones) are getting close to understanding and parsing organic human communication. That may be so, but Schmidt’s idea remains vague speculation, with no specifics as to how we’d carry it out or at which end it would operate—with service providers, browsers, etc. And even if we take the most techno-optimistic view of such a system’s abilities, we shouldn’t underestimate humanity’s ability to circumvent censorship tools. Consider China, which has perhaps 2 million people actively working to censor the web, employing their own spell-checker systems as well as a host of more human, subtle, and self-imposed tactics to clamp down on undesirable speech. Every time the censors tighten controls or outlaw a word, those wishing to circumvent the rules just write a new evasion program or imbue a new word with the same meaning and force as the old, direct terminology. It’s a futile game of cat and mouse.
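
As a toy illustration of that dynamic (the coded terms are real examples of Chinese internet slang, such as “grass-mud horse” and “river crab,” but the filter and its one-round-late update loop are invented), a blocklist-based censor is always chasing yesterday’s euphemism:

```python
# Toy model of the censorship cat-and-mouse described above.
blocklist = set()

def filter_catches(message):
    """A spell-checker-for-hate in miniature: match known coded terms."""
    return any(term in message for term in blocklist)

euphemisms = ["grass-mud horse", "river crab", "harmonized"]
for round_num, term in enumerate(euphemisms, start=1):
    message = f"they got {term} again last night"
    caught = filter_catches(message)  # always False: the term is brand new
    blocklist.add(term)               # the censor learns it a round too late
    print(f"round {round_num}: caught={caught}")
```

However fast the blocklist updates, the meaning has already moved to a word the filter has never seen.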

You might think that kind of ingenuity is a feat of free speech fighting for its space to breathe. But evasion and innovation against censorship aren’t always righteous. Bile is a liquid. It seeps. Just like democratic language, hateful language will always find a way through the cracks. Some people will make it their mission to be offensive no matter the restrictions you put on them, just because they can, even if they don’t consider themselves hateful at heart. They do this because they want to push boundaries and exercise radically free speech that includes distasteful discourse. Hate, like Weebles, wobbles but doesn’t fall down—unless you rip the social ground out from under it.

The real answer to hate and bile online isn’t the cosmetic, easily circumvented change that a spell-checker system can bring about. Rather than the top-down engineering fix, you have to take a bottom-up approach, changing people’s behaviors and values. You do that not through individual conversion or censure, but by changing the moral pH of the water in which netizens swim, slowly and systematically altering the norms of communities, teaching the values that engender a distaste for violating those norms, and driving out objectionable behavior, language, and systems.

Eric Schmidt. Image by Sven Manguard via Wikimedia Commons

As I wind up saying all too often, systematic behavioral change is hard to effect. But some sites have shown that they’re willing to make the effort. Take Reddit, which in 2015 started to face the fact that it had long been regarded as a bastion of vile hatred, driving some people away and deterring new users from coming to the site. Throughout the spring and summer, despite massive protests, Reddit succeeded in codifying new rules against harassment and incitement to hate or violence. The site managed to eliminate some of its most vile communities and to cordon off the remainder, making sure that no one would stumble into objectionable (but not fully illegal) language without fair warning and express consent. Seepage occurred—new communities formed, and others migrated to Voat, a Reddit clone with no restrictions on hateful speech. But to Reddit’s 173 million users, the exodus highlighted how silly and small the few hundred thousand hateful users were, signaled the site’s commitment to changing its norms, and showed (when Voat crashed under the influx of traffic) how untenable hate speech becomes when pushed to the edges of the internet.

Beyond the digital space, we’re also seeing deliberate shifts in the way we speak, think, and interact as a society. (Just look at the codification and mainstreaming of concepts like “fat-shaming” and “microaggressions” in the last year alone.) While it often seems like people say things online regardless of lived social consequences and prevailing attitudes (often thanks to anonymity), it’s possible that these real-life social changes will slowly erode the space for hate speech to land online. This, combined with existing flagging and preemptive scrubbing systems and the growing commitment of internet giants to combat hate speech, bodes well for the slow but sustainable evolution of a safer digital space.

Admittedly, for those most hurt or aggrieved by online hate, slow social evolution doesn’t really cut it. But if we want changes to be more than cosmetic, they will take time. Still, as we wait for those changes to seep in, we can incentivize websites to pursue Reddit-like transformations or to strengthen their existing scrubbing programs. And we are: After accusing Facebook of dragging its feet on hate speech reports this summer, Germany secured agreements last month with all the major search and social media companies to evaluate and delete hate speech within 24 hours of detection. Given the discretion this involves, it really can’t be done with spell-checkers; instead, all of the companies involved have demonstrated their willingness and ability to hire and train special review teams—a more robust solution.

Fortunately (given all the flaws visible in the concept), we’re unlikely to actually see a Schmidt-style spell-checker system anytime soon. Many suspect that the op-ed was just Schmidt’s personal view, not a Google/Alphabet project in waiting. Many more suspect that he might not actually be invested in such a system personally, but instead used the op-ed as a way to signal his and his industry’s willingness to play ball with politicians and activists concerned about terrorism and hate spreading online, in the wake of countless incidents of both in 2015. As in the case of Reddit’s willingness to transform itself, that signal (if it is such) is worth something. More precisely, it’s worth a lot more than the proposed spell-checker system itself. All sorts of shortcomings can bedevil a system like that. But a strong signal of intention is an important step on the path toward the systematic social change—online and in real life—that can actually help to cut hate from our discourse.

*Which, come on, is still Google.
