Why National Security Officials and Tech Giants Must Team Up to Combat ISIS Online

The two sides have to overcome their mutual wariness to fight extremism without sacrificing civil rights.

Image reportedly made by Islamic State supporters

On January 8, heavies from the White House and U.S. intelligence and security sectors arrived in San Jose, California, for a meeting with America’s technorati. The midday rendezvous just south of Silicon Valley, at first unconfirmed by officials, was probably meant to be a low-key affair. But with makers and shakers like Tim Cook of Apple and Director of National Intelligence James Clapper set to attend the tête-à-tête, it was almost inevitable that the meet-up made headlines. Relations between government officials and the tech elite have been frosty for years in light of past state surveillance programs and a present push for backdoor access to data-securing encryption—both dubbed ill-informed breaches of privacy in the name of national intelligence by many in the tech world. So just getting these folks, often at existential cross-interests, into one room was momentous. But this wasn’t just some clearing-of-the-air summit. Instead, according to a leaked agenda, its purpose was to examine a pressing, worldwide problem:


“How can we make it harder for terrorists to leveraging [sic] the internet to recruit, radicalize, and mobilize followers to violence?”

The agenda features a few other items, but they’re all largely variations on that counterterror (and specifically anti-Islamic State) theme. To some, that may sound like a belated, futile conversation. After all, both groups have been trying to disrupt the Islamic State’s digital presence for more than a year. Many would argue that even their best efforts haven’t had much effect. Yet as intractable as ISIS (and, to a lesser degree, other terrorists) can seem online, there are ways that the efforts of D.C. and Silicon Valley could better combine against them. These avenues will not be easy to open, though, and they depend on both sides asking the right questions and bringing the right resources to the fight.

Tim Cook, CEO of Apple. Image by Valery Marchive via Flickr.

Some might argue that our resources are better spent physically combating terror. But militants’ digital presence is a matter of legitimate concern—especially in the case of the Islamic State. Compared with the early days of Islamic extremist violence, when grainy footage sat on secluded corners of the internet, ISIS is a slick outfit. Their (hundreds of) propaganda films are skillfully shot and distributed. Their messages are tailored to each market they reach. Their media campaigns, always pushing a cohesive brand of power and fear, are well coordinated with their operations on the ground—almost as soon as they made their famous incursion into Mosul in June 2014, they lit up social media with bullish coverage of their own exploits.

Thousands of ISIS promoters, some more active than others, broadcast their search-engine-optimized and well-crafted message around the world. They shift between communications platforms (e.g., from Twitter to Telegram) and adjust their phrasing as needed to stay in close touch with those who might be receptive to their messaging. In doing so, they’ve arguably achieved an unprecedented level of direct contact with average folks around the world. As their propaganda draws in supporters by the thousands and fighters by the dozens from nations like the United States and France, they threaten our security and deal serious psychological blows. It’s no wonder, then, that we’re so concerned with finding the best ways to boot them offline and use their communications channels against them. Nor is it any wonder that we’re not the first nation to hold high-level meetings with tech companies toward these ends: France apparently held similar talks on digital counteroffensives with local heads of some of the same global tech firms after November’s Paris attacks.

Given the use of these communications platforms for such insidious ends, questions arise over the responsibilities, legal and otherwise, that these tech companies might bear. Some politicians have gone so far as to claim that in pressing circumstances we should simply shut down social media platforms, the corners of the internet terrorists use most heavily. But such calls remain rare (in the U.S. at least) because, conceptually, we recognize digital spaces and products as vital venues for free speech. Legally, we treat them like tools—like, say, a hammer: Generally, we accept that a tool’s creators, manufacturers, or distributors are not culpable for its weaponization by third parties unless they facilitate it. Hence the widely perceived weakness of lawsuits like the one recently brought against Twitter, which holds that the platform, via the terrorist chatter it carries, is partially responsible for the deaths of two Americans killed in Jordan in a November 2015 shooting spree. But even if we accept the premise that these platforms are important, inherently beneficial tools, that still leaves states to stumble about for new ways of contending with these powerful terror amplifiers that won’t violate speech or privacy rights for the wider public.

The U.S. government has been trying to cope with the reality of terrorist speech in a young, malleable digital world since before most of us even caught wise to the Islamic State’s sophisticated social media tactics. In 2013, the State Department launched the English-language “Think Again Turn Away” (TATA) campaign, through which it tried to disseminate anti-radicalization narratives and mock militant Islamist propaganda. It has also launched community engagement programs and crowdsourced anti-propaganda design competitions in attempts to generate and spread anti-radical messaging online and beyond. Unfortunately, these programs have had limited success (at best). Some, like TATA, which some believe just allows terrorists to make their case and mock the government in a new, public venue, are considered outright embarrassments. Some blame insider politicking; others blame a lack of insight. But whatever the cause, the State Department itself seems to recognize that it’s having a hell of a time taking on the deluge of quality Islamic State propaganda, and it doesn’t seem as if any other government agency is doing much better.

Many grumble that tech isn’t doing enough to help the state it so often bristles at in the fight to limit terror online (hence the recent Twitter lawsuit). But social media sites in particular have made efforts, praised by some officials, against groups like the Islamic State. Those efforts have just been quiet ones—subtle changes to flag-and-takedown systems (which already exist on top of pretty robust content-screening programs) and private reports to law enforcement regarding truly egregious or worrying trends. The industry’s silence on these efforts is likely designed to avoid outcry over perceived speech violations and governmental collusion, as well as terrorist threats against its employees.

To wit: Facebook has amended its rules to ban any content praising terrorism, allowing it to more easily respond to and take down pro-Islamic State content. Twitter has banned indirect threats of violence as well as direct threats and improved the speed with which it responds to notifications of content in violation of its policies. And YouTube has expanded its flagger program, expediting responses to some takedown requests—including those likely made by law enforcement and intelligence officials. But while these programs have allowed social media sites to ban accounts and destroy trending terms—key methods by which the Islamic State speaks to the wider world—those gains are often short-lived.

Consider how little effect the grassroots hacktivist organization Anonymous had with its #OpParis effort after the Paris attacks. The group took down tens of thousands of supposedly pro-Islamic State accounts. They disrupted hashtags with Rickrolls and declared December 11 to be “ISIS Trolling Day” to frustrate the group. Yet they caught innocent accounts in the crossfire of their overly automated, aggressive detection systems. And the accounts they took down mostly resurfaced; some Islamic State-supporting accounts are so tenacious that they can be reborn under slightly different monikers hundreds of times. Legitimate takedown programs face the same challenges.
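
To make that false-positive problem concrete, here is a minimal Python sketch of keyword-only flagging. Everything in it, the watchlist, the handles, the tweets, is invented for illustration; it is not how any real platform’s system works, but crude automation can behave a lot like it.

```python
# A toy sketch of keyword-only flagging. The watchlist, handles, and tweets
# below are invented for illustration.

KEYWORDS = {"isis", "caliphate", "baqiyah"}  # hypothetical watchlist

def naive_flag(tweet_text: str) -> bool:
    """Flag a tweet if it merely mentions a watch-listed term."""
    words = {w.strip("#@.,!").lower() for w in tweet_text.split()}
    return bool(words & KEYWORDS)

tweets = [
    ("@wire_reporter", "Our investigation into ISIS recruiting networks airs tonight."),
    ("@propaganda_acct", "Join the caliphate, brothers."),
]

for handle, text in tweets:
    print(handle, "->", "flagged" if naive_flag(text) else "ignored")
# Both accounts get flagged: mention alone can't separate coverage of the
# group from support for it, which is how innocent accounts end up banned.
```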

Some argue that even if accounts resurface, the effort required to regain a following and build new vocabularies for meaningful contact takes time, at the very least forcing the Islamic State’s social media communications to plateau. But in recent months we’ve come to understand that the Islamic State has developed tactics to avoid rapid detection, and that accounts that escape the bans simply tell the echo chamber of supporters and propaganda consumers where to reorient their attention. That can even mean changing platforms entirely when one social media outlet becomes too hostile or tricky to navigate—all without ISIS missing a beat in its agenda.

Clearly both tech firms and the government are in need of new strategies to combat radicals online. Unfortunately, though, in their overtures to the digital world, many politicians have shown a certain overly bold naïveté. Hillary Clinton called on sites to help the state take down Islamic State accounts, glossing over the precedent for that tactic and the rights-related problems with it. John McCain and company just keep harping on anti-encryption measures, despite concerns from security gurus that black-hat hackers and other unsavory actors could also exploit these quick fixes. (Encryption wasn’t anywhere on the January 8 agenda, but the presence of companies without a major social media stake, like Microsoft, made many suspect that the issue would be part of the conversation.) And most egregiously, Donald Trump (in a classic “series of tubes” moment) has recently proposed just shutting down sections of the internet.

This talk smacks of a silver-bullet mentality—like there’s some secret off switch tech firms know about but have failed to use for lack of the right pressure. This line of thought breeds understandable wariness among techies. Yet if these two interest groups can overcome their mistrust and preconceptions, it’s possible that they can, in fact, build new anti-radical programs (if not universal off switches).

Silicon Valley could do more, for example, to help monitor and compile data on pro-Islamic State material on its sites, including the things it can’t properly take down or report. That’s not to say the information on users should be turned over for prosecution. But if the government is overwhelmed, then a techno-assist in crunching and monitoring, rather than just reflexively trying to stamp out these communiqués, would be of use in understanding the shape of the increasingly discrete radical infosphere. Collaborative analysis could give a boost to cyber-forensics, targeting the most effective or dangerous accounts and using their chatter to foil living criminals.
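
As one purely hypothetical illustration of that kind of analysis (not any agency’s or platform’s actual tooling), flagged accounts could be ranked by how heavily the rest of the network amplifies them, so attention goes to the hubs rather than to every throwaway account. The Python sketch below assumes an invented retweet edge list and uses the networkx library’s PageRank implementation:

```python
# A hypothetical sketch of prioritizing analysis: rank flagged accounts by how
# heavily the rest of the network amplifies them. The edge list is invented;
# in practice it would come from platform data shared under whatever legal
# framework the two sides agree on.
import networkx as nx

# (retweeter, original_poster) pairs among accounts already flagged as pro-ISIS
retweet_edges = [
    ("acct_b", "acct_a"),
    ("acct_c", "acct_a"),
    ("acct_d", "acct_a"),
    ("acct_d", "acct_c"),
    ("acct_e", "acct_b"),
]

graph = nx.DiGraph()
graph.add_edges_from(retweet_edges)

# PageRank over the retweet graph: accounts that are widely and repeatedly
# amplified score highest and become the priority targets for analysts.
scores = nx.pagerank(graph)
for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```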

A mutual boost in understanding of the radical infosphere can also help to cordon it off. Independently or through state actors, social media companies could, for instance, carry out targeted trolling against radical communications channels. Imagine if the full weight of the state were put behind a plan like 2015’s grassroots ISIS-chan scheme. In that bid, folks posted a flood of pictures of an anime caricature of the Islamic State, hoping to neuter its fear-mongering with kawaii cuteness and drive ISIS posts out of top image search results. Unfortunately, even with the power of anime, they couldn’t make a dent in the Islamic State’s search engine optimization. But with an assist from Google via its proprietary algorithms, the state could preserve news on ISIS while effectively trolling the group by replacing its photos and videos of brutality with, well, a doe-eyed cartoon.
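
In the crudest possible terms, that kind of search-side intervention might look like the toy re-ranking below. It is nothing like Google’s actual systems; the URLs, relevance scores, and propaganda flags are all made up. The idea is simply that results flagged as graphic propaganda get demoted while news coverage and counter-content keep their places:

```python
# A toy re-ranking, nothing like Google's real systems: demote image results
# flagged as graphic propaganda so news coverage and counter-content (like the
# ISIS-chan cartoons) surface first. All URLs, scores, and flags are made up.

results = [
    {"url": "https://example.com/brutality-photo.jpg", "relevance": 0.95, "graphic_propaganda": True},
    {"url": "https://example.com/news-photo.jpg", "relevance": 0.90, "graphic_propaganda": False},
    {"url": "https://example.com/isis-chan.png", "relevance": 0.60, "graphic_propaganda": False},
]

DEMOTION = 0.5  # arbitrary penalty factor for flagged imagery

def adjusted_score(result: dict) -> float:
    """Keep relevance for ordinary results; halve it for flagged imagery."""
    score = result["relevance"]
    if result["graphic_propaganda"]:
        score *= DEMOTION
    return score

for r in sorted(results, key=adjusted_score, reverse=True):
    print(f"{adjusted_score(r):.2f}  {r['url']}")
```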

Mutual analysis can also help us better understand who’s at the greatest risk of radicalization. We know that it’s possible to build decent risk profiles—and that ISIS rarely winds up targeting the folks we would have assumed were at risk. From there, we can build on the few-and-far-between digital engagement and de-radicalization efforts conducted today by solitary activists and NGOs.

If I’m vague on details, that’s because I’m just lobbing some quasi-educated spitballs—speculating about what might develop from (hopefully planned) future meetings between D.C. and Silicon Valley. Ideally, this engagement will continue with open minds rather than fears or preconceived agendas. Both the national security apparatus and our tech giants need to build bridges, filling the gaps that have stymied radical-containment efforts until now. Any programs resulting from these talks will never be able to get the Islamic State or other militants off the internet for good (a fact we can only hope participants on the government side appreciate). But even with privacy concerns taken into account, new programs are not as inconceivable as some naysayers imagine. They are needed, and they have the potential to make a dent in modern radicalism; let’s hope they do.
