
Why the Social Media Giants Can’t Ever Wipe Out ISIS Propaganda

Asking social media companies to monitor and report all terrorist content is a nearly impossible task.

December 10, 2015

Cross-posted on Fortune

Last week’s brutal attack in San Bernardino led to renewed calls for requiring social media companies to root out terrorist content. Republican presidential candidate Donald Trump suggested asking Bill Gates to “close that Internet up in some way” to combat online recruitment by terrorists. Democratic presidential candidate Hillary Clinton urged tech companies in Silicon Valley to “disrupt” ISIS by blocking or taking down militant websites and videos. And two senators just introduced a bill that would require tech companies to alert federal law enforcement about online terrorist activity.

The tough talk sounds appealing, but making the Internet harder for terrorists to exploit also has costs for America. Wholesale monitoring would seriously undermine the Internet’s promise of a free and open platform for the exchange of knowledge and ideas. It is also impractical and unlikely to keep terrorists off social media.

Social media giants like Facebook, Twitter, and YouTube rely on users to identify offensive content, including terrorism-related material, which is then reviewed and may be deleted. The exact standards for making these assessments are not public, and some companies are more proactive than others in taking down flagged material. And even though companies are not legally required to report to the government, according to FBI Director James Comey, they are “pretty good” about telling the government when they come across something of serious concern.

But asking companies to actively monitor all traffic that flows through their sites is a gargantuan, and likely impossible, task: some 400 hours of video are uploaded to YouTube every minute; Twitter estimates that users post over 500 million tweets per day; and on Facebook, users post 293,000 status updates and upload 136,000 photos every 60 seconds.

Nor is terrorist content easily identifiable. There are up to 150 definitions of terrorism in U.S. law, and trying to get a handle on words and images rather than actions makes the task infinitely more complex. A Twitter user affiliated with a known terrorist group exhorting Muslims in the West to kill their fellow citizens might easily qualify as a “terrorist.” But what about a video showing the force-feeding of Guantanamo inmates? It could theoretically make someone angry enough to turn to terrorism (in a court case, the government argued against releasing photos and videos of a detainee at that prison because they could be used by terrorist groups to incite anti-American sentiment, recruit members, and raise funds). And what about the case of Dylann Roof, who allegedly killed nine at an African-American church after reportedly reading the website of the segregationist Council of Conservative Citizens? Does that mean the site should be blocked?

If social media companies must identify and remove terrorist content, they will err on the side of caution. They are unlikely to risk liability, not to mention the public relations fallout, if any content on their sites turns out to be even remotely related to a terrorist plot. Facebook reportedly has already taken down informational pages about terrorist groups as part of the push to remove content that could be regarded as promoting terrorism.

Even actual terrorist propaganda has value. Researchers and analysts use it to develop responses to terrorism. Just this summer, the British Library prompted an outcry from academics when it announced that it would not acquire or give access to a digital archive of original Taliban print and audio materials, fearing it would run afoul of U.K. laws banning terrorist material. Denying this type of access is counterproductive, and companies can’t very well examine the motives of every person who wants to look at ISIS’s Dabiq magazine.

Knowing that companies are watching over Facebook posts and tweets to check whether they cross some hazy line into terrorist content will discourage the free exchange of views and information. Given that terrorism is a political crime, talk of U.S. foreign policy, the wars in the Middle East, climate change, race, religion, and the like – the very topics that free speech protections exist to safeguard – would be particularly suspect. The global marketplace of ideas, which the Internet embodies, would shrink.

Attempts at ridding the Internet of terrorist material are futile, leading to an endless game of whack-a-mole. If popular social media platforms shut down terrorist accounts today, new ones will pop up tomorrow. Twitter has repeatedly tried to cut off the pro-ISIS account of a group called Asawitiri Media – now on its 335th iteration.

Of course, it’s scary that propaganda posted on the Internet might play a part in motivating the rare terrorist attacks of the last several years. But charging private companies with actively scanning our online lives on the basis of vaguely defined notions of terrorism is not the right solution. The right one was provided to us in a 1927 decision of the U.S. Supreme Court: “the remedy to be applied is more speech, not enforced silence.”

(Photo: Flickr/JasonHowle)