
EU ‘Terrorist Content’ Proposal Sets Dire Example for Free Speech Online

The European Commission’s draft regulation on “terrorist content” would effectively create an automatic takedown regime with practically no due process protections, and its requirements are likely to influence how social media platforms will police the speech of Americans in the future.

March 5, 2019

The following originally appeared in Just Security.

Countries around the world are seeking to exert more control over content on the internet – and, by extension, their citizens. Europe, unfortunately, is providing them with a blueprint. In two recent examples, overbroad regulations incentivize social media platforms to remove speech first and ask questions later.

Last year, the German parliament enacted the NetzDG law, requiring large social media sites to remove posts that violate certain provisions of the German criminal code, including broad prohibitions on “defamation of religion,” “hate speech,” and “insult.” The removal obligation is triggered not by a court order, but by complaints from users. Companies must remove the posts within 24 hours or seven days (depending on whether the post is considered “manifestly unlawful” or merely “unlawful”), and face steep fines if they fail to do so.

While NetzDG required companies to create mechanisms to lodge complaints about posts, it failed to include parallel requirements for challenging removals. Within hours after it went into effect, warnings that the law would sweep too broadly were vindicated: Twitter deleted tweets from a far-right politician, as well as those of a satirical magazine that made fun of her. Major political parties in Germany have already recognized the need for changes.

Undeterred by the problems with NetzDG, however, the European Commission in September introduced a draft regulation on “terrorist content” that would lead to further censoring of the internet by governments, with little regard for freedom of expression or due process. The proposed regulation, which has been severely criticized by three United Nations special rapporteurs and by civil society organizations, would require a range of platforms to remove a post within one hour of receiving notice from an authorized national authority that it constitutes illegal “terrorist content.”

Incentivizing Overbroad Removals of Speech

The draft regulation, which the European Commission is trying to get through the European Parliament before parliamentary elections in May, relies on an overly expansive definition of terrorist content. The first part of the definition – “[i]nciting or advocating, including by glorifying, the commission of terrorist offences, thereby causing a danger that such acts be committed” – not only includes the poorly defined concept of “glorification” of terrorism, but could easily be read to encompass indirect advocacy more broadly. And, while it is framed in the language of incitement, which is accepted in international law as a proper object for regulation, it fails to require a robust connection between speech and action (e.g., imminence or likelihood of harm). Instead, it requires only that the speech cause “a danger” that offences would be committed.

According to other parts of the definition, terrorist content includes speech that could be construed as “encouraging the contribution to terrorist offences” and “promoting the activities of a terrorist group.” These categories are sufficiently broad to encompass expressions of general sympathy or understanding for certain acts or viewpoints, particularly as there is no requirement of intent to further terrorism. Indeed, the definition is even expansive enough to encompass press reporting on the activities of terrorist groups.

By requiring that posts be removed within one hour of notification, the draft regulation would effectively create an automatic takedown regime with practically no due process protections. Takedown notices don’t have to be issued by a court or other independent body; any number of law enforcement agencies or administrative bodies would be authorized to demand that platforms remove certain types of speech. And the short timeline, as the U.N.’s human rights special rapporteurs explained, has enormous practical implications:

The accelerated timeline does not allow Internet platforms sufficient time to examine the request in any detail, [which would be] required to comply with the sub-contracted human rights responsibilities that fall to them by virtue of State mandates on takedown. This deficiency is compounded by the fact that even if the respective company or the content provider decide to request a detailed statement of reasons, this does not suspend the execution of the order, nor does the filing of an appeal. Both the extremely short timeframe and the threat of penalties are likely to incentivize platforms to err on the side of caution and remove content that is legitimate or lawful.

The proposed regulation requires companies to establish internal appeals procedures for takedowns, but since these appeals would be judged under the same substantively vague standards, they are unlikely to ameliorate the defects in the regulation. And it makes no provision for after-the-fact judicial review, relegating mention of this important safeguard to the accompanying explanatory memorandum.

Even if such review were added, however, it would likely have only limited practical impact. Few users will invest the time or resources to initiate legal proceedings to reinstate their posts. In practice, most removal orders will go unchallenged, allowing governments to do an end-run around legal restrictions on their censorship authorities.

The Danger of Algorithms and Discriminatory Enforcement

To implement these provisions, large companies essentially would be forced to use algorithms to filter content. While companies like Facebook and Twitter have touted their use of such tools to identify some types of content and accounts for removal, algorithms are not suited to many of the situations for which they are proposed. In many instances, they reflect bias present in the data used to train them.

Algorithms are also terrible at understanding context. The type of speech that the draft regulation asks platforms to remove — “terrorist content” — is notoriously difficult to define, and efforts to identify it will likely capture many types of political speech. Indeed, the highest reported accuracy rates for natural language processing programs are about 80 percent, meaning that at least one in five posts would be categorized inaccurately.

Groups working to document human rights abuses, for example, oppose the proposed regulation based on experience:

[The] use of machine-learning algorithms to detect “extremist content” has created hundreds of thousands of false positives and damaged this body of human rights content. One group, Syrian Archive, observed that after Google instituted a machine-learning algorithm to “more quickly identify and remove extremist and terrorism-related content” in June of 2017, hundreds of thousands of videos went missing. This included not only videos created by perpetrators of human rights abuses, but also documentation of shellings by victims, and even videos of demonstrations.

Finally, the draft regulation risks an escalation of discriminatory enforcement. Despite the prevalence of far-right violence, the terrorism label is by and large reserved for violence committed in the name of Islam, and enforcement of speech restrictions seems to be similarly focused.

For example, Facebook’s 2018 report on its efforts to address “terrorist content” under its community standards indicated that it had removed 1.9 million pieces of ISIS and al-Qaeda content in the first quarter of the year, but said nothing about other types of terrorism. This attention to one type of political violence over all others creates serious risks that the speech of Muslims and Arabs will be disproportionately suppressed.

Attempts to Address Concerns

The parliamentary committees and other bodies charged with providing input on the draft regulation have tried to address the concerns listed above. In particular, they have suggested that only judicial authorities should be authorized to issue removal orders, articulated the right of individuals to petition courts to challenge removals, and replaced the one-hour removal requirement with more flexible timeframes.

These suggestions would mitigate several of the harms of the current draft. Unfortunately, the proposals aimed at narrowing the definition of “terrorist content” do not go far enough, and still make too much speech potentially subject to regulation. And proposed modifications to the obligation to proactively deploy algorithms seem to keep the essence of that requirement in place.

The serious deficiencies in the European approach to terrorist content have broad implications beyond its borders. NetzDG has already served as a template for authoritarian governments and others, including Russia, Singapore, the Philippines, Venezuela, and Kenya, as they seek to crack down on speech. The draft regulation bears close scrutiny in the U.S. as well: its requirements are likely to influence how platforms will police the speech of Americans in the future.