Global Internet Forum to Counter Terrorism Transparency Report Raises More Questions Than Answers

The Global Internet Forum to Counter Terrorism, which facilitates the removal of “terrorist content” from social media, released its first transparency report in July. While the report is a necessary and welcome first step, much more is needed to allay concerns about the harm to freedom of expression that flows from the opaque decision-making of this private takedown regime.

September 25, 2019

The following originally appeared in Just Security.

As United Nations Special Rapporteur Fionnuala Ní Aoláin warned in her report to the U.N. Human Rights Council earlier this year, the growing trend of deputizing private companies to proactively police loosely defined “terrorist content” generated by users can have a serious impact on fundamental rights and freedoms.

Despite these warnings, regulatory pressure on social media platforms to quickly remove this content is increasingly in vogue, especially in the European Union. In the United States, members of Congress are similarly calling for more action from social media companies. The Global Internet Forum to Counter Terrorism (GIFCT) and its “Hash Sharing Consortium” help facilitate these removals, but the GIFCT’s first transparency report, published in July, doesn’t go far enough in allaying concerns about the harm to freedom of expression that flows from the opaque decision-making of this private takedown regime.

The GIFCT is an industry-led effort launched by Facebook, Microsoft, Twitter and YouTube in response to ongoing regulatory and media pressure from Europe and the United States to stop the online spread of “terrorist content.” Earlier this week, the GIFCT announced that it will become “an independent organization supported by dedicated technology, counterterrorism and operations teams.” But it’s unclear how independent the new GIFCT will be. Governance of the GIFCT will still reside with an industry-led operating board that will consult with an independent advisory committee and a multi-stakeholder forum. Additionally, it appears that the GIFCT will still be financed by social media companies, although an executive director will lead fundraising efforts for particular projects. Lastly, the GIFCT’s announcement that it will work with an independent advisory committee that includes government representatives raises new concerns that nations could misuse their involvement for political purposes.

But under either iteration of the GIFCT, the initiative will continue to develop and deploy technologies to help platforms disrupt online terrorist activity. The centerpiece of this strategic focus is the “hash database” of terrorist content. First announced in 2016, this database contains “hashes,” or digital fingerprints, of “known terrorist images and videos,” which are shared with each company that joins the GIFCT’s Hash Sharing Consortium. Once a company has access to the hash database, it can deploy tools that automatically spot duplicates of hashed content as it is uploaded to the company’s platform.
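To make the mechanics concrete, here is a minimal Python sketch of the general hash-matching pattern. The GIFCT has not published its actual scheme (public descriptions point to perceptual, PhotoDNA-style hashes rather than cryptographic digests), so the digest choice, names, and values below are illustrative assumptions, not the consortium’s implementation.

```python
import hashlib

# Hypothetical stand-in for the hashes a member company receives from
# the consortium; the real database and its format are not public.
shared_hashes: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Digest the raw bytes of an uploaded image or video."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_known_content(media_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint appears in the shared set."""
    return fingerprint(media_bytes) in shared_hashes

# A platform would run this check at upload time; what happens on a
# match (automatic removal vs. human review) varies by company.
```

One caveat: a cryptographic digest like the one above matches only byte-for-byte duplicates, which is why real deployments favor perceptual hashes that survive re-encoding and resizing.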

As of this summer, the database contains over 200,000 unique hashes, and a total of 13 companies are part of the GIFCT’s Hash Sharing Consortium. In many circles, the GIFCT’s hash database is cited as an example worth emulating. The European Commission’s proposed regulation on Preventing the Dissemination of Terrorist Content Online calls on platforms to expand the use of tools like the hash database in order to more effectively stop “known terrorist content from being uploaded on connected platforms.”

But dozens of civil society organizations, including the Brennan Center, have expressed concerns about the use of the hash database given its lack of transparency and its potential for over-removal. For one thing, we know almost nothing about what’s actually in the database, how often content is incorrectly flagged, or how often users file appeals.

Additionally, statements from Facebook suggest its automated removals are almost exclusively focused on content related to ISIS and al-Qaeda, placing Muslim and Middle Eastern communities at greater risk of over-removal. And while hashing makes it easy for platforms to spot exact duplicates of videos and images, automated tools are largely blind to contextual differences, all but guaranteeing that mistakes will happen along the way.
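That blindness is easy to see in code. The sketch below compares two image frames with a perceptual hash (using the third-party imagehash library; the file names and distance threshold are hypothetical, not GIFCT parameters). The comparison operates purely on pixels, so a frame inside a news broadcast and the same frame in a propaganda clip are indistinguishable to it:

```python
from PIL import Image   # pip install pillow
import imagehash        # pip install imagehash

# Perceptual hashes change only slightly when an image is re-encoded
# or resized, so matches are judged by Hamming distance, not equality.
known = imagehash.phash(Image.open("hashed_propaganda_frame.png"))
upload = imagehash.phash(Image.open("news_segment_frame.png"))

# The hash encodes pixels alone: uploader, caption, and intent are
# invisible, so journalism reusing the footage matches all the same.
if known - upload <= 8:  # hypothetical similarity threshold
    print("match: flagged regardless of journalistic context")
```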

We raised similar concerns in the aftermath of the Christchurch attack, when reliance on hashing reached new heights and platforms such as YouTube suspended human review in the name of expeditious removal. As the platform’s chief product officer acknowledged, unsupervised reliance on the algorithm meant that news reporting and other valuable forms of expression were inevitably swept up in the automatic deletions.

In a joint statement announcing the GIFCT’s transparency report, Facebook, Twitter, YouTube and Microsoft said they heard civil society’s calls for greater transparency “loud and clear.” But while the transparency report includes some interesting information, it still falls far short of what is required.

What’s in the transparency report?

First, the transparency report provides new information about how the GIFCT defines “terrorist content.” We now know that at least some images and videos are hashed when a GIFCT member determines that a piece of content “relat[es] to organizations on the United Nations Terrorist Sanctions lists.” Previous announcements indicated that companies identified “extreme” terrorist material based solely on their Community Guidelines or Terms of Service.

It is somewhat reassuring that the GIFCT is relying on lists developed by the U.N. rather than its own judgment of which groups should be treated as terrorists, but reliance on U.N. sanctions lists also highlights the political nature of takedowns, as well as the risk of disparate impact on particular communities. The U.N. special rapporteur for countering terrorism has expressed concerns that sanctions lists are often based on political decisions by U.N. member states and lack sufficient due process protections.

And given that several of the U.N.’s terrorist sanctions lists specifically target al-Qaeda, the Taliban, and ISIS, the downstream risk of over-removal will disproportionately impact Muslim and Middle Eastern communities. As one Twitter employee explained, “[w]hen a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts as well, such as Arabic language broadcasters. Society, in general, accepts the benefit of banning ISIS at the expense of inconveniencing some others.”

Second, the transparency report explains how the GIFCT labels content in the hash database and provides a percentage breakdown among the various categories. According to the report, the hash database contains five types of content:

  1. Imminent Credible Threat (0.4%)
  2. Graphic Violence Against Defenseless People (4.8%)
  3. Glorification of Terrorist Acts (85.5%)
  4. Radicalization, Recruitment, and Instruction (9.1%)
  5. New Zealand Perpetrator Content (0.6%)

This breakdown is concerning because it reveals that most content in the database falls into the most ambiguous categories. On the one hand, imminent credible threats require the “public posting of a specific, imminent, credible threat of violence toward non-combatants and/or civilian infrastructure.” This is the most narrowly defined category, targeted at content we can all agree should come down. Yet it accounts for less than half a percent of the hash database, suggesting that the database mostly contains images and videos that may not be universally recognized as “terrorist content.”

On the other hand, glorification, defined as any content that “glorifies, praises, condones or celebrates attacks after the fact,” accounts for 85.5 percent of all content in the database. The problem is that terms like “glorification,” “praise,” “condone,” and “celebrate” are notoriously imprecise and will almost inevitably capture expressions of general sympathy for or understanding of certain viewpoints, not to mention news reporting. Relying on imprecise labels not only makes it likely that content will be miscategorized; it also provides an opening for misuse.

Recent history shows that governments are already relying on vaguely defined terms to suppress political speech. After Facebook reached an agreement with the Israeli government to address “incitement,” the platform removed content from Palestinian news organizations, civil society groups, journalists and activists. In India, the government pressured Twitter to block the accounts of activists, journalists and academics critical of the government’s military actions in Kashmir, relying on a law aimed at preventing incitement that threatens the security of the country. And in 2016, Facebook deleted the accounts and content of journalists, academics and local publishers reporting on the death of a Kashmiri separatist killed by the Indian army, claiming their content praised or supported terrorism.

Increased reliance on the GIFCT’s hash database will exacerbate these concerns, as automated tools are ill-equipped to account for context or intent but can scale removals to record levels. In one infamous blunder, the Syrian Archive, a civil society organization that seeks to preserve evidence of human rights abuses in Syria, reported that over 100,000 of its videos were removed from YouTube through the use of automated tools. Thus, the fact that most takedowns fall into categories that most easily bleed into the realm of political speech is far from reassuring.

Looking ahead

While this report is a necessary and welcome first step, much more is needed. Below are a few recommendations:

  • Transparency reports should break down content in the hash database by associated terrorist organization. This would help ensure that removals are not focused exclusively on one type of terrorism, such as that promoted by al-Qaeda and ISIS.
  • The GIFCT should disclose any government involvement in the discovery and labeling of “terrorist content.” This is necessary to ensure that governments do not rely on platforms to outsource content removals they would be prohibited from carrying out themselves, particularly given governmental involvement in the GIFCT’s Independent Advisory Committee.
  • The hash database should be subject to an ongoing third-party audit assessing error rates and impact, and the findings and recommendations should be incorporated into future transparency reports. Audits are necessary to ensure the hash database is not automating overbroad removals or silencing vulnerable communities at disproportionate rates. One way to facilitate audits is for the GIFCT to establish a mechanism for credentialed researchers to access the specific content inside the hash database as well as information regarding content removals. (A minimal sketch of the kind of headline error-rate estimate an audit might produce appears after this list.)
  • The GIFCT should establish minimum transparency and accountability standards for all members of the Hash Sharing Consortium, including:
      • Disclosure of whether they automatically remove content whenever it matches a hash or instead flag it for human review.
      • Establishment of a robust appeals process and redress mechanisms.
      • Publication of the total number of posts removed by the company, broken down by the related terrorist organization.
      • Written policies detailing how their approach to content filtering complies with privacy and data security requirements under applicable laws such as the General Data Protection Regulation.
  • For high-profile incidents such as the Christchurch attack, the GIFCT should issue a case study that documents the response. This is particularly important when major platforms suspend procedural safeguards against over-removal, such as human review.
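As flagged in the audit recommendation above, here is a minimal sketch of the headline number a third-party audit might report. Every figure is invented for illustration; the GIFCT has published no error-rate data of any kind.

```python
import math

# Hypothetical audit: a random sample of hash-matched removals is
# re-reviewed by independent auditors. All numbers are invented.
sample_size = 500      # removals sampled for review
wrongly_removed = 35   # sampled items auditors judged non-violating

rate = wrongly_removed / sample_size
# 95% confidence interval via the normal approximation.
margin = 1.96 * math.sqrt(rate * (1 - rate) / sample_size)
print(f"estimated over-removal rate: {rate:.1%} ± {margin:.1%}")
# -> estimated over-removal rate: 7.0% ± 2.2%
```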