Analysis

So, What Does Facebook Take Down? The Secret List of ‘Dangerous’ Individuals and Organizations

The company needs to drop the fiction that U.S. law requires its current approach to content moderation.

Last Updated: November 8, 2021
Published: November 4, 2021
Photo: a person sitting at a computer with the Facebook logo on the screen (Getty/NurPhoto)

This article originally appeared at Just Security.

Over the past few weeks, the public and policymakers have grappled with leaks from a Facebook* whistleblower about the company’s decisions to allow hate speech, misinformation, and other forms of “lawful but awful” content to proliferate. But amid this debate about what Facebook has allowed to remain on its platforms, new information has emerged about the content that the company routinely removes on the theory that it is legally required to do so. Because Facebook maintains that these takedowns are legally mandated, it has erected a wall around debates about those decisions. Its own Oversight Board is barred from hearing cases where a decision to reinstate a post “could” lead to “criminal liability” or “adverse governmental action.” This rule insulates from review both how Facebook interprets its legal obligations and how it evaluates the risks of a country’s reaction.

Recently, the Intercept published the lists of people and groups that Facebook deems “dangerous,” a jumble of well-known terrorist groups, street gangs, drug cartels, media organizations, and far-right militias covered under the company’s Dangerous Individuals and Organizations (DIO) Community Standard. Facebook itself had disclosed only the existence of these lists and their overall policy rationale, not their specific contents. Upon publication of the lists, Facebook defended them with a familiar refrain: its departing Director of Counterterrorism and Dangerous Organizations stated that the company “has a legal obligation to follow U.S. law related to entities designated as foreign terrorist organizations, global terrorists and other sanctioned parties.”

In fact, Facebook’s lists include hundreds of groups and individuals that have not been designated or sanctioned by the U.S. government. Many of them are unsavory, and Facebook is permitted under U.S. law to keep them off its platform, but no U.S. law requires it to do so.

Even for those groups that are drawn from U.S. government designations, the question remains: does U.S. law require social media platforms to take down praise and positive comments about groups and individuals if the government designates them as terrorists? Despite the repeated assertions of Facebook and Instagram, the answer is no.

The issue has significance far beyond the United States. Does Facebook, for example, consider itself bound by Israel’s recent, controversial designation of six human rights groups as terrorist organizations? Only when Facebook acknowledges that its decisions about whom to include on its banned lists are its own can the company (and its peers, many of which also rely on similar lists) address the harms of this approach – a form of content moderation designed to capture a swath of speech that has little relationship to real-world harms.

Facebook’s DIO Lists

Since at least 2018, Facebook – like other social media platforms – has maintained secret lists of groups and individuals that it labels “dangerous” and about which it restricts discussion and content. In June 2021, after a decision from its Oversight Board, Facebook publicly explained that its DIO lists are divided into three tiers. It removes praise, support, and representation of Tier 1 entities, which it describes as those who “engage in serious offline harms,” such as “terrorist, hate, and criminal organizations” and the perpetrators of “terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, multiple murders, or hate crimes.” It permits posts condemning Tier 1 entities and, since February 2020, has claimed to allow “neutral[] discuss[ion]” that is obvious from the post’s context. Tier 2, made up of what Facebook calls “Violent Non-State Actors,” encompasses “[e]ntities that engage in violence against state or military actors but do not generally target civilians.” For these, Facebook removes support for and praise of any violence they commit, but it permits praise of their non-violent efforts. Finally, Tier 3 covers “Militarized Social Movements, Violence-Inducing Conspiracy Networks, and Hate Banned Entities,” which repeatedly violate Facebook’s standards on or off the platform “but have not necessarily engaged in violence to date or advocated for violence against others based on their protected characteristics.”

Until the recent reporting by the Intercept, the public knew only parts of Facebook’s lists. The company has routinely indicated in its transparency reports how many Al Qaeda/ISIS-related posts it takes down and has more recently provided similar numbers for organized hate removals. Perhaps unsurprisingly, the Intercept’s reporting revealed that Facebook’s terrorism list – like the U.S. government’s lists of designated Foreign Terrorist Organizations (FTOs) and Specially Designated Global Terrorists (SDGTs) – is dominated by Muslim, Arab, and South Asian groups and individuals. By contrast, despite the known and growing threat of violence posed by far-right groups, the company largely places them in its lower-scrutiny Tier 3. This emphasis leads to unequal policing of content from these groups while, at the same time, ignoring hate speech and harassment targeting Muslims and other marginalized communities.

Applying these rules also has real consequences for political speech. For example, as Jillian C. York points out, “People living in locales where so-called terrorist groups play a role in governance need to be able to discuss those groups with nuance, and Facebook’s policy doesn’t allow for that.” U.S. designations can also outlast local realities, causing Facebook to limit speech about groups that have forsworn violence. For example, Colombia’s FARC, which signed peace accords in 2016, remains designated by both the U.S. government and Facebook. Afghanistan’s now-ruling Taliban has also promised peace – and has reportedly earned humanitarian aid from the U.S. government in return – but still cannot be discussed openly on the platform. And the lists published by the Intercept include several groups characterized as Houthis or their affiliates, despite the Biden Administration’s removal of the group’s FTO designation in February 2021.

Facebook’s guidance on how to apply its content moderation standards (previously reported by the Guardian and also released by the Intercept) shows that the touchstone for its decisions is not necessarily violence, but violence by certain groups, in certain circumstances. The guidance instructs moderators to allow calls for violence against locations “no smaller than a village,” or for “cruel or unusual punishment” for crimes Facebook recognizes. In a comment to the Intercept, Facebook’s spokesperson provided an example of the latter: “We should kill Osama bin Laden.” Setting aside the question of where best to draw the line in moderating content that advocates violence, these examples provide a window into what kinds of violence (and calls for violence) Facebook considers protected political speech and what kinds it considers too dangerous to host. It is worth noting, for example, that Osama bin Laden was “smaller than a village.”

Debunking Facebook’s Claim of Legal Obligation

While social media platforms rarely articulate their alleged legal obligations with specificity, the likely concern is a set of U.S. laws that prohibit providing “material support” to foreign terrorist groups, such as FTOs and SDGTs. These laws are broadly drafted, to be sure: 18 U.S.C. § 2339B prohibits “knowingly providing material support to foreign terrorist organizations,” and a companion provision defines material support to include “any . . . service.” U.S. prosecutors have deployed these laws aggressively against American Muslims, with online speech making up much of the evidence in some cases.

But Facebook is not a government responsible for enforcing material support statutes and is itself at negligible risk of being prosecuted under these laws, both because its conduct doesn’t meet the legal standards and because it enjoys tremendous political protection relative to the ordinary user.

In its 2010 decision in Holder v. Humanitarian Law Project, the Supreme Court upheld 18 U.S.C. § 2339B in the face of a First Amendment challenge, finding that even support of the peaceful activities of a designated group could be prosecuted under the statute on the theory that such contributions would free up resources for “the group’s violent activity.” While the decision has been rightly criticized for criminalizing humanitarian assistance and limiting political expression, it clearly requires that, to qualify as material support, any service must be provided in coordination with, or at the direction of, a designated group. Providing platform infrastructure to billions of users, some of whom may use it to discuss foreign terrorist groups, wholly lacks that coordination element. Even allowing representation of designated groups or individuals is unlikely to create material support liability: Twitter, for example, has accounts registered to Taliban officials and allowed Hamas and Hezbollah accounts for years. The U.S. government has not attempted to charge the company with any crime related to these accounts.

Nor do the platforms face any serious threat of civil liability. Section 230 of the Communications Decency Act uniformly blocks private parties’ efforts to hold platforms accountable for third-party content under a material support theory. In June, for example, the Ninth Circuit held that Section 230 barred material support claims against Facebook and Twitter because neither site had made a “material contribution” to the development of purported terrorist content that was posted on their platforms, but rather supplied “neutral tools” that third parties had used to post their own content. This decision follows similar outcomes in the Second Circuit and several lower courts, which, along with dismissals on other grounds like lack of causation, suggest a judicial consensus against allowing these claims to proceed. And while current proposals to reform Section 230 may erode this blanket protection by authorizing civil liability for terrorist content, litigants would still face a high bar in establishing that social media companies coordinated with terrorist groups and proximately caused offline violence.

The Bottom Line

Content moderation is no easy task and requires companies to balance various competing interests. Any approach the company takes will likely draw both supporters and critics. But in order to have an honest dialogue about these tradeoffs and who bears the brunt of its content moderation decisions, Facebook needs to set aside the distracting fiction that U.S. law requires its current approach.

*On Oct. 28, Facebook announced that the Facebook company, which owns the social media platforms Facebook, Instagram, and WhatsApp, among others, would be renamed Meta. The social media site Facebook will continue to operate under that name. We have retained the usage of “Facebook” to refer to both the company and the social media site.