
So, What Does Facebook Take Down? The Secret List of ‘Dangerous’ Individuals and Organizations

The company needs to drop the fiction that U.S. law requires its current approach to content moderation.

Last Updated: November 8, 2021
Published: November 4, 2021

This article originally appeared at Just Security.

Over the past few weeks, the public and policymakers have grappled with leaks from a Facebook* whistleblower about the company’s decisions to allow hate speech, misinformation, and other forms of “lawful but awful” content to proliferate. But amid this debate about what Facebook has allowed to remain on its platforms, new information has emerged about the content that the company routinely removes from the platform on the theory that it is legally required to do so. Because Facebook maintains that these takedowns are legally mandated, it has erected a wall around debates about those decisions. Its own Oversight Board is barred from hearing cases where a decision to reinstate a post “could” lead to “criminal liability” or “adverse governmental action.” This rule insulates from review decisions about how Facebook interprets its legal obligations and how it evaluates the risks of a country’s reaction.

Recently, the Intercept published the lists of people and groups that Facebook deems “dangerous,” a jumble of well-known terrorist groups, street gangs, drug cartels, media organizations, and far-right militias that are covered under the company’s Dangerous Individuals and Organizations (DIO) Community Standard. Facebook itself had disclosed only the existence of these lists and their overall policy rationale, not the specific content of the lists. Upon publication of the lists, Facebook defended them with a familiar refrain: its departing Director of Counterterrorism and Dangerous Organizations stated that the company “has a legal obligation to follow U.S. law related to entities designated as foreign terrorist organizations, global terrorists and other sanctioned parties.”

In fact, Facebook’s lists include hundreds of groups and individuals that have not been designated or sanctioned by the U.S. government. Many of them are unsavory, and under U.S. law Facebook is legally permitted to keep them off its platform, but no U.S. law requires it to do so.

Even for those groups that are drawn from U.S. government designations, the question remains: does U.S. law require social media platforms to take down praise and positive comments about groups and individuals if the government designates them as terrorists? Despite the repeated assertions of Facebook and Instagram, the answer is no.

The issue has significance far beyond the United States too. Does Facebook, for example, consider itself bound by Israel’s recent, controversial designation of six human rights groups as terrorist organizations? Only when Facebook acknowledges that its decisions about whom to include on its banned lists are its own can the company (and its peers, many of which also rely on similar lists) address the harms from this approach – a form of content moderation that is designed to capture a swath of speech that has little relationship to real-world harms.

Facebook’s DIO Lists

Since at least 2018, Facebook – like other social media platforms – has kept secret lists of groups and individuals that it labels “dangerous” and about which it restricts discussion and content. In June 2021, after a decision from its Oversight Board, Facebook publicly explained that its DIO lists are divided into three tiers. It removes praise, support, and representation of Tier 1 entities, which it describes as those who “engage in serious offline harms” such as “terrorist, hate, and criminal organizations” and the perpetrators of “terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, multiple murders, or hate crimes.” It permits posts condemning Tier 1 entities and since February 2020 has claimed to allow “neutral[] discuss[ion]” that is obvious from the post’s context. Tier 2, made up of what Facebook calls “Violent Non-State Actors,” encompasses “[e]ntities that engage in violence against state or military actors but do not generally target civilians.” For these, Facebook removes support and praise of any violence they commit, but it permits praise of their non-violent efforts. Finally, Tier 3 is categorized as “Militarized Social Movements, Violence-Inducing Conspiracy Networks, and Hate Banned Entities,” which repeatedly violate Facebook Standards on or off the platform, “but have not necessarily engaged in violence to date or advocated for violence against others based on their protected characteristics.”

Until the recent reporting by the Intercept, the public only knew about parts of Facebook’s lists. The company has routinely indicated in its transparency reports how many Al Qaeda/ISIS-related posts it takes down and has more recently provided similar numbers for organized hate removals. Perhaps unsurprisingly, the Intercept’s reporting revealed that Facebook’s terrorism list – like the U.S. government’s lists of designated Foreign Terrorist Organizations (FTOs) and Specially Designated Global Terrorists (SDGTs) – is dominated by Muslim, Arab, and South Asian groups and individuals. By contrast, despite the known and growing threat of violence posed by far-right groups, the company largely places them in its lower-scrutiny Tier 3. This emphasis leads to unequal policing of content from these groups while, at the same time, ignoring hate speech and harassment targeting Muslims and other marginalized communities.

Applying these rules also has real consequences for political speech. For example, as Jillian C. York points out, “People living in locales where so-called terrorist groups play a role in governance need to be able to discuss those groups with nuance, and Facebook’s policy doesn’t allow for that.” U.S. designations can also outlast local realities, causing Facebook to limit speech about groups that have forsworn violence. For example, Colombia’s FARC, which signed peace accords in 2016, remains designated by both the U.S. government and Facebook. Afghanistan’s now-ruling Taliban has also promised peace – and has reportedly earned humanitarian aid from the U.S. government in return – but still cannot be discussed openly on the platform. And the lists published by the Intercept include several groups characterized as Houthis or their affiliates, despite the Biden Administration’s removal of the group’s FTO designation in February 2021.

Facebook guidance (previously reported by the Guardian) on how to apply its content moderation standards, also released by the Intercept, shows that the touchstone for its decisions is not necessarily violence, but violence by certain groups, in certain circumstances. The guidance instructs moderators to allow calls for violence against locations “no smaller than a village,” or for “cruel or unusual punishment” for crimes Facebook recognizes. In a comment to the Intercept, Facebook’s spokesperson provided an example of the latter: “We should kill Osama bin Laden.” Setting aside the question of where best to draw the line in moderating content that advocates violence, these examples provide a window into what kinds of violence (and calls for violence) Facebook considers protected political speech and what kinds it considers too dangerous to host. It is worth noting, for example, that Osama bin Laden was “smaller than a village.”

Debunking Facebook’s Claim of Legal Obligation

While social media platforms rarely articulate their alleged legal obligations with specificity, the likely concern is a set of U.S. laws that prohibit providing “material support” to foreign terrorist groups, such as FTOs and SDGTs. These laws are broadly drafted, to be sure: 18 U.S.C. § 2339B prohibits “knowingly providing material support to foreign terrorist organizations,” and a companion provision defines material support to include “any . . . service.” U.S. prosecutors have deployed these laws aggressively against American Muslims, with online speech making up much of the evidence in some cases.

But Facebook is not a government responsible for enforcing material support statutes and is itself at negligible risk of being prosecuted under these laws, both because its conduct doesn’t meet the legal standards and because it enjoys tremendous political protection relative to the ordinary user.

In its 2010 decision in Holder v. Humanitarian Law Project, the Supreme Court upheld 18 U.S.C. § 2339B in the face of a First Amendment challenge, finding that even support of the peaceful activities of a designated group could be prosecuted under the statute on the theory that these contributions would free up resources for “the group’s violent activity.” While the decision has been rightly criticized for criminalizing humanitarian assistance and limiting political expression, it clearly requires that, to qualify as material support, any service must be provided in coordination with or at the direction of a designated group. Providing platform infrastructure to billions of users, some of whom may use it to discuss foreign terrorist groups, wholly lacks the necessary coordination element. Even allowing representation of designated groups or individuals is unlikely to create material support liability: Twitter, for example, has accounts registered to Taliban officials and allowed Hamas and Hezbollah accounts for years. The U.S. government has not attempted to charge the company with any crime related to these accounts.

Nor do the platforms face any serious threat of civil liability. Section 230 of the Communications Decency Act uniformly blocks private parties’ efforts to hold platforms accountable for third-party content under a material support theory. In June, for example, the Ninth Circuit held that Section 230 barred material support claims against Facebook and Twitter because neither site had made a “material contribution” to the development of purported terrorist content that was posted on their platforms, but rather supplied “neutral tools” that third parties had used to post their own content. This decision follows similar outcomes in the Second Circuit and several lower courts, which, along with dismissals on other grounds like lack of causation, suggest a judicial consensus against allowing these claims to proceed. And while current proposals to reform Section 230 may erode this blanket protection by authorizing civil liability for terrorist content, litigants would still face a high bar in establishing that social media companies coordinated with terrorist groups and proximately caused offline violence.

The Bottom Line

Content moderation is no easy task and requires companies to balance various competing interests. Any approach the company takes will likely draw both supporters and critics. But in order to have an honest dialogue about these tradeoffs and who bears the brunt of its content moderation decisions, Facebook needs to set aside the distracting fiction that U.S. law requires its current approach.

*On Oct. 28, Facebook announced that the Facebook company, which owns the social media platforms Facebook, Instagram, and WhatsApp, among others, would be renamed Meta. The social media site Facebook will continue to operate under that name. We have retained the usage of “Facebook” to refer to both the company and the social media site.