
Facebook’s New ‘Dangerous Individuals and Organizations’ Policy Brings More Questions Than Answers

The platform needs to provide better rules, transparency, and enforcement.

This piece originally appeared in Just Security.

Four of the Facebook Oversight Board’s 13 decisions so far have taken aim at the platform’s Dangerous Individuals and Organizations Community Standard (DIO Standard). The policy has long been criticized by civil society for being opaque and overbroad and for targeting political speech by Muslim users – such as posts by activists in Kashmir and Palestine, and commentary on the U.S. drone strike on Iranian official Qassem Soleimani. The company has responded with a series of clarifications and policy revisions, but the rules require a fundamental rethink and far more transparency about enforcement, including about governmental pressure.

Targeting Terrorist Content

In 2015, as U.S. and European governments became concerned about ISIS’s facility with social media and its ability to attract Muslims from the United States and Europe to its cause, the Obama administration and its European counterparts began pressing social media platforms to take action. While initially arguing that there was no “magic algorithm” for identifying “terrorist content,” the major companies quickly came around, and by 2016 Facebook and Twitter were highlighting how many ISIS accounts and posts they had removed. That year, Facebook, Microsoft, and Twitter also announced that they would create a shared database of the digital fingerprints of terrorist videos and images, which evolved into the Global Internet Forum to Counter Terrorism.

Often these actions take place under Facebook’s DIO Standard, which prohibits “representation” of and “praise” and “support” for “dangerous individuals and organizations.” The board’s criticisms of the standard have focused on the lack of clarity about the meaning of these terms, and it has called on the company to identify the groups and individuals it considers to be “dangerous.”

The board grappled with the DIO Standard in one of its first decisions, an appeal of the company’s removal of a post that quoted Joseph Goebbels. While the user did not provide any commentary on the quote, comments on the post suggested it “sought to compare the presidency of Donald Trump to the Nazi regime.” The board sharply criticized the DIO Standard for failing to define key terms, particularly what constituted “praise” and “support.” Among other things, Facebook had not made it clear to users that they must actively disavow a quote attributed to a dangerous individual in any post, nor had it made public the individuals or organizations it had deemed “dangerous.”

Again, when it considered Facebook’s deletion of a video that praised Indian farmers protesting against India’s ruling political party, the Bharatiya Janata Party (BJP), and the Hindu nationalist organization Rashtriya Swayamsevak Sangh (RSS), the board reiterated its concern that key terms (such as praise and support) in the DIO Standard remained undefined, and that the standard was not available to users in Punjabi. The board also raised questions about whether Indian officials had leaned on Facebook to remove “content around the farmer’s protests, content critical of the government over its treatment of farmers, or content concerning the protests.” But it was stymied in addressing the issue because Facebook refused to provide information about its communications with Indian officials.

Only in its decision on the suspension of Donald Trump’s account did the board uphold a Facebook decision under the DIO Standard. The board determined that the suspension of Trump’s account in the wake of the Jan. 6 attack on the Capitol by his supporters was consistent with the DIO Standard because Facebook had designated the attack a “violent event” and Trump’s comments – such as “We love you. You’re very special.” – amounted to praise of the event and its perpetrators. Ongoing violence, the risk of further violence, the size of Trump’s audience, and his influence as head of state justified the imposition of a suspension. As we detailed in a previous Just Security piece, the board declined to comment on the DIO Standard’s lack of criteria for violent events that fall within its scope.

Facebook’s Response

On June 23, Facebook updated its DIO Standard. The new standard creates three tiers of dangerous organizations, levels that are tied primarily to the degree of harm the company attributes to each, with violence as the touchstone and greater restrictions placed on groups that engage in actual offline violence.

Tier 1 covers groups that engage in “terrorism, organized hate, large-scale criminal activity, mass and multiple murderers” as well as “violating violent events.” These are groups that impose “serious offline harms” by “organizing or advocating for violence against civilians, repeatedly dehumanizing or advocating for harm against people based on protected characteristics, or engaging in systematic criminal operations.”

Tier 2, “Violent Non-State Actors,” consists of “[e]ntities that engage in violence against state or military actors but do not generally target civilians.” Tier 3 consists of groups that routinely violate Facebook’s Hate Speech or DIO Standards on or off the platform, “but have not necessarily engaged in violence to date or advocated for violence against others based on their protected characteristics.” Examples include “Militarized Social Movements, Violence-Inducing Conspiracy Networks, and Hate Banned Entities,” which the DIO Standard now defines.

Facebook treats speech from and about Tier 1 groups most severely, removing praise, support, and representation of the groups. This would cover “speaking positively” about them, “legitimizing” their cause, or “aligning oneself ideologically.” This seems to be a continuation of the company’s current policy, under which, for example, a post arguing that al-Qaeda’s objective of removing foreign troops from Saudi Arabia was justified would be forbidden. Facebook also tries to create an exception for posts that “report on, condemn, or neutrally discuss” Tier 1 groups and their activities. As is already the case, news reports on al-Qaeda’s goals would be covered by this exception, but it is less clear how the neutrality of an individual user’s comments would be evaluated. Until now, as the Goebbels case indicated, Facebook required a user to disavow the group to protect content from removal. It is not clear whether this is still required under the neutral discussion exception.

For Tier 2 groups, Facebook removes support for the groups and praise of any violent acts, but not praise of their non-violent actions. For example, social programs or human rights issues supported by a violent non-state actor could be praised, while its violent clashes with government officials or advocacy of violent overthrow could not. For Tier 3, Facebook removes representation only, permitting praise and support. Thus, it seems that QAnon cannot have a Facebook page or event, but users can praise it and call on their friends to support the movement with no fear of sanction.

Stuck in the Middle?

Despite the extensive changes to the DIO Standard, the board signaled in unusually strong language in its most recent decision, published July 10, that the changes made to date do not sufficiently address its recurring concerns.

The board determined that Facebook wrongly removed a post encouraging discussion of the solitary confinement of a leader of the Kurdistan Workers’ Party (PKK), a group that has used violence in support of its goal of Kurdish secession from Turkey. The board criticized Facebook for purportedly misplacing internal guidance that was supposed to allow content discussing confinement conditions of individuals on the DIO list. It reiterated its criticism that, without publication of this and any similar exceptions, the terms “praise” and “support” remain difficult for users to understand. In its policy recommendations, the board showed a new willingness to dictate policy substance to Facebook, articulating specific categories of speech, defined in detail, that should be protected from removal under the DIO Standard: discussion of rights protected by United Nations human rights conventions, discussions on allegations of human rights violations, and calls for accountability for human rights violations and abuses.

The board also expressed continuing concern that governments may be able to use the DIO Standard to suppress legitimate user content that criticizes government actions. It noted that while Facebook reports the number of requests from government officials to take down content that violates local law, it does not report on requests by government officials to remove content for purported violations of Facebook’s Community Standards. The board recommended that this information be provided to users whose content is removed at a government’s request, as well as made public in aggregate numbers in the company’s transparency reports.

This aspect of the board’s recommendation highlighted a significant gap in Facebook’s transparency reports. Although government requests to take down content are evaluated first under Facebook’s own standards and second – if there is no policy violation – under local law, Facebook reports only on removals that occur at the second level. This hair-splitting exercise potentially obscures from public view a tremendous volume of government requests to remove content.

What’s Next?

It is obvious that Facebook needs to do more to respond to this series of cases.

First, terms like “praise” and “support” are overbroad and likely suppress significant political speech. A narrower rule against praising violence (like Twitter’s) would be simpler to administer and easier for users to understand, and it would give more scope to political speech, in keeping with the company’s stated commitment to “voice.” By focusing on the content of posts, considered in appropriate context, rather than on the groups or individuals they reference, Facebook may also be able to avoid reliance on (and disclosure of) a list of banned groups and individuals.

Second, the company needs to do better at ensuring that its rules and processes are understood by users and consistently applied. In the Trump case, Facebook’s initial contention that the former president had not – prior to Jan. 6, 2021 – violated any of its community standards was widely considered disingenuous. Its subsequent discovery that Trump had in fact violated its rule against harassment by fat-shaming an attendee at one of his rallies hardly helped matters. In the most recent case, Facebook claimed that its internal guidance protecting discussions of human rights violations against groups and individuals covered by its DIO Standard was mysteriously lost. These incidents hardly inspire confidence that the company knows what it is doing when it removes posts.

Finally, Facebook must come clean about removals that are initiated by governments but carried out under its Community Standards. By refusing to provide information about such interactions with governments, the company has thus far effectively hidden the scope of government influence on its content moderation. This significant loophole, coupled with Facebook’s repeated refusal to answer the board’s questions about government requests, enables governments to exploit Facebook’s Community Standards to quash public dissent.

As the board and civil society groups (like the Brennan Center, where we both work) continue to push Facebook for more information and better rules and enforcement, it is worth noting that the company provides more information about removals than its peer platforms. They all need to do better.