
Facebook’s Content Moderation Rules Are a Mess

The Facebook Oversight Board must create more transparent content policies for the sake of its users and its powerful platform.

This originally appeared in Just Security.

The Facebook Oversight Board, in deciding its first cases, overturned five of the company’s six decisions. While the board’s willingness to depart from its corporate creator’s views is noteworthy, the bigger message is that Facebook’s content-moderation rules and its enforcement of them are a mess, and the company needs to clean up its act.

Indeed, many of the issues raised by the board reflect longstanding criticisms from civil society about Facebook’s content-moderation scheme, including the company’s use of automated removal systems, its vague rules and unclear explanations of its decisions, and the need for proportionate enforcement. Facebook’s ongoing inability to enact a clear, consistent, and transparent content-moderation policy may well lead the board to overturn Facebook’s decision to bar former President Donald Trump, a case that the company has voluntarily brought to the board.

Unreliable Algorithms

The board used Facebook’s removal of an Instagram post about breast cancer (a removal the company conceded was incorrect) as an opportunity to express concerns about the company’s use of automation, as well as the sweep of its policy against nudity. Automated removals have long been criticized as more susceptible to error than human review. For example, in the context of Covid-19, algorithms mistakenly flagged posts of accurate health information as spam while leaving up messages containing conspiracy theories.

These mistakes particularly affect Facebook users outside western countries, since Facebook’s algorithms only work in certain languages and automated tools often fail to adequately account for context or political, cultural, linguistic, and social differences. Facebook’s treatment of female breasts as sexual imagery has also been a contentious issue for many years – even photos of breastfeeding mothers and mastectomy scars displayed by cancer survivors were routinely removed until intense public pressure forced a policy change.

The board identified several harms resulting from the company’s automated enforcement of the nudity policy: interference with user expression; disproportionate impact on female users (the company allows male nipples); and, given the important public health goal of raising awareness about breast cancer, harm to women’s right to health. To ameliorate these harms, the board recommended that Facebook improve its automated detection of images that contain text-overlay (in this case, the algorithm had failed to recognize the words “Breast Cancer” that appeared as part of the image). The board also recommended that the company audit a sample of automated enforcement decisions to reverse and learn from mistakes, and include information on automated removals in its transparency reports.

The Oversight Board also made important suggestions for ways that Facebook can improve the process available to users whose posts are removed, including notifying the user of the specific rule they have violated and of the use of automation, and providing users with the right to appeal to a human reviewer. Many of these suggestions are similar to those set out in the Santa Clara Principles, a civil society charter that in 2018 outlined minimum standards for companies engaged in content moderation.

Vague Rules

In another case, the board took aim at Facebook’s Dangerous Individuals and Organizations Policy. It overturned the removal of a post quoting Joseph Goebbels. The company had internally designated Goebbels as a dangerous individual and the Nazi party as a hate organization. The underlying policy, however, was actually developed as a response to calls from the U.S. and European governments for social media companies to do more to combat ISIS and al-Qaeda propaganda.

As the U.N. Special Rapporteur for Counterterrorism and Human Rights and various civil society groups have pointed out, the policy fails to identify all the groups and individuals that the company considers dangerous, and its enforcement has had a near-exclusive focus on content related to ISIS and al-Qaeda, placing Muslim and Middle Eastern communities and Arabic speakers at greater risk of over-removal. Moreover, if the company’s removals follow the pattern of the GIFCT (Global Internet Forum to Counter Terrorism) consortium in which it participates, the vast majority of removals would be for the most ambiguous types of posts: those that “praise” or “support” a listed organization.

In deciding the case, the board focused on specifics about the post to reach the conclusion that it “did not support the Nazi party’s ideology.” But it also found that Facebook’s policy on Dangerous Individuals and Organizations failed to meet the international human rights requirement that “rules restricting expression must be clear, precise and publicly accessible.” The policy was not sufficiently “clear, precise, and publicly accessible” because it did not explain the meaning of key terms such as “praise” and “support,” list the individuals and organizations that have been designated as “dangerous,” or make clear that Facebook requires users to affirmatively spell out that they are not praising or supporting a quote attributed to a dangerous individual. The board recommended that Facebook clarify the terms of its policy and publish a list of dangerous organizations and individuals to close the “information gap” between the publicly available text of the policy and the internal rules applied by Facebook’s content moderators.

This lack of clarity was a theme in several of the cases discussed below as well. In the Covid-19 decision, the board found that Facebook’s vague rules about misinformation and imminent harm did not comply with human rights standards because the “patchwork of policies found on different parts of Facebook’s website make it difficult for users to understand what content is prohibited.”

Context Is Key

In several cases, the board leaned on the geopolitical context of a post to reach its decisions. These cases illustrate that the board’s selection of the relevant “context” to consider – which is not explained in its decisions – is often determinative of the outcome of the case, and the board has tended to view the relevant context more narrowly than has the company.

The board overturned Facebook’s removal of a post from Myanmar which pointed to “the lack of response by Muslims generally to the treatment of Uyghur Muslims in China, compared to killings in response to cartoon depictions of the Prophet Muhammad in France” to argue that there is something wrong with Muslims’ mindset or psychology. The company had acted under its hate speech policy, which prohibits generalized statements of inferiority about a religious group based on mental deficiencies.

The board, however, concluded that statements referring to Muslims as mentally unwell or psychologically unstable, while offensive, are “not a strong part” of the “common and sometimes severe” anti-Muslim rhetoric in Myanmar. If the board had taken a wider view of context, it could well have reached the opposite conclusion. Facebook’s failure to control anti-Muslim hate speech in Myanmar has been linked to the genocide of Rohingya Muslims in the country, violence that continues to this day.

The board also overturned Facebook’s removal of a post criticizing the French government for refusing to authorize the use of hydroxychloroquine, which the user called a “cure” for Covid-19. Because the drug is not available in France without a prescription and the post did not encourage people to buy or take drugs without a prescription, the board determined that the post did not create a risk of imminent harm, as required by the violence and incitement policy under which it was removed. Here again, if the board had looked at the broader issue of misinformation around Covid-19, or even around hydroxychloroquine, it could well have reached the opposite conclusion.

In a decision released on February 12, 2021, the board overturned Facebook’s decision to remove a post from India that the company had treated as a veiled threat prohibited under its violence and incitement policy. The post from October 2020, depicting a sheathed sword, said “if the tongue of the kafir starts against the Prophet, then the sword should be taken out of the sheath,” and the message included hashtags calling for the boycott of French products and calling President Emmanuel Macron of France the devil.

For Facebook, the relevant context was “religious tensions” in India related to the Charlie Hebdo trials occurring in France at the time of the post and elections in the Indian state of Bihar, which were held from October through November, as well as rising violence against Muslims and the possibility of retaliatory violence by Muslims. A majority of the board looked at the same events, but with greater specificity: the protests in India following Macron’s statements were mostly nonviolent, and the elections in Bihar were not marked by violence against persons based on their religion. Moreover, while the board viewed violence against the Muslim minority in India as “a pressing concern,” it did not give the same weight to the prospect of “retaliatory violence by Muslims.” Overall, the majority interpreted the references to the boycott of French products as a call to “non-violent protest and part of discourse on current political events.”

Proportionality

The board also grappled with the issue of proportionality. In the Covid-19 case, it found that Facebook’s removal of the post was not proportionate because the company did not explain how removal constituted the least intrusive means of protecting public health.

In another case, in which it upheld Facebook’s removal of a racial slur that dehumanized Azerbaijanis, the board split on the issue. The majority concluded that Facebook’s removal was proportionate because less severe interventions, such as placing a label or a warning screen on the post, would not have provided the same protection against offline harms, the risk of which was particularly severe because of an ongoing armed conflict in the Nagorno-Karabakh region.

The minority of the board – whose opinions were summarized in the decision – argued otherwise. One member thought the risk of violence was relatively remote and, given that the removal of the post led to the takedown of speech on a matter of public concern, less intrusive measures should have been considered. Another member believed that the post would not contribute to military or other violent action. It is difficult to distinguish this case from the Covid-19 decision, except perhaps on the basis of the type of harm at issue: the prospect of physical violence in the Azerbaijan case versus the more diffuse threat of disinformation about Covid-19.

Implications for the Trump Case

Figuring out what these decisions mean for what is likely to be one of the board’s biggest cases – its review of Facebook’s decision to indefinitely suspend Donald Trump from the platform after removing two missives he posted during the Jan. 6 riot at the U.S. Capitol – is like reading tea leaves. Both posts instructed the rioters to “go home,” but also reiterated Trump’s false assertions that the election had been “stolen from us” and “unceremoniously viciously stripped away from great patriots who have been badly unfairly treated for so long.”

In the Goebbels decision, the board found that Facebook’s rules on Dangerous Individuals and Organizations – the basis for removing Trump’s posts – failed the international standard of legality. The board has also been attentive to the need to avoid interfering with public discourse (e.g., on government policy on Covid-19 or objections to Macron’s treatment of Muslims in France). That concern is particularly weighty when the speech at issue is that of the president of the United States.

Context, which has played such an important role in the board’s decisions thus far, will undoubtedly be key. But in the case of Trump, it probably will not matter whether the board looks at the long arc of his attempts to undermine the election and rile up his followers or only the events of Jan. 6. Both show the danger he posed.

Much is likely to hinge on how the board evaluates the proportionality of Facebook’s decision with regard to the indefinite suspension. Aside from an outright ban, an indefinite account suspension is one of Facebook’s most severe enforcement tools, particularly when compared to post removals, labels, warning screens, or other measures it might take to reduce dissemination. And Facebook has not publicly explained the grounds on which it suspended Trump’s account, except to say that the suspension, when weighed against the values underpinning its Community Standards (voice, authenticity, safety, privacy, and dignity), was “necessary and right” in order to prioritize “safety in a period of civil unrest in the US with no set end date.” This seems a thin basis for a momentous decision, especially one being reviewed by a board that has placed so much emphasis on the need for clear rules.

At the end of the day, though, as the Knight Institute pointed out in its excellent submission to the board, the bigger issue is not whether Trump was rightly kicked off Facebook, but the company’s responsibility for its “decisions about design, which determine which speech proliferates on Facebook’s platform, how quickly it spreads, who sees it, and in what contexts they see it.” Although the company has sought to exclude this issue from the board’s jurisdiction, the board must push Facebook to address it. Otherwise, the board will just be addressing the symptoms of the problem, not the cause.