Double Standards in Social Media Content Moderation

Summary: Platform rules often subject marginalized communities to heightened scrutiny while providing them with too little protection from harm.

Published: August 4, 2021

Social media plays an important role in building community and connecting people with the wider world. At the same time, the private rules that govern access to these services can result in divergent experiences across different populations. While social media companies dress their content moderation policies in the language of human rights, their actions are largely driven by business priorities, the threat of government regulation, and outside pressure from the public and the mainstream media.[1] As a result, the veneer of a rule-based system actually conceals a cascade of discretionary decisions. Where platforms are looking to drive growth or facilitate a favorable regulatory environment, content moderation policy is often either an afterthought or a tool employed to curry favor.[2] All too often, the speech of communities of color, women, LGBTQ+ communities, and religious minorities is at heightened risk of over-enforcement, while harms targeting those same communities often remain unaddressed.

This report demonstrates the impact of content moderation by analyzing the policies and practices of three platforms: Facebook, YouTube, and Twitter.[3] We selected these platforms because they are the largest, because they are the focus of most regulatory efforts, and because they tend to influence the practices adopted by other platforms. Our evaluation compares platform policies on terrorist content (which often constrict Muslims’ speech) with policies on hate speech and harassment (which can affect the speech of powerful constituencies), along with publicly available information about how those policies are enforced.[4]

In Section I, we analyze the policies themselves, showing that despite their ever-increasing detail, they are drafted in a manner that leaves marginalized groups under constant threat of removal for everything from discussing current events to calling out attacks against their communities. At the same time, the rules are crafted narrowly to protect powerful groups and influential accounts that can be the main drivers of online and offline harms.

Section II assesses the effects of enforcement. Although publicly available information is limited, we show that content moderation at times results in mass takedowns of speech from marginalized groups, while more dominant individuals and groups benefit from more nuanced approaches like warning labels or temporary demonetization. Section II also discusses the current regimes for ranking and recommendation engines, user appeals, and transparency reports. These regimes are largely opaque and often deployed by platforms in self-serving ways that can conceal the harmful effects of their policies and practices on marginalized communities. In evaluating impact, our report relies primarily on user reports, civil society research, and investigative journalism, because the platforms’ tight grip on information obscures answers to systemic questions about the practical ramifications of their policies and practices.

Section III concludes with a series of recommendations. We propose two legislative reforms, each focused on breaking open the black box of content moderation that renders almost everything we know a product of the information that the companies choose to share. First, we propose a framework for legally mandated transparency requirements, expanded beyond statistics on the amount of content removed to include more information on the targets of hate speech and harassment, on government involvement in content moderation, and on the application of intermediate penalties such as demonetization. Second, we recommend that Congress establish a commission to consider a privacy-protective framework for facilitating independent research using platform data, as well as protections for the journalists and whistleblowers who play an essential role in exposing how platforms use their power over speech. In turn, these frameworks will enable evidence-based regulation and remedies.

Finally, we propose a number of improvements to platform policies and practices themselves. We urge platforms to reorient their moderation approach to center the protection of marginalized communities. Achieving this goal will require a reassessment of the connection between speech, power, and marginalization. For example, we recommend addressing the outsized potential of public figures to drive online and offline harms. We also recommend further disclosures regarding the government’s role in removals, data sharing through public-private partnerships, and the identities of groups covered under the rules relating to “terrorist” speech.

End Notes

1. Chinmayi Arun, “Facebook’s Faces,” Harvard Law Review 135 (forthcoming), https://ssrn.com/abstract=3805210.

2. See, e.g., Newley Purnell and Jeff Horwitz, “Facebook’s Hate-Speech Rules Collide with Indian Politics,” Wall Street Journal, August 14, 2020, https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346.

3. Facebook’s Community Guidelines also cover Instagram, as the Instagram Community Guidelines regularly link to and incorporate Facebook’s rules regarding hate speech, bullying and harassment, violence and incitement, and dangerous organizations and individuals (among others). See Instagram Community Guidelines, accessed July 6, 2021, https://help.instagram.com/477434105621119?ref=ig-tos. This report does not address alternative content moderation models such as community moderation, which have had comparative success at a smaller scale on platforms like Reddit.

4. See, e.g., Joseph Cox and Jason Koebler, “Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too,” Vice, August 25, 2019, https://www.vice.com/en/article/a3xgq5/why-wont-twitter-treat-white-supremacy-like-isis-because-it-would-mean-banning-some-republican-politicians-too (describing the statement of a Twitter technical employee who works on machine learning and artificial intelligence (AI) issues at an all-hands meeting on March 22, 2019: “With every sort of content filter, there is a tradeoff, he explained. When a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts as well, such as Arabic language broadcasters. Society, in general, accepts the benefit of banning ISIS for inconveniencing some others, he said. In separate discussions verified by Motherboard, that employee said Twitter has not taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.”).