Analysis

A Call for Legislated Transparency of Facebook’s Content Moderation

Facebook lied about whitelisting influential users who violate its rules, demonstrating again why it cannot be trusted to self-report on how it moderates content.

September 28, 2021

Facebook has a separate, more permissive content moderation scheme that it applies only to high-profile users, and the company has lied about it for years. A recent Wall Street Journal exposé on Facebook’s “Cross Check” or “XCheck” program provides yet another example of why social media platforms cannot be trusted to self-report and self-regulate. Independent oversight is needed.

For those of us who have spent years comparing what Facebook says to what it does, it was unsurprising to learn that the company has a get-out-of-jail-free card for high-profile users who violate its community standards. Our research shows that Facebook’s rules have been consistently designed and enforced to protect powerful groups and influential accounts.

Facebook has tried to portray itself as altruistic, emphasizing its commitment to “giving people a voice, keeping people safe, and treating people equitably.” However, the company’s actions are largely driven by business priorities: in internal documents, Facebook recognized that angering influential users is “PR risky” and bad for business.

What was surprising (perhaps naively) was the lengths to which Facebook was willing to go to conceal the true nature of the Cross Check program: explicitly misleading the public and potentially undermining its Oversight Board, a multimillion-dollar bet on self-regulation.

In Facebook’s only blog post discussing Cross Check before this year, the company claimed to have one set of rules for all users: “We want to make clear that we remove content from Facebook, no matter who posts it, when it violates our standards. There are no special protections for any group. . . . To be clear, Cross Checking something on Facebook does not protect the profile, Page or content from being removed. It is simply done to make sure our decision is correct.”

Facebook reiterated this falsehood in statements to its Oversight Board in January 2021 during the board’s consideration of President Trump’s suspension from the platform, representing that the “same general rules apply” to all users. The company also asserted, in response to the board’s recommendation that it provide more information about the program, that greater transparency was “not feasible” because Cross Check is used in only a “small number” of cases. However, the Wall Street Journal reported that Facebook applies Cross Check to millions of accounts, immunizing certain high-profile users from enforcement actions entirely and applying more lenient penalties to others. The Oversight Board announced it is reviewing whether Facebook was “fully forthcoming in its responses in relation to cross-check, including the practice of whitelisting.”

Over the last several years, Facebook has purposefully shifted public focus regarding its treatment of influential users away from Cross Check and toward its newsworthiness policy. For example, Cross Check is not mentioned in any of the three publications from the civil rights audit the company commissioned, although some of the documents do discuss the company’s newsworthiness policies. This has deprived the public, regulators, and civil rights groups of the opportunity to meaningfully engage with the company on its policies for high-profile accounts.

The success of Facebook’s obfuscation is evidenced by the fact that in the 7,656 published public comments the Oversight Board received in relation to Trump’s suspension, some variation of “newsworthiness” or “public figure” appears more than 300 times, while “XCheck” and “Cross Check” do not appear at all.

Although it is more transparent than other platforms, including YouTube, Facebook tightly controls what information it shares and who has access to it. The company has highlighted its research partnerships as evidence of its commitment to transparency, but recent developments call Facebook’s dedication to those partnerships into question.

The New York Times recently reported that Facebook provided researchers studying misinformation on the site with data that covered only about half of U.S. users (those with clear political positions), rather than all users, as Facebook had claimed. Meanwhile, thousands of posts about the January 6 insurrection went missing from Facebook’s CrowdTangle transparency tool, which researchers use to track what users are saying.

While some incidents may be attributable to error, Facebook has also deliberately cut off access for some outside researchers, making it difficult to get independent accounts of how the platform is being used. Academics studying political ads at NYU’s Ad Observatory were suspended from the platform in August in the wake of several damaging news stories. Facebook also recently implemented changes to its news feed that make it more difficult for watchdogs to audit what is happening on the site at scale. In April, Facebook dismantled the CrowdTangle team, sparking concerns that the tool may be discontinued in the future.

It is clear that Facebook cannot be trusted to self-report, but it is critical that the public understand how the platform moderates content. Social media is the new public square: it’s where we share ideas, get news, connect with others, and discuss current events. Equitable access to social media is vitally important because it has so pervasively reshaped how we communicate. Civil liberties, including freedom of expression and the right to assembly, are threatened when companies like Facebook publicly espouse their commitment to free speech and equality while secretly moderating online speech in a manner that reinforces existing power hierarchies. This double standard places vulnerable groups with little political power at risk of online and offline harms, including being silenced by hate speech, harassment, or over-removals, as well as doxing or violence.

The current regime of self-reporting does not sufficiently allow policymakers and the public to engage with platforms on their content moderation practices. A recent Brennan Center report proposes a framework for legally mandated transparency requirements. It also proposes that Congress establish a commission to consider a privacy-protective framework for facilitating independent research using platform data. Mixing government regulation and content moderation has its own pitfalls, as was recently underscored by Brazilian President Jair Bolsonaro’s effort to regulate social media in the run-up to next year’s election. Nonetheless, we believe it is vital to establish access to reliable, accurate platform data for public interest researchers.

Content moderation is complicated and hard, but without reliable information on it, policymakers and researchers cannot propose meaningful regulation or solutions. We cannot rely on platforms to report their own activity. Transparency must have the force of law, and it should extend beyond Facebook to all social media platforms.