
A Call for Legislated Transparency of Facebook’s Content Moderation

Facebook lied about whitelisting influential users who violate its rules, demonstrating again why it cannot be trusted to self-report on how it moderates content.

September 28, 2021

Facebook has a separate, more permissive content moderation scheme that it applies only to high-profile users, and the company has lied about it for years. A recent Wall Street Journal exposé on Facebook’s “Cross Check” or “XCheck” program provides yet another example of why social media platforms cannot be trusted to self-report and self-regulate. Independent oversight is needed.

For those of us who have spent years comparing what Facebook says to what it does, it was unsurprising to learn that the company has a get-out-of-jail-free card for high-profile users who violate its community standards. Our research shows that Facebook’s rules have been consistently designed and enforced to protect powerful groups and influential accounts.

Facebook has tried to portray itself as altruistic, emphasizing its commitment to “giving people a voice, keeping people safe, and treating people equitably.” However, the company’s actions are largely driven by business priorities: in internal documents, Facebook recognized that angering influential users is “PR risky” and bad for business.

It was surprising (perhaps naively so) to learn the lengths to which Facebook was willing to go to conceal the true nature of the Cross Check program: explicitly misleading the public and potentially undermining its Oversight Board, a multimillion-dollar bet on self-regulation.

In Facebook’s only blog post discussing Cross Check before this year, the company claimed to have one set of rules for all users: “We want to make clear that we remove content from Facebook, no matter who posts it, when it violates our standards. There are no special protections for any group. . . . To be clear, Cross Checking something on Facebook does not protect the profile, Page or content from being removed. It is simply done to make sure our decision is correct.”

Facebook reiterated this falsehood in statements to its Oversight Board in January 2021 during the board’s consideration of President Trump’s suspension from the platform, representing that the “same general rules apply” to all users. The company also asserted, in response to the board’s recommendation that it provide more information about the program, that greater transparency was “not feasible” because Cross Check is used in only a “small number” of cases. However, the Wall Street Journal reported that Facebook applies Cross Check to millions of accounts, immunizing certain high-profile users from enforcement actions entirely and applying more lenient penalties to others. The Oversight Board announced it is reviewing whether Facebook was “fully forthcoming in its responses in relation to cross-check, including the practice of whitelisting.”

Over the last several years, Facebook has purposefully shifted public focus regarding its treatment of influential users away from Cross Check and toward its newsworthiness policy. For example, Cross Check is not mentioned in any of the three publications from the civil rights audit the company commissioned, although some of the documents do discuss the company’s newsworthiness policies. This has deprived the public, regulators, and civil rights groups of the opportunity to meaningfully engage with the company on its policies for high-profile accounts.

The success of Facebook’s obfuscation is evidenced by the fact that in the 7,656 published public comments the Oversight Board received in relation to Trump’s suspension, some variation of “newsworthiness” or “public figure” appears more than 300 times, while “XCheck” and “Cross Check” do not appear at all.

Although it is more transparent than other platforms, including YouTube, Facebook tightly controls what information it shares and who has access to it. The company has highlighted its research partnerships as evidence of its commitment to transparency, but recent developments call Facebook’s dedication to those partnerships into question.

The New York Times recently reported that Facebook provided researchers studying misinformation on the site with data that only covered about half of U.S. users — those with clear political positions — rather than all users, as Facebook had claimed. Meanwhile, thousands of posts about the January 6 insurrection went missing from Facebook’s CrowdTangle transparency tool, which researchers use to track what users are saying.

While some incidents may be attributable to error, Facebook has also deliberately cut off access for some outside researchers, making it difficult to obtain independent accounts of how the platform is being used. Academics studying political ads through NYU’s Ad Observatory project had their accounts suspended in August in the wake of several damaging news stories. Facebook also recently implemented changes to its news feed that make it more difficult for watchdogs to audit what is happening on the site at scale. And in April, Facebook dismantled the CrowdTangle team, sparking concerns that the tool may be discontinued.

It is clear that Facebook cannot be trusted to self-report, but it is critical that the public understand how the platform moderates content. Social media is the new public square: it is where we share ideas, get news, connect with others, and discuss current events. Equitable access to social media is vitally important because it has so pervasively reshaped how we communicate. Civil liberties, including freedom of expression and the right to assemble, are threatened when companies like Facebook publicly espouse their commitment to free speech and equality while secretly moderating online speech in a manner that reinforces existing power hierarchies. That kind of covert moderation puts vulnerable groups with little political power at risk of online and offline harms, from being silenced by hate speech, harassment, or over-removals to doxing and violence.

The current regime of self-reporting does not sufficiently allow policymakers and the public to engage with platforms on their content moderation practices. A recent Brennan Center report proposes a framework for legally mandated transparency requirements. It also proposes that Congress establish a commission to consider a privacy-protective framework for facilitating independent research using platform data. Mixing government regulation and content moderation has its own pitfalls, as was recently underscored by Brazilian President Jair Bolsonaro’s effort to regulate social media in the run-up to next year’s election. Nonetheless, we believe it is vital to establish access to reliable, accurate platform data for public interest researchers.

Content moderation is complicated and hard, but without reliable information on it, policymakers and researchers cannot propose meaningful regulation or solutions. We cannot rely on platforms to report their own activity. Transparency must have the force of law, and it should extend beyond Facebook to all social media platforms.