This article originally appeared in Just Security
Facebook’s recent response to its Oversight Board’s decision on former President Donald Trump’s indefinite suspension from the platform made headlines by fixing the ban at two years and announcing the company would no longer treat statements by politicians as inherently newsworthy. The board subsequently congratulated Facebook on its commitment to “greater clarity, consistency and transparency in the way the company moderates content.”
But a deeper dive into the bevy of responses from Facebook – spread across its formal response, a blog post, and various interconnected policies – shows that while the company is willing to make some changes, it is not ready to engage with the most far-reaching recommendations from the board, which would have brought much-needed transparency to the platform’s operations and to how its decisions about platform architecture shape global speech.
The Good: Additional, Albeit Imperfect, Transparency
While Facebook’s responses to the board’s calls for more transparency are sometimes incomplete, the company at least reveals more about its rules and enforcement than do other major platforms, such as YouTube and Twitter.
In response to the board’s recommendation, Facebook described the “strike system” it uses to impose penalties. A violation of the Community Standards may incur a “strike” against the user’s account; strikes expire after one year and are visible to the user on their account status page. Nonetheless, the company retains significant discretion: the account “may” incur a strike and “not all” violations of the standards are subject to this system; some “severe” violations (e.g., posts involving child exploitation) may bypass the strike system and warrant immediate disabling of a user’s account; and certain violations may receive “additional, longer restrictions from certain features, on top of the standard restrictions.”
Much has been made of changes to Facebook’s policy on politicians. The company issued a formal policy on its newsworthiness exception, and said it “will no longer treat content from politicians as inherently of public interest.” That reversed its previous stance, which took a hands-off approach to the speech of politicians on the grounds that it was newsworthy.
But in the Trump case, Facebook claimed that it had never applied the exception to his posts, later stating that it had done so only once, when Trump fat-shamed an attendee at one of his rallies. According to the company, the most controversial of Trump’s posts – such as “when the looting starts the shooting starts” and posts referring to COVID-19 as the “China plague” – were evaluated under a previously unknown “cross check system,” a kind of double-check through which Facebook said it elevated decisions about posts by high-reach accounts that were flagged for removal to a team of senior staffers, who generally allowed them to remain on the platform.
These disclosures suggest that the changes to the newsworthiness exception may be less significant than they seem, but it is difficult to know because the interrelation and operation of these policies remain opaque. Facebook also declined to revisit its decision not to fact-check posts from politicians, despite the board’s focus on Trump’s “unfounded narrative of electoral fraud.” This raises concerns that these accounts will continue to peddle misinformation of the type that many regard as playing a role in instigating the Jan. 6 attack on Congress.
Facebook also issued a new policy on suspending the accounts of public figures who contribute to a safety issue “during ongoing violence or civil unrest.” The policy includes set intervals for “reevaluating” whether the safety risk has passed. The new policy appears to formalize an existing practice, reflecting past decisions to block the military in Myanmar after it seized power in a coup and to ban QAnon groups leading up to the November 2020 U.S. presidential election. But it leaves open many questions. The criteria for deciding who qualifies as a “public figure” based on news coverage, or what counts as “civil unrest,” are not specified, leaving the company significant discretion in using this tool. The policy also does not address other measures Facebook may take during civil unrest that have long been unclear, such as actions it takes when it determines that a location is “temporarily high risk.”
To be sure, Facebook’s written policies will not be able to cover every situation and case, and the company’s documentation of some objective standards is encouraging. But more fine-tuning of the current policies could go a long way to generating public trust in the platform’s content-moderation system.
The Bad: Refusing to Provide Information
Facebook refused to provide meaningful insight into the operation of its cross check policy, which applies to high-reach accounts that – as the board pointed out – have great potential to instigate harm. The board’s request for the “standards” and “processes” used during the cross check process was met with boilerplate that failed to identify any meaningful principles governing its use or even which personnel are involved in the process. Facebook also declined to publish error rates for the cross check policy, arguing that it would be “infeasible” to do so because the process applies to only a “small number” of posts and does not map neatly onto its “existing measurement accuracy systems.” This explanation makes little sense: if cross checks are applied to only a small number of posts, what precludes Facebook from reporting the number of posts subject to the review, the rate at which the cross checks resulted in a reversal of the original determination, and any subsequent decisions to remove posts initially approved through cross checks? All of this suggests that Facebook wants to keep this system by which it decides on the accounts of powerful figures under wraps, an especially disturbing position in light of reports that its leaders have stepped in to exempt influential figures from its rules.
Likewise, Facebook did not commit to providing in its annual transparency reports a more detailed geographic breakdown of enforcement actions, which would cover “numbers of profile, page, and account restrictions, including the reason and manner in which enforcement action was taken.” In its response to the board’s recommendation, the company cited only the difficulty in some instances of assigning definitive region and country locations to users who may post while traveling or use technology like a VPN to obscure their locations. It seems clear Facebook could report the data it has, noting that these circumstances may introduce slight distortions. And in any case, these issues have no bearing on reporting platform-wide data on numbers of profile, page, and account restrictions, which Facebook also declined to commit to.
The Ugly: Avoiding Hard Questions
On two of the most sensitive aspects of the Trump case, Facebook simply avoided the issues altogether.
Foremost among the board’s recommendations was the suggestion that Facebook should examine and issue a public report on the ways in which the platform’s design, policy, and advertising decisions may have contributed to the Jan. 6 attack on the U.S. Capitol. The company declined to do so, instead noting that it would make data covering the weeks before and after the attack available to its preexisting research partners. By rebuffing the board’s suggestion that it grapple with systemic issues that cannot be effectively addressed by decisions on individual posts (even those of an influential figure like Trump), Facebook has made it clear that it is not ready to address fundamental issues.
On the sensitive topic of its informal interactions with governments, Facebook claimed that it was “fully implementing” the board’s recommendation to “resist pressure from governments to silence their political opposition.” This does not seem a fair assessment of its handling of this critical issue. The company has long faced charges that it bows to government pressure to block political foes. It most recently made headlines with reports that it blocked Palestinians sharing information about human rights violations by Israel and that it barred posts critical of Indian Prime Minister Narendra Modi. In response to the board’s recommendation in the Trump case, Facebook cited its existing practices for reviewing official government requests to remove content, on which it already reports, but did not address how it handles the informal pressure that seemed to most concern the board.
Given the disparity between the volume of communications that Facebook handles and the limited number of cases the Oversight Board will be able to consider, the board’s true value lies in the power of its recommendations to instigate changes in Facebook’s rules and their enforcement. But if – as appears to be the case – the company is able to get away with ignoring recommendations on the most critical issues, or with falsely claiming it is “fully implementing” recommendations without making any meaningful changes, the board’s efficacy and the company’s good faith will be severely compromised, and the adventure in accountability may quickly become irrelevant.