Facebook’s recent response to its Oversight Board’s decision on former President Donald Trump’s indefinite suspension from the platform made headlines by fixing the ban at two years and announcing the company would no longer treat statements by politicians as inherently newsworthy. The board subsequently congratulated Facebook on its commitment to “greater clarity, consistency and transparency in the way the company moderates content.”

But a deeper dive into the bevy of responses from Facebook – spread across its formal response, a blog post, and various interconnected policies – shows that while the company is willing to make some changes, it is not ready to engage with the most far-reaching recommendations from the board, which would have brought much-needed transparency to the platform’s operations and how its decisions about platform architecture shape global speech.

The Good: Additional, Albeit Imperfect, Transparency

While Facebook’s responses to the board’s calls for more transparency are sometimes incomplete, the company at least reveals more about its rules and enforcement than do other major platforms, such as YouTube and Twitter.

In response to the board’s recommendation, Facebook described the “strike system” it uses to impose penalties. Community Standards violations may incur a “strike” against the user’s account, which will expire after one year and will be visible to the user on their account status page. Nonetheless, the company retains significant discretion: the account “may” incur a strike and “not all” violations of the standards are subject to this system; some “severe” violations (e.g., posts involving child exploitation) may bypass the strike system and warrant immediate disabling of a user’s account; and certain violations may receive “additional, longer restrictions from certain features, on top of the standard restrictions.”

Much has been made of changes to Facebook’s policy on politicians. The company issued a formal policy on its newsworthiness exception, and said it “will no longer treat content from politicians as inherently of public interest.” That reversed its previous stance, which took a hands-off approach to the speech of politicians on the grounds that it was newsworthy.

But in the Trump case, Facebook claimed that it had never applied the exception to his posts, later stating that it had only done so on one occasion, when Trump fat-shamed an attendee at one of his rallies. According to the company, the most controversial of Trump’s posts – such as “when the looting starts the shooting starts” and posts referring to COVID-19 as the “China plague” – were evaluated under a previously unknown “cross check system,” a kind of double-check through which Facebook said it elevated decisions about posts by high-reach accounts that were flagged for removal to a team of senior staffers, who generally allowed them to remain on the platform.

These disclosures suggest that changes to the newsworthiness exception may be of lesser significance than they seem, but it is difficult to know because the interrelation and operation of these policies remain opaque. Facebook also declined to revisit its decision not to fact-check posts from politicians despite the board’s focus on Trump’s “unfounded narrative of electoral fraud.” This raises concerns that these accounts will continue to peddle misinformation of the type that many regard as playing a role in instigating the Jan. 6 attack on Congress.

Facebook also issued a new policy on suspending the accounts of public figures who contribute to a safety issue “during ongoing violence or civil unrest.” The policy includes set intervals for “reevaluating” whether the safety risk has passed. The new policy appears to formalize an existing practice, reflecting past decisions to block the military in Myanmar after it seized power in a coup and to ban QAnon groups leading up to the November 2020 U.S. presidential election. But it leaves open many questions. The policy does not specify the criteria for deciding who constitutes a “public figure” based on news coverage, or what qualifies as “civil unrest,” allowing the company significant discretion in using this tool. It also does not address other measures Facebook may take during civil unrest that have long been unclear, such as actions it takes when it determines that a location is “temporarily high risk.”

To be sure, Facebook’s written policies will not be able to cover every situation and case, and the company’s documentation of some objective standards is encouraging. But more fine-tuning of the current policies could go a long way to generating public trust in the platform’s content-moderation system.

The Bad: Refusing to Provide Information

Facebook refused to provide meaningful insight into the operation of its cross check policy, which applies to high-reach accounts that – as the board pointed out – have great potential to instigate harm. The board’s request for the “standards” and “processes” used during the cross check process was met with boilerplate that failed to identify any meaningful principles governing its use or even which personnel are involved in the process. Facebook also declined to publish error rates for the cross check policy, arguing that it would be “infeasible” to do so because the process applies to only a “small number” of posts and does not map neatly onto its “existing measurement accuracy systems.” This explanation makes little sense: if cross checks are applied to only a small number of posts, what precludes Facebook from reporting on the number of posts subject to the review, the rate at which the cross checks resulted in a reversal of the original determination, and any subsequent decisions to remove posts initially approved through cross checks? All of this suggests that Facebook wants to keep this system by which it decides on the accounts of powerful figures under wraps, an especially disturbing position in light of reports that its leaders have stepped in to exempt influential figures from its rules.

Likewise, Facebook did not commit to providing in its annual transparency reports a more detailed geographic breakdown of enforcement actions, which would cover “numbers of profile, page, and account restrictions, including the reason and manner in which enforcement action was taken.” In its response to the board’s recommendation, the company cited only the difficulty in some instances of assigning definitive region and country locations to users who may post while traveling or use technology like a VPN to obscure their locations. It seems clear Facebook could report the data it has, noting the potential that these types of circumstances may result in slight distortions. And in any case, these issues have no bearing on reporting platform-wide data on numbers of profile, page, and account restrictions, which Facebook did not commit to do.

The Ugly: Avoiding Hard Questions

On two of the most sensitive aspects of the Trump case, Facebook simply decided to avoid the issue.

Foremost among the board’s recommendations was the suggestion that Facebook should examine and issue a public report on the ways in which the platform’s design, policy, and advertising decisions may have contributed to the Jan. 6 attack on the U.S. Capitol. The company declined to do so, instead noting that it would make data covering the weeks before and after the attack available to its preexisting research partners. By rebuffing the board’s suggestion that it grapple with systemic issues that cannot be effectively addressed by decisions on individual posts (even those of an influential figure like Trump), Facebook has made it clear that it is not ready to address fundamental issues.

On the sensitive topic of its informal interactions with governments, Facebook claimed that it was “fully implementing” the board’s recommendation to “resist pressure from governments to silence their political opposition.” This does not seem a fair assessment of its handling of this critical issue. The company has long faced charges that it bows to government pressure to block political foes. It most recently made headlines with reports that it blocked Palestinians sharing information about human rights violations by Israel and that it barred posts critical of Indian Prime Minister Narendra Modi. In response to the board’s recommendation in the Trump case, Facebook cited the existing practices by which it reviews, and reports on, official government requests to remove content, but did not address how it handles the informal pressure with which the board seemed most concerned.

Given the disparity between the volume of communications that Facebook handles and the limited number of cases the Oversight Board will be able to consider, the board’s true value lies in the power of its recommendations to instigate changes in Facebook’s rules and their enforcement. But if – as appears to be the case – the company is able to get away with ignoring recommendations on the most critical issues, or with falsely claiming it is “fully implementing” recommendations without incorporating any meaningful changes, the board’s efficacy and the company’s good faith will be severely compromised, and the adventure in accountability may quickly become irrelevant.