
A Hate Speech Case in India Should Go to Facebook’s Oversight Board

The new Oversight Board may have the authority to decide whether public figures have violated rules against hate speech, but it would face an uphill battle in this type of case.

September 2, 2020

This originally appeared in Just Security.

Facebook’s decisions about which posts from public figures to keep up and which to remove often come with high-profile controversy. Most recently, the Wall Street Journal reported that Facebook’s public policy chief for India quashed employees’ efforts to apply the company’s rules against hate speech, and against content from organizations and individuals it deems dangerous, to posts by T. Raja Singh, a member of Indian Prime Minister Narendra Modi’s party, and three other Hindu nationalists. The Journal reports that Singh’s Facebook posts called for Rohingya Muslim immigrants to be shot, branded them traitors, and called for mosques to be destroyed. These are dangerous messages, given that violence against Indian Muslims has escalated under Modi’s Hindu nationalist government.

Current and former Facebook employees told the Journal that the policy chief justified the decision to ignore what appear to be clear rule violations on the ground that taking action against the violators might hurt the company’s business prospects in India, a factor the company has acknowledged. The newspaper also cites examples of content decisions that tended to favor Hindus over Muslims, as well as evidence that the head of public policy herself may be supportive of both Modi and anti-Muslim views.

As the company struggles to respond, its newly established Oversight Board suggested to Reuters that the panel has a role to play in this type of case. The board told the news agency in a statement that it has the authority to decide “[h]ow Facebook treats posts from public figures that may violate community standards,” including rules against hate speech, and “won’t shy away from the tough cases and holding Facebook accountable.”

But the board, funded by $130 million from Facebook and counting many eminent human rights lawyers among its members, is not yet up and running. Even if it were functioning today, it would face an uphill battle in this case and others of its ilk.

The Oversight Board’s Jurisdiction

As a threshold matter, a case like this probably would not be heard by the board unless Facebook itself brought it forward. Under its charter and bylaws, the Oversight Board is currently empowered to hear appeals only from decisions to remove content, not from decisions to leave it up. At issue in the India case is Facebook’s decision to allow posts that apparently violated its community standards. Nor does the board get to weigh in on which organizations and individuals are designated as “dangerous” and barred from the platform; Facebook alone makes that decision.

Facebook could bring this type of case to the board, which would allow it to share the burden of enforcing rules against powerful voices. While a Modi or a Trump may not be tempered by the judgment of human rights experts, the company would gain support from a broad swath of civil society and governments. And it could avoid a fiasco of the type that took place in Myanmar, where the platform was used by military personnel to garner support for a campaign of ethnic cleansing against Rohingya Muslims.

If the India case made its way to the Oversight Board, the board would be bound to apply Facebook’s rules, known as community standards, which are designed to give the company broad discretion to allow a politician’s post that would otherwise violate them.

In 2016, Facebook faced intense criticism for suspending, on the basis of its rules against nudity, an account that posted an iconic Vietnam War-era photograph of a naked nine-year-old girl screaming in pain in the wake of a napalm attack. Shortly thereafter, the company created a newsworthiness exception, under which it allows “content which would otherwise go against our Community Standards – if it is newsworthy and in the public interest.”

But in 2019, that rule effectively became a safe harbor for politicians. Facebook announced that, except for ads, it would “treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” According to the company, in determining newsworthiness, it “evaluate[s] the public interest value of the piece of speech against the risk of harm,” and considers country-specific factors (e.g., whether there is a war or an election, and whether the post relates to governance, politics, or the political structure of the country). It also considers whether the post “has the potential to incite violence.”

A Potential Framework

How would the Oversight Board analyze a decision that is discretionary by design? It has already signaled that it won’t take a deferential posture, so it will need to find a suitable analytical framework. Its charter instructs the board to look to the values identified by the company (voice, authenticity, safety, privacy, and dignity) to inform its interpretation of the community standards. But while these values seem reasonable enough, they have not been fleshed out or weighed against one another outside the company.

The charter and bylaws also point to international human rights standards as a potential source of law for the board to apply, as has been suggested by David Kaye, the U.N. special rapporteur for freedom of expression, among other experts. The charter and bylaws ask the board to “pay particular attention to the impact of removing content in light of human rights norms protecting free expression.” Board members are to “participate in training[s]… on international human rights standards,” and the panel’s annual report must include an “analysis of how the board’s decisions have considered or tracked the international human rights implicated by a case.” Notifications of the board’s decisions are to be “guided by relevant human rights principles.”

While falling short of an explicit direction to apply international human rights law, these provisions give the board grounds for turning to the rich jurisprudence and widespread understanding of international human rights law as a basis for analyzing content moderation decisions.

Facebook’s impulse in initially restricting the board to reviewing only takedowns is understandable. After all, this is a new institution being created from whole cloth, with its every move closely watched. But the current global information and communications environment demands more.

It is not too late for the Oversight Board to weigh in on the case; a Facebook spokesperson has said the company is still considering a ban, and by some reports the board will be functioning by October, just weeks away. The company should step up to the plate and empower the board to do the same by bringing this issue to it as soon as possible.