
Iran-Related Content Takedowns Highlight Questions Over Facebook’s New Oversight Board

The Facebook Oversight Board holds promise for the future of monitoring sensitive social media content, but there are limitations.

February 27, 2020

This was originally published by Just Security.

Facebook’s recent release of the bylaws for its new Oversight Board, which will be charged with reviewing takedown decisions by Facebook and Instagram, came on the heels of reports that Instagram removed posts by Iranian journalists and activists on the U.S. killing of Qassem Soleimani, a general in the Islamic Revolutionary Guard Corps (IRGC). These removals illustrate both the promise and the limitations of the forthcoming board, whose role was first described in its charter and is fleshed out in the bylaws.

Government control of the internet in Iran has made Instagram one of the few remaining social media platforms in the country, amplifying its role as a home for political speech. After a U.S. drone strike killed Soleimani, Iranians flocked to Instagram to share their reactions. This could have been a quintessential moment to showcase the value of social media platforms in empowering diverse voices. Instead, at least 15 Iranian journalists, as well as Iranian human rights advocates and activists, reported that Instagram suspended their accounts or removed some of their posts. While the full scope of these removals is not known, one Iran internet researcher claimed the actions were widespread: “Every person I saw that posted about Soleimani on Instagram — almost all of their posts have been removed.”

The removals swept up four different posts from Emadeddin Baghi, a prominent Iranian human rights advocate and investigative journalist, who expressed mixed feelings about Soleimani’s death but denounced the U.S. drone strike as “contrary to the principles of international law.” Alireza Jahanbakhsh, a popular Iranian soccer player, reported that Instagram took down a picture of Soleimani he posted in the wake of the attack.

Law vs. Community Standards

Instagram’s response did not specify whether it removed posts and accounts because it believed the law required it to do so, or whether it acted voluntarily under its Community Standards’ prohibition on terrorist content. The response stated: “We review content against our policies and our obligations to U.S. sanctions laws, and specifically those related to the U.S. government’s designation of the IRGC and its leadership as a terrorist organization.”

The use of the term “sanctions” has created some confusion, but it is likely that the spokesperson was referring to the prohibition on providing material support to designated foreign terrorist organizations. The Revolutionary Guard has been subject to economic sanctions since 2017, when the Trump administration labeled it a Specially Designated Global Terrorist. But Instagram and Facebook did not begin removing accounts associated with the IRGC until 2019, when the group was designated as a Foreign Terrorist Organization (FTO).

Knowingly providing material support or resources (e.g., money, facilities, and personnel) to an FTO is a crime. But any argument that the FTO designation requires shutting down the social media accounts of members of the Revolutionary Guard does not extend to removals of unaffiliated users. Under the Supreme Court’s decision in Holder v. Humanitarian Law Project, “only material support coordinated with or under the direction of a designated foreign terrorist organization” is prohibited. It is quite a stretch to argue that posts of users weighing in on a major public event would fall under this definition. Indeed, the court specifically held that “[i]ndependent advocacy that might be viewed as promoting the group’s legitimacy is not covered.”

Instagram’s Community Standards, which for these purposes are the same as those of its parent company, Facebook, prohibit speech that praises or supports “terrorist organizations.” However, which groups Facebook considers “terrorist” has long been opaque and contentious. In the wake of the Soleimani takedowns, Facebook told the Washington Post for the first time that it removes “posts commending or supporting groups labeled foreign terrorist organizations by the U.S. State Department.” It is unclear whether other platforms similarly rely on U.S. FTO designations. For example, the Global Internet Forum to Counter Terrorism (GIFCT), an effort launched by Facebook, Microsoft, Twitter, and YouTube, relies on U.N. terrorist sanctions lists to facilitate removals. The Revolutionary Guard, however, is not on U.N. sanctions lists, and other platforms do not appear to have followed Instagram’s lead in removing posts supporting Soleimani.

By relying on the U.S. designation of Soleimani as the basis for removing posts published by Iranians that could be construed as supportive of him, Facebook raises questions about its stated commitment to the values of free expression and ensuring that its users have a “voice.” The prohibition on “support or praise” – which is far broader than a ban on incitement or threats of violence – is both difficult to define objectively and almost certainly sweeps in speech that is ambiguous, such as Baghi’s posts.

Based on the experience of GIFCT, which reported that 85.5 percent of the posts it identified for removal involved “glorification” of terrorism and only 0.4 percent involved imminent and credible threats, it is likely that the vast majority of the posts removed by Facebook and other companies fall into the more nebulous category of speech supportive of Soleimani. As U.N. Special Rapporteur Fionnuala Ní Aoláin (full disclosure: she serves as an executive editor of Just Security) wrote to Mark Zuckerberg in July 2018, Facebook’s “overly broad and imprecise definitions” of terrorism “may lead to indiscriminate implementation, over-censoring and arbitrary denial of access to and use of Facebook’s services.”

Potential Role of Oversight Board

The Oversight Board was established so Facebook would not have to “make so many important decisions about free expression and safety” on its own, Zuckerberg wrote in November 2018. While the board, which will eventually be made up of 11 to 40 members from around the world, is not yet up and running, its charter and bylaws provide a basis for considering whether the board would be able to protect Facebook users’ speech in Soleimani-type situations in the future.

There are some obvious limitations on the board’s authority, but also the potential for it to shape Facebook’s approach to these issues in positive ways. For one thing, the board, at least for now, deals only with decisions to remove posts and does not address determinations about account suspensions.

As for posts, if Instagram did actually remove the posts of Iranians based on the material support statute, the Oversight Board likely will not be able to impact the company’s decisions. According to its charter, in cases “where the board’s decision on a case could result in criminal liability or regulatory sanctions, the board will not take the case for review.” The bylaws specify further that the board has no jurisdiction in cases where “the underlying content is criminally unlawful in a jurisdiction with a connection to the content (such as the jurisdiction of the posting party and/or the reporting party)” and where a board decision to allow the content to remain on the platform “could” lead to “criminal liability” or “adverse governmental action” against the company or the board.

The use of the expansive “could” suggests that the board has no authority to hear cases where there is even a possibility of criminal or regulatory liability. But neither the charter nor the bylaws specify whether the board makes that determination or whether the authority rests with the company. In any event, Facebook has reserved for itself the right not to implement board decisions where doing so “could violate the law,” thus limiting the impact of any attempt by the board to intervene.

For the most part, however, Facebook does not rely on legal rules to remove content. When the company receives a report that content violates local law, it first evaluates whether the content violates its Community Standards. If it does, the content is removed across the entire platform. Only if Facebook finds that the content does not violate its internal rules does the company turn to a consideration of legal rules. A violation of local law can still result in removal, but the removal is restricted to the country or region where the content is alleged to be illegal.
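To make the sequence concrete, the two-stage review described above can be sketched as a simple decision procedure. The sketch below is an illustrative reconstruction of that ordering only; the function names, data structures, and rule lists are hypothetical placeholders, not Facebook’s actual systems or policy lists.

```python
# A minimal, hypothetical sketch of the two-stage review described
# above. All names, rules, and data are illustrative stand-ins, not
# Facebook's actual systems or policy lists.
from dataclasses import dataclass


@dataclass
class Report:
    text: str
    jurisdiction: str  # where the content is alleged to be illegal


# Hypothetical placeholders for the two bodies of rules.
COMMUNITY_STANDARD_TERMS = {"praise of a dangerous organization"}
LOCAL_LAW_TERMS: dict[str, set[str]] = {}  # jurisdiction -> banned terms


def violates_community_standards(text: str) -> bool:
    return any(term in text for term in COMMUNITY_STANDARD_TERMS)


def violates_local_law(text: str, jurisdiction: str) -> bool:
    return any(term in text for term in LOCAL_LAW_TERMS.get(jurisdiction, set()))


def review(report: Report) -> dict:
    # Stage 1: the company's own rules come first. A Community
    # Standards violation triggers removal across the whole platform.
    if violates_community_standards(report.text):
        return {"action": "remove", "scope": "global"}
    # Stage 2: legal rules are consulted only if the internal rules
    # are not violated; a local-law violation leads to removal that
    # is geo-restricted to the reporting jurisdiction.
    if violates_local_law(report.text, report.jurisdiction):
        return {"action": "remove", "scope": report.jurisdiction}
    return {"action": "keep", "scope": None}
```

The ordering is the significant design choice: because the internal rules are checked first and apply worldwide, a Community Standards match forecloses the geographically limited legal analysis entirely.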

In other words, even though the Soleimani posts are not illegal in either of the most relevant jurisdictions – Iran (where he was celebrated) and the U.S. (as discussed above) – Facebook can remove them worldwide under its internal rules.

If the Soleimani posts were deleted under the Community Standards, as seems likely despite the company’s ambiguous statements, the board would be able to hear appeals from users who believe their speech was wrongfully removed (a point confirmed by a Facebook spokesperson in a press call). It is unclear, however, whether the board is bound by Facebook’s secret list of “dangerous” organizations, which is the basis for takedowns under these standards.

Moreover, under Article 1, Section 3 of the bylaws, the board is limited to deciding whether posts should be reinstated “in accordance with Facebook’s content policies and values.” The board’s founding charter includes similar language, but also notes that when making these decisions, “the board will pay particular attention to the impact of removing content in light of human rights norms protecting free expression.” This leaves open the question of how the board should decide when there is a conflict between the company’s content policies (which sweep in a swath of political speech under the heading of terrorist content) and human rights norms on free speech (which require that restrictions on speech be narrowly tailored).

Article 1, Section 4 of the board’s charter also authorizes the board to “provide policy guidance, specific to a case decision or upon Facebook’s request, on Facebook’s content policies.” This provides an opportunity for the Oversight Board to exert pressure on the company to modify its policies. While Facebook is not obligated to follow the board’s guidance, it has committed to publicly explaining its response, and there will be significant public pressure on the company to take that guidance seriously.

A last point that bears mention is that the board will not be able to shape Facebook’s impact on discourse in anything close to real time. Review is available only after the company’s internal appeals mechanisms have been exhausted, and only if the board selects the case. According to the bylaws, the “timeframe for case decisions and implementation will be a maximum of ninety (90) days, starting from Facebook’s last decision on the case under review.” While the bylaws do contemplate a 30-day expedited review process, triggered by Facebook in “exceptional circumstances,” even this is unlikely to be timely enough to impact the platform’s response to fast-moving events.

Facebook has, however, committed to ferreting out “identical content with parallel context” and taking action on it (subject to technical, operational, and legal restrictions). It has also undertaken to “be informed by the board’s decisions when refining policy in separate or similar context.” These commitments provide a way for an individual case to have a ripple effect, and could also help bring about a recalibration of Facebook’s Community Standards and implementation protocols.

Platform removals of “terrorist content” rarely make a splash. For the most part, companies focus on the volume of removals rather than their quality, and the terrorism label dulls suspicion for many observers. The Soleimani drone strike is a rare illustration of how, in the name of combating terrorism, social media platforms can exclude key voices from the global conversation.

The Oversight Board has an important role to play in checking this power. Its decisions will be closely watched, as will the company’s implementation of them and its response to the board’s policy recommendations, to evaluate whether the new board fulfills this critical function.