
Trump’s Executive Order to Retaliate Against Twitter’s Fact-Checking

The order attempts to trigger regulatory scrutiny in a way that can only be read as an attempt to punish Twitter, an obvious violation of the First Amendment.

May 29, 2020

This originally appeared in Just Security.

President Donald Trump has been spoiling for a fight with Twitter and Facebook for a while now. Both companies have been under attack for suppressing conservative viewpoints, although there is scant evidence for this allegation. In the face of this pressure, they have been wary of stepping on Trump’s toes, even going so far as to develop policies that essentially exempt politicians from the types of constraints that they impose on the rest of us.

Twitter broke with tradition this week by slapping a label on two Trump tweets in which he falsely claimed that voting by mail would lead to fraud (a violation of Twitter’s rules against election misinformation). Trump vowed strong action to stop this form of what he considers private censorship and yesterday afternoon issued an executive order on “Preventing Online Censorship.”

Undeterred, Twitter this morning took even stronger action on a White House tweet retweeting a Trump message that took aim at activists protesting the killing of George Floyd in Minnesota. The White House tweet declared the military would take control and shoot looters. Twitter covered the tweet with a screen indicating that the contents violated the company’s rules against glorifying violence (although users can click through and see it) and limited the reach of the Trump missive by blocking sharing, replies, and “likes.”

Trump’s executive order taps into a range of concerns – from across the political spectrum – about the power and failings of social media companies. The order capitalizes on such complaints to try to trigger regulatory scrutiny in a way that can only be read as an attempt to punish Twitter for fact-checking Trump’s tweet, an obvious violation of the First Amendment. The order attempts to rewrite a statute passed by Congress and repeatedly interpreted and applied by courts to limit the liability of social media companies for their decisions to allow or remove posts. It seeks to keep federal agencies from advertising on “biased” social media platforms, while the president’s own re-election campaign spends millions on political ads on the very same platforms.

On some level, it looks like the president is throwing things at the wall to see what sticks. But even if the order does not actually lead to action, the threat of regulatory pressure is aimed at bullying social media companies into continuing their hands-off approach to Trump.

Parts of the order echo themes that people writing about content moderation have long sounded. Free speech is a fundamental value. Social media is the new public square. The major platforms have an outsized impact on public discourse. Platforms need to be transparent and accountable. The order frames all this in the language of right-wing grievance, but those of us concerned about the suppression of minority voices and disadvantaged groups have raised these concerns as well.

The order first takes aim at Section 230 of the Communications Decency Act, which gives companies like Facebook and Twitter immunity from civil liability both for hosting and restricting access to content produced by their users. This law, too, has been the target of critics from the left and right and many in between. House Speaker Nancy Pelosi has said that tech companies are using Section 230 to avoid taking responsibility for misinformation and hate speech. Last year, Senator Josh Hawley (R-MO) introduced a bill that would condition Section 230 immunity for big social media companies on their ability to demonstrate that their content-moderation policies and practices were not politically biased.

Two provisions of Section 230 (reproduced below) are at play, and the order aims to combine them in a way that is both contrary to the statutory language and designed to be maximally threatening to social media platforms.

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Section 230 provides two types of immunity for covered entities. Under Section 230(c)(1), companies like Facebook and Twitter are not to be treated as publishers or speakers of content created by their users. This immunizes the platforms from liability for failing to remove unlawful content, which is critical to their ability to operate. Free speech advocates are fans because absent such protection, the platforms would be incentivized to remove a broad swath of posts and tweets out of fear of liability. But Section 230(c)(1) does not require the platforms to act in good faith to avoid liability.

Section 230(c)(2)(A), on the other hand, protects platforms against wrongful takedown claims if they have acted in good faith to restrict access to content that they consider to be objectionable. Ironically, Trump has benefited from the immunity created by Section 230(c)(1); in its absence, platforms would be much more aggressive in deleting posts that could expose them to liability.

A draft of the executive order that was leaked earlier in the day yesterday said it is the “policy” of the United States that a company that doesn’t meet the good-faith requirement of (c)(2) is acting in an editorial capacity and loses immunity under (c)(1). While this argument is not new, it directly contradicts the statutory language. In 2018, when Congress set out to force platforms to crack down on sex trafficking, it passed the Fight Online Sex Trafficking Act (FOSTA), which explicitly removed Section 230(c)(1) immunity for publishing third-party content related to sex trafficking. When courts have been asked to apply the “good faith” requirement of (c)(2) to the immunity provisions of (c)(1), they have declined to do so, citing the clear text of the statute.

The final version of the order pulls back from this position, instead stating that when a company “removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct” and as a matter of U.S. policy, “should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.”

But the order doesn’t give up on its goal of importing the good-faith requirement into the (c)(1) liability shield. It directs the commerce secretary, acting in consultation with the attorney general, to petition the Federal Communications Commission (FCC) to expeditiously propose regulations:

[T]o clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider’s responsibility for its own editorial decisions.

In other words, if a company is not acting in good faith in removing content, can it also be shielded from liability when it doesn’t remove content? Relatedly, the FCC is asked to weigh in on what it means for platforms to act “in good faith” when removing content under Section 230(c)(2), with the order highlighting the role of “pretextual removals” and process concerns as relevant to this determination.

As many have pointed out, the FCC is an independent agency and not directly under Trump’s control. It has touted a hands-off stance on internet regulation, famously undoing the Obama administration’s net neutrality order. And it can’t really tell Twitter how and when to remove content without running afoul of the First Amendment’s prohibition on government interference with private speech. Any attempt to do so would surely lead to a court challenge, hamstringing action by the FCC.

The order also seeks to trigger action by the Federal Trade Commission (FTC), suggesting that it investigate platforms’ implementation of content-moderation rules as “unfair or deceptive practices” and whether large platforms (specifically calling out Twitter) are violating laws. In evaluating complaints, the order says, the FTC should refer to an earlier section of the order that makes a breathtakingly broad claim – that it is “the policy of the United States that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech.” Of course, platforms restrict protected speech all the time. That is the essence of content moderation, and Section 230 gives them the ability to do so without incurring liability. It is difficult to see the FTC taking up this fraught issue, especially given that it has shown little appetite for wading into content moderation.

Opening yet another front, the order directs the attorney general to set up a working group with state attorneys general to consider the potential enforcement of state statutes prohibiting unfair and deceptive practices and produce model legislation for states that don’t have such laws on the books. The working group also is directed to collect information about a range of conservative grievances, such as the reliance on third-party fact checkers with “indicia of bias,” the demonetization of accounts that traffic in misinformation, and the perceived downranking of conservative content. While the FCC and FTC seem like the big guns in this fight, this provision might give new impetus to some states’ attempts to take on social media companies on the basis of claims of bias, despite the provisions of Section 230.

Finally, the order directs Attorney General William Barr to propose federal legislation that would accomplish the order’s policy objectives, essentially inviting Barr to expand on his previous attacks on Section 230.

None of the actions contemplated in the order may come to pass. But that is almost beside the point.

In the lead-up to an election widely predicted to be marred by a maelstrom of misinformation, and with Trump’s ability to hold in-person rallies limited by the coronavirus pandemic, the executive order seeks to clear the way for him to share his mix of misinformation and inflammatory content without pushback or fact-checking from social media platforms.

Even before the executive order took effect, Facebook’s Mark Zuckerberg took to Fox News to argue, controversially, that he doesn’t believe platforms should be “arbiters of truth” for politicians who are widely fact-checked by traditional media. Twitter seems to have taken the opposite approach, doubling down on applying its rules to the president, as well as other world leaders. It remains to be seen how well Trump’s attempt at bullying his favorite social media sites will work.