Analysis

How Trump’s Pressure on Google Threatens Government Manipulation of Search Results

September 5, 2018

Cross-posted from Just Security

Tech behemoths just can’t catch a break these days.

Just as they are struggling to meet calls to contain the spread of misinformation and hate speech online, they are being bashed by conservatives for allegedly suppressing right-wing voices. As is his habit, President Trump added fuel to the fire, taking to Twitter to complain that Google was rigging search results to hide positive stories about his administration. While the First Amendment almost certainly prevents Trump from directly regulating Google, the administration can exert pressure on tech companies to adjust their practices and achieve much the same result: manipulating what the public sees.

In response to Trump’s recriminations, Google declared that its search algorithms are “not used to set a political agenda, and we don’t bias our results toward any particular ideology.” While there is little evidence of political bias, we can’t really evaluate how Google’s algorithm surfaces particular results; the algorithm is secret. Whether or not search results are biased, a slew of federal courts have held that companies like Google and Facebook clearly engage in both speech and press activities when they display content created by third parties. In 2017, for example, a Florida federal court dismissed a challenge to Google’s decision to delist particular sites. According to the court, Google’s free speech rights protect its listing and delisting decisions “whether they are fair or unfair, or motivated by profit or altruism.”

Just last year, the Supreme Court made clear that it is wary of government attempts to interfere with the Internet. It struck down a North Carolina law that made it a felony for registered sex offenders to access certain sites, declaring that cyberspace, and particularly social media, are the most important places for the exchange of views. And earlier this year, a federal court held that the president’s personal Twitter account was a constitutionally protected forum from which users couldn’t be blocked. These cases may set up an eventual conflict between the speech rights of platforms and those of users, but they both point to judicial hostility to government interference.

While direct regulation is likely off the table, political pressure can lead companies to adjust what information is displayed on their sites, and it already has. In 2016, congressional Republicans lambasted Facebook for allowing its employees to manipulate the “Trending” part of its newsfeed to demote conservative sources. Facebook bent over backwards to quell the uproar, and eventually changed the way it selected Trending stories, replacing the curatorial team with engineers who had a more mechanical role in approving stories generated by the Trending algorithm.

In the last couple of years, platforms have succumbed to government calls to remove “terrorist” and “extremist” content, relying on internal standards about permissible speech. After initially claiming that there was no “magic algorithm” that allowed them to distinguish terrorist content, they now routinely tout how much material they have deleted with the help of algorithms. In April, Facebook CEO Mark Zuckerberg told Congress that the company was able to flag “99 percent of the ISIS and al Qaeda content … before any human sees it.” Facebook’s transparency report shows that in the first quarter of 2018, it took action on 1.9 million pieces of content, such as posts, images, videos, or comments. Twitter claims that it has suspended over 1.2 million accounts for terrorist content since August 2015.

But for all the companies’ (increasing) efforts at transparency, we cannot avoid the fact that these numbers reflect judgments made by Facebook and Twitter behind closed doors, and we have no way of knowing whether those judgments are reasonable or the extent to which they reflect biases. We do know that in their attempts to remove terrorists from their platforms, the companies have targeted all manner of political speakers, from Palestinian journalists to Kashmiri activists.

In some ways, social media platforms have become victims of their own claimed ability to find and remove speech. Germany recently passed the NetzDG law, which requires large social media platforms to remove a broad range of posts, from insults of public office to real threats of violence, within 24 hours or face fines of up to 50 million euros. A similar law is in the works at the European Union level.

Normally, democratic governments that wish to prevent speech must publicly identify the speech and speaker they wish to censor and convince a neutral decision-maker that they have met the applicable legal standard, a tough sell under free speech principles. But governments are now incentivizing tech companies to remove certain speech, either directly through regulation or indirectly through political bullying. Government attempts to influence the information the public sees are thus being moved into mostly secret corporate processes. Equally important, the check provided by judicial review is eliminated. These moves exacerbate already serious concerns about the control that social media platforms and search engines exercise over information and communication.

As govern­ments find new ways to influ­ence tech compan­ies, all of us who find our inform­a­tion online and use social media plat­forms must pull in the oppos­ite direc­tion so that we main­tain access to a range of sources and views and have true oppor­tun­it­ies to express ourselves online.
