How Trump’s Pressure on Google Threatens Government Manipulation of Search Results

September 5, 2018

Cross-posted from Just Security

Tech behemoths just can’t catch a break these days.

Just as they are struggling to meet calls to contain the spread of misinformation and hate speech online, they are being bashed by conservatives for allegedly suppressing right-wing voices. As is his habit, President Trump added fuel to the fire, taking to Twitter to complain that Google was rigging search results to hide positive stories about his administration. While the First Amendment almost certainly prevents Trump from directly regulating Google, the administration can exert pressure on tech companies to adjust their practices and achieve much the same result: manipulating what the public sees.

In response to Trump’s recriminations, Google declared that its search algorithms are “not used to set a political agenda, and we don’t bias our results toward any particular ideology.” While there is little evidence of political bias, we can’t really evaluate how Google’s algorithm surfaces particular results — it’s secret. Regardless of whether search results are biased, a slew of federal courts have held that companies like Google and Facebook clearly engage in both speech and press activities when they display content created by third parties. In 2017, for example, a Florida federal court dismissed a challenge to Google’s decision to delist particular sites. According to the court, Google’s free speech rights protect its listing and delisting decisions “whether they are fair or unfair, or motivated by profit or altruism.”

Just last year, the Supreme Court made clear that it is wary of government attempts to interfere with the Internet. It struck down a North Carolina law that made it a felony for registered sex offenders to access certain sites, declaring that cyberspace and particularly social media are the most important places for the exchange of views. And earlier this year, a federal court held that the president’s personal Twitter account was a constitutionally protected forum from which users couldn’t be blocked. These cases may set up an eventual conflict between the speech rights of platforms and those of users, but they both point to judicial hostility to government interference.

While direct regulation is likely off the table, political pressure can — and has — led companies to adjust what information is displayed on their sites. In 2016, congressional Republicans lambasted Facebook over reports that its curators suppressed conservative sources in its “Trending” news section. Facebook bent over backwards to quell the uproar, and eventually changed the way it selected Trending stories, replacing the curatorial team with engineers who had a more mechanical role in approving stories generated by the Trending algorithm.

In the last couple of years, platforms have succumbed to government calls to remove “terrorist” and “extremist” content, relying on internal standards about permissible speech. After initially claiming that there was no “magic algorithm” that allowed them to distinguish terrorist content, they now routinely tout how much material they have deleted with the help of algorithms. In April, Mark Zuckerberg told Congress that Facebook was able to flag “99 percent of the ISIS and al Qaeda content … before any human sees it.” The company’s transparency report shows that in the first quarter of 2018, it took action on 1.9 million pieces of content, such as posts, images, videos, or comments. Twitter claims that it has suspended over 1.2 million accounts for terrorist content since August 2015.

But for all the companies’ increasing efforts at transparency, we cannot avoid the fact that these numbers reflect judgments made by Facebook and Twitter behind closed doors, and we have no way of knowing whether they are reasonable or the extent to which they reflect biases. We do know that in their attempts to remove terrorists from their platforms, they have targeted all manner of political speech, from Palestinian journalists to Kashmiri activists.

In some ways, social media platforms have become victims of their own claimed ability to find and remove speech. Germany recently passed the NetzDG law, which requires large social media platforms to remove a broad range of posts, from insults to real threats of violence, within 24 hours or face fines of up to 50 million euros. A similar law is in the works at the European Union level.

Normally, democratic governments that wish to prevent speech must publicly identify the speech and speaker they wish to censor and convince a neutral decision-maker that they have met the applicable legal standard — a tough sell under free speech principles. But governments are now incentivizing tech companies to remove certain speech, either directly through regulation or indirectly through political bullying. Government attempts to influence the information the public sees are being moved into mostly secret corporate processes. Equally important, the check provided by judicial review is eliminated. These moves exacerbate already serious concerns about the control that social media platforms and search engines exercise over information and communication.

As governments find new ways to influence tech companies, all of us who find our information online and use social media platforms must pull in the opposite direction so that we maintain access to a range of sources and views and have true opportunities to express ourselves online.