Facebook's Data Mishap and the Case for Regulation
The Cambridge Analytica scandal exposes the mistake in allowing companies like Facebook to self-regulate in the first place.
Facebook chief Mark Zuckerberg this week conceded his company had “made mistakes” in allowing the data firm Cambridge Analytica to harvest information on 50 million users. The news that user data was shared more widely than most realized has sent Facebook leadership scrambling. And Cambridge Analytica, once portrayed as the crown jewel in President Trump’s 2016 digital campaign efforts, is coming under ever more scrutiny.
The incident calls into question just how well massive firms like Facebook manage and protect their users’ data. But it also exposes the mistake of allowing these companies to self-regulate in the first place. As this incident shows, we can’t wait for Facebook – or Twitter, Google, and other firms – to learn from their mistakes.
Nor do we have to wait for the full extent of the Cambridge Analytica scandal to come to light. The case of Russia’s interference in the 2016 presidential election already shows just how vulnerable these big social media companies are, and how urgent it is to bring them under a modern regulatory umbrella. Updating our disclosure laws for the social media age will give the American people an effective tool against propaganda from foreign adversaries and secretive data firms alike.
How did we get here? In 2014, research professor Aleksandr Kogan developed a Facebook app designed to perform psychological evaluations on consenting users based on Facebook use history and “likes.” A year later, Facebook learned that Kogan was passing information gleaned from the app to Cambridge Analytica. Kogan’s program also harvested and shared the likes and history of those consenting users’ friends. If you had a friend who used Kogan’s app (“thisisyourdigitallife”), your data may have ended up in Cambridge Analytica’s hands.
Facebook did not intervene until a year after it knew Kogan was sharing the data. (In 2014, the social network stopped allowing third-party apps to collect data the way Kogan did.) Chris Wylie, a former Cambridge Analytica contractor turned whistleblower, received a letter from Facebook’s attorneys in August 2016 asking him to destroy any user data in his possession. According to Wylie, they never followed up: no second letter, no confirmation, no forensic checks to ensure the data had been deleted.
In a similar vein, Facebook is still being opaque about the extent to which Russian trolls harnessed the platform’s vulnerabilities to spread misinformation to millions of Facebook users during the 2016 presidential campaign. It was initially reticent to even acknowledge the problem, denying any knowledge of Russian activity months after it was first reported. Even when Facebook finally disclosed the existence of the Russian advertisements, its general counsel, Colin Stretch, admitted that Facebook still does not know the extent of Russian meddling in 2016.
We know that this won’t be the last attempt at election manipulation – whether by Russia, another foreign government, or individual bad actors. At the same time, the intelligence community has warned us repeatedly that American elections remain vulnerable to the types of propaganda the Internet Research Agency was able to serve users while Facebook failed to intervene.
We know, too, that there are steps we can take to protect ourselves.
Data operators understand their own weakness: disclosure. Cambridge Analytica executive Mark Turnbull explained that his operation “has to happen without anyone thinking, ‘That’s propaganda.’ Because the moment you think, ‘that’s propaganda,’ the next question is, ‘who’s put that out?’” Letting the American people know where their advertising comes from – be it Russian operatives or political campaigns using detailed user data – is the most powerful weapon we have against propaganda.
Now platforms are starting to learn that self-regulation may be their own weakness. Facebook’s stock is plummeting, it is under investigation in several jurisdictions, #DeleteFacebook is trending, and at least one major executive is stepping down in the midst of the scandal. Mark Zuckerberg gave a series of interviews to describe the steps Facebook needs to take to ensure its users’ privacy – it’s still not enough.
Facebook’s public image is not likely to improve until the public is convinced that it is safe from exploitation and propaganda. That will require strong regulations and tough, consistent enforcement.
Senators John McCain, Amy Klobuchar and Mark Warner introduced the Honest Ads Act to handle this exact scenario. The bill would bring online ads under disclosure rules that already apply to television and radio. The American people deserve to understand who and what is shaping the information landscape around our elections.
Zuckerberg recently said he supports the Honest Ads Act because transparency is “good for the internet.” This kind of common-sense regulation may also be the easiest way for Facebook to end this scandal and get back to business. Facebook needs it because self-regulation is not enough to protect its brand. The American people need it too, because it is our best shield against propaganda seeking to diminish the power of our democracy and the value of our vote.