When it comes to our data privacy, we don’t really have a choice

The debate around personal data has largely focused on privacy as an individual choice. The Cambridge Analytica scandal shows how little choice we actually have
Stephanie MacLellan
In this April 4, 2013 file photo, Facebook CEO Mark Zuckerberg walks at the company’s headquarters in Menlo Park, Calif. Facebook is having one of its worst weeks as a publicly traded company, with a share sell-off continuing for a second day. Britain’s Information Commissioner Elizabeth Denham told the BBC that she is investigating Facebook and has asked the company not to pursue its own audit of Cambridge Analytica’s data use; Denham is also pursuing a warrant to search Cambridge Analytica’s servers. (AP Photo/Marcio Jose Sanchez, File)

Stephanie MacLellan is a research associate at the Centre for International Governance Innovation, specializing in internet governance, cybersecurity and digital rights.

The Edward Snowden leaks of 2013 sparked outrage and scrutiny about how much access state security agencies have to our online activities and mobile phone data. The public discussion that followed was largely based on the assumption that the privacy over-reaches we needed to worry about would come from government actors. Those who favoured some sacrifices to privacy in the name of protecting society fell back on a familiar refrain: only people with something to hide should be worried about privacy and surveillance.

Instead, five years later, we are finally beginning to understand just how much information we have allowed private companies such as Facebook to gather about us, and how that can affect our societies on a broad scale—not just those of us who might be hiding something.

This week, The New York Times and The Observer reported that a shadowy political consulting firm called Cambridge Analytica collected data from 50 million Facebook user profiles. Of that number, only about 270,000 users agreed to hand over access to their profile data by downloading a third-party survey app; the rest were sucked in because they had a friend who granted access, most likely without their knowledge. The data, collected in 2014, was supposed to be used for research, but the reports say it formed the basis of voter-targeting efforts in the 2016 U.S. election, with campaign ads micro-targeted to the audiences most likely to react to them. Facebook has since suspended Cambridge Analytica, claiming the firm’s data hoard was supposed to have been deleted years ago, and Facebook founder Mark Zuckerberg is now being called on the carpet by governments around the world. “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you,” he said in a statement days after the report.

But what these revelations have underscored is just how widely the mainstream debate about online privacy has missed the point. We’re giving away our data, yes—but the question is no longer whether we choose to opt in, but what it is we’re even opting into.

Facebook, after all, refused to call this a data breach. That description is accurate: the data collection process worked exactly as it was supposed to at the time, with the permission of users who downloaded the survey app. Facebook later changed its policy to prevent apps from gaining access to users’ friends’ information—but this was normal, consensual data collection that, in the hands of Cambridge Analytica, had unforeseen consequences.

Those details we reveal on Facebook—our likes and angry faces, our friends, the posts we click on—can reveal staggering amounts of information about us, from our political leanings, to our sexual orientation, to whether we are depressed. The growing use of algorithms and machine learning by digital platforms can amplify the effects of this data harvesting and interpretation. Members of the “nothing to hide” camp may have come to grips with someone else being able to see their chat messages or intimate pictures, but were they ready to accept private companies creating a psychological profile of them and using it to try to manipulate their political behaviour? Or to increase political polarization across a society at large with inflammatory ads?

And that’s the issue. The debate around online privacy has been presented as a clear choice: if you opt in by clicking on a Terms of Service agreement—which you probably haven’t read—you agree to share some of your private data. But the Cambridge Analytica case demonstrates that we can’t really make an informed decision about which potential uses of our data we are willing to accept because not even the platform companies can fully anticipate them. Or as digital researcher Zeynep Tufekci wrote in a New York Times column: “consent to ongoing and extensive data collection can be neither fully informed nor truly consensual.”

Social networks have taken steps toward offering easier-to-navigate privacy settings for their users, but they still fall far short of clearly explaining how each element of user data can be used and allowing users to explicitly opt in or out of each use. That would give users more control over how much data they protect without having to forgo the platform entirely—an option that is simply unfeasible for many people as social media becomes more integrated into their daily lives. Regulations around the use of consumer data, such as the General Data Protection Regulation coming into effect in the European Union this May, might help in this regard.

The Cambridge Analytica scandal confronts us with the fact that privacy measures aren’t just a matter of individual choice—they have consequences for all members of society, even those who don’t use social media. It’s long past time to introduce real protections to keep the personal information we entrust to social platforms from being exploited.