The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, by Shoshana Zuboff, Profile Books, 704 pp, £25, ISBN: 978-1781256848
At a recent industry fair in China, a defence manufacturer proudly displayed the high-tech surveillance system it was developing as part of the national effort to apply military cyber techniques to civilian security. The monitoring and control of millions of Uighurs in Xinjiang province through data harvesting is well under way. Throughout the trade fair demonstration, the proud developers displayed their slogan on a large screen: “If someone exists, there will be traces. If there are connections, there will be information.” This is the vision of precise, all-seeing, infallible mass surveillance. The platform on which it is built draws on databases with 68 billion records, tracking people’s activities and movements incessantly and sustaining fear in the population that the government can see into every corner of their lives.
This cutting-edge data-gathering system, which includes facial recognition records of millions of people, is available to local police on a mobile app that can sift through billions of records in an instant. If someone is tagged as a potential threat, a set of checkpoints across towns and cities triggers an alarm when the suspect tries to leave the neighbourhood or enter a public space. The system is programmed to harvest personal information under vague, broad categories of behaviour deemed indicators of suspiciousness, such as extended travel abroad, the use of “unusual” amounts of electricity, ceasing to use a smartphone, refuelling someone else’s car, avoiding the use of one’s front door when coming or going from home or making a donation to a mosque. Future plans include systematically gathering biological data (including DNA), fingerprint and voice recordings, iris scans and head portraits.
The drive for certainty in predicting a population’s behaviour underpins the wider Chinese project of a comprehensive personal credit scoring system, like Alibaba’s Sesame Credit. The big political vision is mass behaviour modification anchored in algorithms that reward, punish and shape social outcomes with a host of perks, including access to train travel, bank loans or better housing.
The implications of high-tech policing in China worry Human Rights Watch and other watchdogs able to peer behind the facade of the surveillance state. The Chinese approach to digital surveillance is proving attractive elsewhere too: in eighteen other countries, China is building digital surveillance systems that will extend the loss of privacy on an industrial scale far beyond its own borders. Presented initially as a public security solution, the Chinese approach reveals a darker, dystopian ambition: a tool for political repression and autocratic control over society.
But are there other forms of digital surveillance that should concern us? The answer lies at the core of an outstanding new book by Shoshana Zuboff: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. This is an impressive volume of 700 pages, including 130 pages of detailed notes and (thankfully) an excellent index. Like a manifesto, it is passionately argued. Zuboff explores in great detail how the West, too, is developing a digital surveillance system, one very different from China’s. It operates largely unobtrusively, under the radar of public awareness, and it is growing at a phenomenal rate as a major threat to both privacy and democracy.
In some ways, Zuboff’s book parallels other recent attempts to sound alarm bells about current threats to democracy. Since the Brexit referendum and Trump’s election in 2016, publishers’ catalogues have been filling with books detailing the seemingly unstoppable wave of far-right politics and the sadly dimming prospects for democracy. Titles include How Democracies Die, How Democracy Ends, How Fascism Works, Can it Happen Here? and Fascism: A Warning. Some of these make brief reference to the internet, but nowhere is there as sustained an examination of digital surveillance activities as in Zuboff’s work, which is likely to provide the scholarly foundation for many years to come of further critical interrogation of how surveillance capitalism is reshaping the world.
The first sign of how vulnerable people’s privacy would become in the online world was the discovery that Google’s Gmail, launched in 2004, could scan private correspondence to generate advertising. Google was in effect signalling that users’ privacy was at the mercy of the company owning the servers. Facebook followed suit by launching Beacon, which allowed its advertisers to track users across the internet and disclose personal information without permission, usurping people’s ability to control what they wanted to disclose. Public outrage was intense in both cases, but the genie was out of the bottle. Privacy could no longer be regarded as a social norm and decision rights over privacy became the key element in the new economic logic of surveillance capitalism.
Google became its pioneer, experimenter, lead practitioner and diffuser, as the Ford Motor Company had been a century earlier for mass-production-based managerial capitalism. Its invention of targeted advertising using automated auction methods paved the way to financial success at a critical time in its development. Each Google web search produced not only a search result but an unnoticed trove of collateral data, previously discarded as “digital exhaust”, such as the number and pattern of search terms used, the phrasing of a query, spelling, punctuation, click patterns, dwell times on websites and location. These behavioural by-products of searching had been haphazardly stored and operationally ignored as accidental data caches.
But then came the idea of “data-mining” and the ground-breaking insight that stories about each Google user – thoughts, feelings, interests ‑ could be put together at high speed from all the seemingly chaotic signals that trail every online action. Yahoo and other early search engines in the 1990s had the chance to do the same but failed to recognise the gold dust in the detritus of their interactions with users.
In Google, the behavioural surplus left behind by searchers was quickly reimagined as a critical element in transforming the ailing company at a time when investors threatened to withdraw support. Google could now recycle this surplus in a continuous reflexive process of machine learning and improvement. In the early days, behavioural data were put to work entirely on the user’s behalf, providing extra search value at no extra cost to users. But now they were to be used to create an entirely different product, to make targeted advertising as relevant as possible to each individual user. This was a crucial mutation for the company and it quickly turned the internet, and the very nature of information capitalism, towards an astonishingly lucrative surveillance project. The data market in effect has become the engine of the internet and the privacy policies we agree to but don’t fully understand, because they are long, verbose, opaque and full of legal jargon, help fuel it.
In the online auction system, the key ingredient is Google’s estimate of the likelihood that someone will actually click on an ad. The charge for an advertiser to use Google (and later, Facebook and others) is based on leading-edge algorithmic programmes that produce powerful predictions of users’ online behaviour, constantly improving with ever more vast supplies of behavioural surplus. Google can now promise to deliver a particular message to a particular person at just the moment when it has a high probability of actually influencing purchasing behaviour. And it can do this in the fraction of a moment that it takes to click through a web page.
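The core of such an auction can be illustrated in a few lines. The sketch below is not Google’s actual system (its algorithms are proprietary and far more elaborate); it simply shows why a predicted click probability, derived from behavioural surplus, is as valuable to the platform as the advertiser’s bid itself. All names and numbers are invented.

```python
# Illustrative sketch of an automated ad auction: ads are ranked by
# expected revenue per impression, i.e. bid multiplied by the model's
# predicted probability of a click. Figures are hypothetical.

def rank_ads(ads):
    """Order ads by expected value: bid * predicted click probability."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["p_click"], reverse=True)

ads = [
    {"name": "A", "bid": 2.00, "p_click": 0.010},  # expected value 0.020
    {"name": "B", "bid": 0.50, "p_click": 0.060},  # expected value 0.030
    {"name": "C", "bid": 1.00, "p_click": 0.025},  # expected value 0.025
]

winner = rank_ads(ads)[0]
# The highest bidder loses to a cheaper ad with a better-predicted click
# probability, which is why ever-richer behavioural data pays.
```

The point of the toy example is structural: improving `p_click` even slightly, across trillions of auctions, translates directly into revenue, which is the economic engine driving the appetite for behavioural surplus.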
Google learned to accommodate what are now trillions of simultaneous automated auctions. The demand for economies of scale soon pushed beyond search pages, so that the entire internet became a canvas for Google’s data extraction and analysis. By 2010, after only six years of fine-tuning, its automated approach to online surveillance at scale was generating annual revenues of $10 billion. It was now operating as a one-way mirror, with secret access to vast amounts of behavioural data, beyond users’ awareness or consent, from which to infer thoughts, feelings, emotions and intentions.
Zuboff dissects the evolution of Google through an impressive range of sources, including current and former employees and patents filed to protect algorithms. To achieve its historically unprecedented level of surveillance, it was necessary to suspend the principles underpinning its founding vision: the initial promise of information capitalism as a liberating and democratic force. It was now using methods specifically designed to override users’ privacy and decision rights, gaining new powers to follow people across the internet and around the world, taking personal information without asking and using it as a unilaterally claimed resource to work in the service of third parties.
This new logic of surveillance spread to Facebook in 2008 when Google executive Sheryl Sandberg was hired to lead its transformation from a social networking site to an advertising giant. “We have better information than anyone else,” she said. “We know gender, age, location, and it’s real data as opposed to the stuff other people infer.” Facebook’s user culture of intimacy and sharing would yield a new level of surplus at enormous scale. It could be tracked, scraped, stored and analysed without requiring people to voluntarily share intimate personal information with the company.
Success came more quickly, and more easily, than anyone could have predicted. Facebook mushroomed into a vast digital apparatus with world-historic concentrations of advanced computational knowledge about users, bringing vast wealth to its owners. Its new goal was to use this knowledge to create near-perfect prediction from ubiquitous behavioural surplus supply chains. As with Google, success in accurate targeting became more valuable as predictions approached certainty, and supply chains for personal data across the internet were multiplying by the week, especially from a plethora of new apps.
Privacy and personal control of one’s own information are still the crucial issues. Most mobile health and fitness apps, for instance, are not subject to health privacy laws. Companies are expected to self-regulate. Yet a recent legal review concludes that most of them take the user’s private information without permission and do not disclose to the user that their personal data will be sent to advertising companies, which can then share it with other third parties. A 2016 study in the Journal of the American Medical Association identified over 200 diabetes apps that could modify or delete one’s personal information (64 per cent), read a phone’s status and identity (31 per cent), gather location data (27 per cent) or view one’s Wi-Fi connections (12 per cent). Some can activate a phone’s camera and access photos and videos. A leak from Facebook in 2017 revealed its system for gathering “affective surplus” about Australian teenagers’ mood shifts over a weekly cycle, pinpointing from pictures, posts and other interactions the exact moment when a “confidence boost” of advertising cues and nudges could be successfully applied.
Both facial recognition and speech analysis are at the cutting edge of the frenzied competition to find new ways to render private information into analysable data. Privacy advocates like the Center for Public Integrity are fighting back, pushing for laws that grant individuals the right to sue a company for unlawful rendition. Facebook has a unique competitive advantage in facial recognition, with two billion users uploading 350 million photos every day. In 2018, its researchers announced that their automated system was able to recognise faces “in the wild” with over 97 per cent accuracy, using “deep learning” based on very large training sets. With its massive investment in lobbying operations at state and federal levels, the company has successfully turned back legislative privacy proposals in half a dozen US states. Advocacy groups argue that it must be the individual alone who decides which personal experience is rendered into data, the purpose of which should be to enrich the person’s life. There can be rendition in our digital future without surveillance, they insist, if the user is put at the centre of decision-making on personal data and is not relegated to being a means to others’ ends.
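The “deep learning” approach behind such systems typically works by mapping each face image to a numerical vector (an “embedding”) and declaring two images the same person when their vectors are sufficiently close. The sketch below is a minimal, hypothetical illustration of that comparison step only; the neural network that produces the vectors is not shown, and the toy vectors and threshold are invented.

```python
import math

# Hypothetical sketch of embedding-based face matching: a network (not
# shown) turns each photo into a vector; two vectors whose cosine
# similarity exceeds a threshold are treated as the same person.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_person(emb1, emb2, threshold=0.8):
    return cosine_similarity(emb1, emb2) >= threshold

# Toy embeddings standing in for a network's output
alice_photo_1 = [0.90, 0.10, 0.30]
alice_photo_2 = [0.85, 0.15, 0.28]
bob_photo     = [0.10, 0.90, 0.20]
```

The scale advantage Zuboff describes lies in the training data: the more labelled photos a company holds, the better its network’s embeddings separate one face from another “in the wild”.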
Others too have been raising concerns about data-harvesting and privacy. Google’s Sidewalk Labs plans to build a “smart city” in Toronto, but the plans are coming under increasing scrutiny amid concerns over privacy and data-harvesting. One American venture capitalist wrote to the city council that Google could not be trusted to safely manage the data it collects. The smart city, he said, “is the most highly evolved version to date of … surveillance capitalism”, suggesting Google will “use algorithms to nudge human behavior” in ways favourable to its business. “It is a dystopian vision that has no place in a democratic society.” The co-founder of Blackberry called the project “a colonizing experiment in surveillance capitalism attempting to bulldoze important urban, civic and political issues”. The former privacy commissioner of Ontario wrote in her resignation letter: “I imagined us creating a Smart City of Privacy, as opposed to a Smart City of Surveillance.” She had been told that all data collected would be wiped and unidentifiable, that Google was committed to embedding privacy in every aspect of the project. But then she learned that third parties would be able to access identifiable information gathered in the Smart City.
Virtual assistants, which use human language to control computers, smartphones and internet devices in homes and offices, will probably be the next big consumer technology market, and there is a fear that voice-gathered information in the hands of a tiny clique of big companies could erase what is left of online privacy. Google is now close to having one billion assistants installed on devices and Amazon has more than 100 million versions of Echo/Alexa operating. As with operating systems and browsers, these have not attracted any significant scrutiny from government regulators.
Facebook, Google, Amazon, Apple, Microsoft and China’s Baidu are racing to gain advantage in rendering private speech into digital surplus and improving their ability to understand complex conversations. Microsoft experiments with volunteers speaking in the home setting of mock apartments in cities around the globe. Baidu collects speech in every dialect in China to teach its computers how to render more surplus from the structure of speech and from how we say something: vocabulary, pronunciation, intonation, cadence, inflection, dialect and other variables in human language.
Complaints from consumer groups focus on internet-enabled dolls that systematically prompt children to submit a range of personal information, including location. Germany’s Federal Network Agency in 2017 became the first regulator to move against collecting underage behavioural surplus, barring one toy as an illegal surveillance device. Meanwhile, Amazon is making deals with builders to embed its Alexa system into new homes, to integrate with door locks, light switches, security systems, door bells, thermostats and other “smart” devices linked in the Internet of Things (IoT). Its forward-looking patents include development of a “voice-sniffer” algorithm able to respond with instant product offers to “hot” words.
The future of surveillance capitalism depends largely on governments’ lack of interest (excepting China, of course) in regulating for users’ control over their personal data. So it was perhaps inevitable that the tech giants would turn their attention to political persuasion. In 2010, Facebook began its series of experiments (now numbering in the hundreds) to see if micro-targeting individual newsfeeds could alter users’ feelings about political candidates and how they would vote.
The first experiment to come to public attention was a controlled, randomised study, with the results published in a reputable academic journal. The problem was that none of the 61 million people involved were aware that they were being manipulated by a hidden hand. Facebook calculated that by manipulating newsfeeds it succeeded in sending 60,000 additional voters to the mid-term polls and influencing the vote of 280,000 people through a “social contagion” effect that altered mood and behaviour with subliminal cues. The article generated academic outrage for ignoring standard ethical protocols (transparency, avoidance of harm, informed consent, allowing participants to opt out). The worry was that if Facebook could tweak emotions and make us vote in a particular way, what else could it do? “The Facebook study,” wrote The Guardian, “paints a dystopian future in which academic researchers escape ethical restrictions by teaming up with private companies to test increasingly dangerous or harmful interventions.” But Facebook was well on its way to deeper involvement in politics.
The Cambridge Analytica scandal, in which Facebook’s gigantic data set was used to swing the Brexit vote and the election of Trump in 2016, has been well explored elsewhere and is discussed in some detail by Zuboff. The political goal was to match, at great speed and at great scale, the most effective method of persuasion with each voter’s personality type. Each “type” was gleaned from analysis of close to 5,000 data points harvested from interactions with the social media platform. Zuboff is in no doubt that manipulation of the electoral process will deepen and intensify as artificial intelligence (AI) improves and as new streams of personal data become available through ubiquitous computing and the IoT. As one Silicon Valley executive said: “There’s all that dumb real estate out there and we’ve got to turn it into revenue. The IoT is all push, not pull. Most consumers do not feel a need for these devices … The bottom line is that the Valley has decided that this has to be the next big thing so that firms here can grow.”
Projects well in development include internet-enabled fabric with complex sensors woven into textile, like the “interactive denim” being developed by Levi Strauss with the aim of “seeing” the physical context of the body and “detecting and deciphering gestures as subtle as the twitch of your finger”. A Silicon Valley executive declared: “It’s manifest destiny. Ninety-eight percent of the things in the world are not connected. So we’re gonna connect them. It could be a moisture sensor that sits in the ground. It could be your liver. That’s your IoT. The next step is what we do with the data. We’ll visualize it, make sense of it and monetize it. That’s our IoT.”
Connected cars can deliver driver data to third parties who can figure out where we are, where we’re going and what we want. Google and Amazon are locked in competition for control of the dashboards of connected cars and the speech that hovers around them. Insurers are interested in drivers’ performance data, whether they rapidly accelerate or drive at high speed, how hard they brake, how rapidly they make turns and whether they use the turn signal. Plans include using digital surplus to trigger automated punishments for bad behaviours, such as rate hikes, financial penalties, curfews, engine lockdowns. Car hire companies want to remotely disable a vehicle when repayments are overdue and track a car in real time so a repo man can recover it.
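The logic of such automated punishment is simple to state: telemetry events are counted and converted into a financial consequence with no human in the loop. The sketch below is an invented illustration of usage-based insurance scoring; the event names, surcharge rate and base premium are all hypothetical, not any insurer’s actual scheme.

```python
# Invented sketch of usage-based insurance pricing from connected-car
# telemetry: flagged driving events automatically raise the premium.
# Thresholds, event names and rates are assumptions for illustration.

BASE_PREMIUM = 100.0
SURCHARGE_PER_EVENT = 0.02  # 2% per flagged event (assumption)

def adjusted_premium(trip_events):
    """Count harsh-braking and speeding events and apply a surcharge."""
    flagged = sum(1 for e in trip_events if e in ("harsh_brake", "speeding"))
    return BASE_PREMIUM * (1 + SURCHARGE_PER_EVENT * flagged)

# Two flagged events on one trip raise a 100.0 premium to 104.0
premium = adjusted_premium(["speeding", "harsh_brake", "turn_signal_used"])
```

What makes this consequential is not the arithmetic but the pipeline feeding it: the car reports the events continuously, and the rate hike, curfew or engine lockdown follows from the score automatically.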
Zuboff urges critical opposition to the techno-utopianism coming from Google and Facebook’s “applied utopistics”. Both are constructing a prototype of an instrumentarian power, showcasing feats of behavioural engineering such as “emotion analytics” that exploit the human inclination toward empathy, belonging and acceptance. She presents a penetrating analysis of BF Skinner’s work on behaviour modification and his instrumentarian vision of the future. This is now shaping the utopian vision that every person, object and process can be corralled into data supply chains that feed AI-powered machines to manage and monetise our frailties.
But beyond academic criticism, what legislative scrutiny is there of the arrival of new technologies that allow a handful of private companies to acquire such power at a mass scale? Surveillance capitalists have thrived in the total lawlessness of cyberspace and relentlessly fight any attempt to constrain the rendition of personal data harvested from use of the internet. The ferocity with which they claim their right to rendition is clear evidence of its core importance in the economic logic of maximising surveillance revenues.
Some attempt to reduce the lawlessness of cyberspace can be seen in the efforts of American regulators (the FTC and FCC) and the European Union (through GDPR) to push back against monopolies that grab power over privacy. The problem is that the pervasive rhetoric of the inevitability of digital omniscience has become a full-blown ideology inside the tech industry, a Trojan horse for powerful economic imperatives. Democratic Party aspirants to the White House in 2019 are placing this on the political agenda for the next election. Some of them advocate breaking up big tech companies, listening to critics’ arguments that for democracy to thrive we need more agency in our lives, not less.
While the current legislative emphasis is on monopoly power that reduces competition, monopoly power also allows a company to deliberately degrade its service below what a competitive marketplace would allow. Facebook launched its social network promising, as an early competitive advantage over rivals, that it would not collect private information. But Google exited the social media market, and Facebook then acquired Instagram and WhatsApp, cornering that market. It immediately revoked its earlier promise and changed its privacy pact with users. The ubiquitous tracking of users online, across more than eight million websites and apps (tracking that continues even after users have chosen to leave Facebook), provides digital exhaust for deducing a wide range of personal attributes that people would typically assume to be private, including sexual orientation, religious and political views, current state of (un)happiness, level of intelligence, parental separation, gender, personality traits, use of addictive substances and personal phobias. AI-powered systems run by companies (in the US) or the government (in China) compute millions of profiles without individuals’ consent or indeed awareness of the manipulation taking place behind the scenes.
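The inference step itself is mundane machine learning: behavioural signals are weighted and summed into a probability for some private attribute. The sketch below is a hypothetical, hand-weighted illustration of that idea; real systems learn the weights from millions of profiles, and every feature name and number here is invented.

```python
import math

# Hypothetical illustration of attribute inference from tracked
# behaviour: a weighted sum of browsing signals, passed through a
# logistic function, yields a probability for some private attribute.
# Features and weights are invented for illustration only.

WEIGHTS = {"visited_site_x": 1.2, "liked_page_y": 0.8, "search_term_z": 1.5}
BIAS = -2.0

def predict_attribute(signals):
    """Return a probability in (0, 1) from the user's observed signals."""
    score = BIAS + sum(WEIGHTS[s] for s in signals if s in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))  # logistic function

# A user who disclosed nothing voluntarily still yields a prediction
p = predict_attribute(["visited_site_x", "search_term_z"])
```

The discomfort Zuboff articulates is visible even in the toy version: the user supplies no answer to any question, yet the system emits a confident guess about something intimate, computed entirely from digital exhaust.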
GDPR went into effect in Europe in 2018, establishing several privacy rights that do not exist in the US, including the requirement that tech companies must receive explicit permission from users before collecting personal information. Under a new Californian law that comes into effect in 2020, users will have the right to access data that has been harvested and to block any sale of data to third parties. But a progressive state law like this can be pre-empted and nullified by weak federal law. And this is now a real danger.
Despite the received wisdom in Silicon Valley that the tech industry is best left to its own devices without the blundering interference of a clueless Congress, several current bills show a new-found sophisticated understanding of the dangers posed by a major sector of the economy that has proved itself incapable of self-regulation, but yet has ambitions to develop, outside legal limits, an IoT world filled with billions of sensors. Some bills offer weak privacy protection and would gut the rights provisions the California law already protects, including the right to sue tech companies. But other bills go further in meeting consumer protection rights, including a ban on conducting experiments without users’ consent (like the Facebook experiment in 2014), protecting children from the cultivation of compulsive usage, and testing AI systems for in-built biases like racial discrimination.
Designing good law is one thing, but getting action is quite another. Zuboff explores the frustrating case of Belgian mathematician and data protection activist Paul-Olivier Dehaye, who set out in 2017 to take a bottom-up investigative approach to uncovering the secrets of Cambridge Analytica’s illegitimate means of influencing voters. What did Facebook know about him in these rogue data operations, and which websites had been used? Facebook allowed him no access to its exclusive “shadow text”, in which behavioural surplus is queued up for manufacture into prediction products. Appeals to the Irish Data Protection Commissioner (who oversees GDPR) also proved unhelpful.
Facebook says it may now move its social media platform to a WhatsApp model, where end-to-end encryption would make it closed to scrutiny by regulators, researchers and journalists. Until 2018, 1.5 billion Facebook users, including those in Africa, Asia, Australia and Latin America, were governed by terms of service issued by Facebook headquarters in Ireland, terms subject to the new EU framework. In April 2018, it quietly issued new terms of service, placing those 1.5 billion users under US privacy law, thus eliminating their ability to file claims in Irish courts and ensuring that GDPR could not circumscribe its operations.
Zuboff has a good detective’s instinct for investigating the detailed moves of big tech companies famous for their secretiveness. But she is also a philosopher and social psychologist who can give intellectual depth to a subject that remains completely invisible to most of us. Though we spend large portions of our lives online, we are blissfully unaware of our loss of informational privacy. Zuboff argues cogently that this new form of capitalism is as significant to human nature as industrial capitalism was to the natural world in the nineteenth and twentieth centuries. Her manifesto is a call to throw off the helpless feeling that it is all somehow inevitable. We need a sense of outrage at the way the internet has developed thus far in its short history, so that we can resurrect its early promise to enrich human lives and not allow our digital footprint to be exploited by people more interested in ignoring, overriding and displacing everything about us that is personal.
Farrel Corcoran is Professor Emeritus, School of Communication, Dublin City University. He is a former Chairman of RTE.