
A Zucking Nightmare

by Alexandre Leskanich

The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Shoshana Zuboff, Profile Books, 2019

Democracy Hacked: Political Turmoil and Information Warfare in the Digital Age, Martin Moore, Oneworld Books, 2018


Think of Big Brother today and you might think of Mark Zuckerberg. Of course, you’d rather not, but he intrudes nonetheless, his features puckered in a benevolent smile, his metallic autotune emitting bursts of banality as he garners advertising revenue off the back of people’s social insecurities. A canny businessman, ‘Zuck’ hopes that the language of connection, of collective endeavour, of combined effort, speaks to his possession of a heightened social conscience. That’s even more necessary these days, as Facebook becomes an effective machine for the dissemination of political propaganda. The network’s central user-illusion – that it exists entirely to help you – is lately maintained through assurances penned by Zuckerberg himself. ‘Are we building the world we all want?’ he pondered two years ago in a public relations missive entitled ‘Building Global Community’. Even after all he’s done for us, he still wants to help us ‘come together’ to harness ‘people’s intrinsic goodness aggregated across our community.’ He’s not in it for himself, folks, he’s in it for the world. That people are not always particularly good, that aggregating goodness means aggregating an awful lot of bad as well, and that we evidently don’t all want the same world, escapes serious consideration.


Earlier this year, he was at it again with another oxymoron, his ‘privacy-focused vision for social networking’. Cue the same turgid prose, the same barely intelligible content. Zuckerberg’s prosaic, heavily calculated statements bear out Orwell’s reminder that language is ‘an instrument which we shape for our own purposes.’ Regrettably, in pursuit of his own ends, the writer becomes ‘almost indifferent as to whether his words mean anything or not.’ Sure enough, despite signs of contrition, Zuckerberg’s forays into crisis management have reiterated the surface tenets of Zuckism: ‘inclusion’, ‘spreading prosperity and freedom’, ‘promoting peace and understanding’. The more vacuous the rhetoric, the more comprehensive the image of fatherly beneficence. Cloaked in the guise of a guru, Zuckerberg guides us towards the light of deeper connection. The opportunistic dorm-room tinkering with the bros back in 2004 led to the realisation that the human need for connection, for social validation, for recognition, could be harnessed for profit with the right advertising model. But it had to be couched in the language of lofty aims, clothed in the garb of a righteous ideal.


Suddenly, the economic will to power driving the desire to corner the market of human attention and cynically re-purpose it for ready cash is presented in the noble verbiage of ‘coming together as a global community.’ That’s where all the guff about Facebook ‘bringing us closer together’ comes from: ‘connection’ becomes the rationale transcending gritty questions of morality or politics – foregoing any consideration of its efficacy as a way to achieve the change the world actually requires. That Facebook helps dismantle any collective understanding of the world we live in, and conspicuously fails to promote a shared conception of one ‘we all want’, is assiduously sidestepped.


Zuckerberg wants two incompatible things: a world in which he can continue to be Mark Zuckerberg, billionaire founder of the money-making behemoth known as Facebook, while simultaneously helping people ‘see a more complete picture, not just alternative perspectives’ of the world mediated through it. These are mutually exclusive aims. For one thing, because the world does change, no ‘complete picture’ of it is possible. For another, thanks to Facebook’s professed desire to change the world through connecting people, the opinions it enables them to indiscriminately trumpet into the ether diminish understanding of the world common to everyone.


Overwhelmed by information of indeterminate value, we find ‘understanding’ dwindling into confusion as the virtual and the actual chaotically converge. Yet despite acknowledging that the ‘diversity of ideas’ and their proliferation through social media fragment ‘our shared sense of reality’, Zuckerberg insists that ‘by increasing the diversity of our ideas and strengthening our common understanding, our community [Facebook] can have the greatest positive impact on the world.’ That a diversity of ideas, desirable in principle, is inimical to common understanding in practice, only temporarily registers.


The most comprehensive attempt yet made to conceptualise the new, Zuckified reality emerges in Shoshana Zuboff’s The Age of Surveillance Capitalism, which grew out of a 2015 Journal of Information Technology article called ‘Big other: surveillance capitalism and the prospects of an information civilization.’ A professor emerita at Harvard Business School whose work has previously examined the impact of information technology in the workplace, Zuboff expands the key arguments of that article in exhaustive detail to illuminate an unprecedented system of surveillance, one which rests upon the collection of data about your online activities in order to predict and modify your future behaviour. Holding us spell-bound, the attention merchants of the digital age compile vast databanks of information about us that can be sold or traded in what she calls ‘behavioural futures markets’. Hence human experience (now that much of it plays out online) is being expropriated to serve a new form of capitalism that departs from the organic reciprocities of earlier, industrial iterations. In its departure from relations based upon the economic exchange of goods and services, surveillance capitalism’s ‘radical indifference’ to the individuals it mines for information stems from the fact that it doesn’t actually ‘rely on people as consumers’, but rather as ‘users’ who become re-mouldable ‘sources of raw material.’ Even the digital products people individually purchase become ‘hosts for surveillance capitalism’s parasitic operations.’


Zuboff’s timely book highlights that the shift to digital surveillance has become one of the defining features of contemporary culture. In 2006 the BBC reported on Britain’s thriving ‘surveillance society’, with its millions of CCTV cameras and pervasive ‘dataveillance’ tracking our financial transactions. Certainly, the longstanding anxiety about state surveillance conducted in secret still endures as widespread monitoring of telecommunications by security agencies continues. Now more than ever, the privacy and freedom citizens enjoy emerge as temporary privileges rather than inalienable rights, easily revoked with the right equipment. But our iconography of surveillance is becoming outdated – encapsulated in the hidden microphones, concealed cameras, roving police patrols and George Smiley-style tradecraft of the analogue age. Now, with the digital surveillance of people’s browsing habits, with every click recorded for posterity, there’s evidently extraordinary scope for the manipulation of behaviour. Transforming people into sources of data has sparked a race to design ever more intrusive applications so that their users can be fully ‘automated’. Marketed as convenient, time-saving devices, ‘smart’ gadgets (including Alexa, Fitbit and Amazon Echo) accrue more and more data about us, enabling corporations to better shape the behavioural path of our future selves.


Traditionally, surveillance has largely been about keeping people in check. The idea of being constantly watched emerged to spectacular effect in theism, with God able to observe the slightest variations in human thinking, and hence capable of convicting people of thought-crime: ‘neither is there any creature that is not manifest in his sight’ (Hebrews 4:13); ‘his eyes behold, his eyelids try, the children of men’ (Psalms 11:4). This proved useful in moderating moral behaviour, making some people better able to maintain power and control over others. Yet even a world lacking a permanent transcendental authority has made surveillance a ubiquitous, inescapable feature of daily life, an ordinary consequence of doing practically anything.


Similarly premised on the idea that round-the-clock supervision could constrain action, philosopher Jeremy Bentham’s panopticon was a cunning piece of prison engineering designed to achieve the maximal visibility of its incarcerated inmates, in the hope that their behaviour would, under conditions of total scrutiny, conform to the social standards of the age. Today tech companies don’t much care what you do or think, so long as they can manipulate it to profitable effect. Once society is hooked up to the internet, its maturation into a click-driven soup of ‘unsocial sociability’ – Kant’s term for the antagonism incurred by our simultaneous need and distaste for social concord – merely portends the further exploitation of its members as lucrative revenue streams. Whether or not they are morally improved as a result is entirely moot. As Zuboff remarks, the aim is ‘not to impose behavioural norms, such as conformity and obedience, but rather to produce behaviour that reliably, definitively, and certainly leads to desired commercial results.’ Hence the ‘ubiquitous computing’ on which surveillance capitalism relies aims for, and necessitates:


the everywhere, always-on instrumentation, datafication, connection, communication, and computation of all things, animate and inanimate, and all processes – natural, human, physiological, chemical, machine, administrative, vehicular, financial. Real-world activity is continuously rendered from phones, cars, streets, homes, shops, bodies, trees, buildings, airports, and cities back to the digital realm, where it finds new life as data ready for transformation into predictions.


As Zuboff emphasises, it’s all about ‘predicting us, without actually caring what we do or what is done to us.’ What follows is the banality of equivalence – from the point of view of surveillance capitalism, every click is as good as any other. The crucial discovery that the ‘behavioural surplus’ produced as a by-product of user interactions could be commercialised soon supplanted Google’s early business model, in which only the behavioural data needed for service improvements was ‘reinvested in the user experience’. Google worked out, to pioneering success, that it could deduce users’ interests from their online behaviour, and then use the data it collected to tailor the online experience by targeting users with ads to suit those interests.

Astoundingly, its access to that data made it possible ‘to know what a particular individual in a particular time and place was thinking, feeling, and doing.’ Hoping to capitalise, Facebook accordingly took the extraction and exploitation of data to new levels, enticing people to share personal information with attractive features like the ability to contact friends and ‘like’ content, and in doing so to actively participate in their own instrumentalisation.


This ‘digital dispossession’ occurs with each morsel of human behaviour that becomes datafied and commodified: we’re constructing searchable lives in which everything we’ve ever done becomes a usable resource for someone else. Although it required the pre-existence of an ‘internet of things’, Zuboff reiterates that surveillance capitalism is neither a ‘necessary expression of information capitalism’, nor ‘an inherent result of digital technology’, but a deliberate commercial construct. It is ‘a logic that imbues technology and commands it into action.’ Although often framed as inevitable outcroppings of information technology, Google and Facebook merely reflect the economic imperatives of their developers, not irreversible steps in a determinate process of technological advancement. Technology doesn’t have agency, but the way it is designed increasingly dictates what users will do or think: people can deploy it to shape desires to their own ends, and to try to pre-empt human behaviour so completely that nothing we do is ever unanticipated. However, the ability to use technology in this way depends upon significant asymmetries in knowledge and power to begin with.


In Discipline and Punish: The Birth of the Prison, Michel Foucault describes the shift from sovereign power to the disciplinary power of social institutions, which wielded knowledge to reach new heights of behavioural refinement in the twentieth century. Yet Gilles Deleuze, in his prescient ‘Postscript on the Societies of Control’, articulates a transition away from ‘disciplinary societies’ with their regimented ‘environments of enclosure’ (the family, the school, the hospital) to outright ‘societies of control’. Here the corporation can universally modulate human behaviour to suit commercial objectives. Now, ‘one is never finished with anything’, since ‘the man of control is undulatory, in orbit, in a continuous network.’ This form of capitalism, rather than existing for production, primarily wants to ‘sell services’ and ‘buy products.’ Control is ‘short-term and of rapid rates of turnover, yet also continuous and without limit.’


Surveillance capitalism, whose primary product is data, is the society of control nearly perfected: the continuous tracking of, and complete access to, the behaviour of billions, whose experiences can be extracted to feed the logic of what Zuboff calls ‘instrumentarian power’. The latest boon to the aspiring autocrat of the future is facial recognition technology, a gizmo capable of identifying you whether you like it or not. Capitulation to all this doesn’t even require coercion: it’s designed in such a way that people are glad of their chains, willing to forego control of their data in return for the addictive, ‘free’ services and information online platforms provide – particularly the ability to curate an online existence accessible everywhere, at every moment.


Yet the technologies that reconstitute our social behaviour are not neutral, benign aids to self-development. Not only do they hamper the actualisation of a life not hijacked by corporate interests, but, as Martin Moore, a senior research fellow in the Policy Institute at King’s College London, points out in his pithy book, Democracy Hacked, they can be used to the detriment of the very democratic ideals the surveillance capitalists claim to promote, and that their platforms facilitate. As democratic politics lurches from one unforeseen mess to another, we observe that our ‘open’ and ‘connected’ world is concurrently a world in which more and more happens. Consequently, the volume of online information of dubious provenance is exhausting our already over-strained attention spans, which are becoming ever narrower: ‘Twitter storms’ now trend, apparently, for under 12 hours, down from an average of 17.5 hours back in 2013. More information means less ability to process it (let alone investigate its accuracy), leading to more frenzied browsing to find new attention-grabbing content.


Moore pinpoints what he calls an ‘unsustainable discrepancy between our capacity to represent ourselves and the ways in which we are represented in democratic politics.’ Depending on how governments respond to the political disruption the digital platforms enable, he anticipates the splintering of democracy down three potential pathways. One possibility is the emergence of ‘platform democracy’, in which unregulated digital platforms will accrue even more power, becoming ‘gateways not just to commercial services, but to public services like healthcare, education and transport.’ Instead of a jackboot stamping on a human face forever, imagine an enormous ‘like’ button constantly cajoling us to click it. In the second scenario, ‘surveillance democracy’, the state will accord itself greater powers of influence over the lives of citizens, directing their behaviour in ways it desires (one might point to the British government’s forthcoming requirement that individuals verify their age before being granted access to pornography). The third option echoes Zuboff’s plea to defend ‘the sanctity of the individual’: Moore suggests that if we’re to create a future ‘digital democracy’ less susceptible to perversion by malign interests, we’re going to need ‘a digital sphere that is less centralized, digital civic spaces and public services that do not rely on personal data tracking and ad tech, and a digital democracy that starts ... with citizens at its centre.’ This is a nice aspiration, but as Moore’s own book shows, the obstacles to its realisation have already emerged. In this instance, knowledge has arrived too late – Google and Facebook have grown into transnational corporations defying the capacity of our antiquated legal and ethical apparatus to administer them.


But we must try. One place we could start, as Elizabeth Warren has set out as part of her pitch for the US presidency, is to curb the power of the big tech companies by breaking them up, with the expectation that, once faced with more competition, they’ll be incentivised to protect our privacy. It’s a typically technocratic solution that, even if implemented, will leave bigger questions about the legitimacy of the new online landscape unanswered. Worryingly, the loss of faith in representative democracy, abetted by the rise of social media, has exposed just how epistemically ill-equipped we are to understand what we truly want, let alone who we can trust to achieve it. Democracy was always fragile; now the complexity of digital life actively conspires against it as a viable ideal, since decision-making increasingly depends upon a level of technical knowledge and expertise we have neither the time nor the opportunity to obtain.


Fundamentally, democracy is vulnerable because we are overwhelmingly reliant on other people for knowledge. We know what we do largely because we have sought out informants in whom we place our trust. But with our attention increasingly fleeting, we’re easily fooled: if we like how they sound and how they make us feel, they’ve got us, regardless of their reliability. So in facilitating the dispersal of fake news and propaganda, in giving a voice to every dissembling charlatan, huckster or authoritarian hoping to attain power, tech platforms degrade our capacity to discern which informants are credible. Lacking a shared standard of truth or falsehood on which any consensus about what counts as knowledge relies, and hence unable to distinguish knowledge from its appearance, or important information from irrelevant rubbish, we are left with the reversion of politics to groupthink, with its simple soundbites and strawmen. This condition of amplified ignorance leaves us vulnerable to unscrupulous shysters, for whom the key to success is the subordination of truth to psychology and the politics of emotion.


By presenting people as dehumanised drones, their sense of reality mediated through algorithms, Zuboff induces a numbing sense of catastrophe: a vision of life digitally dominated. But there’s another side to it, to which she pays insufficient attention. What of the new possibilities for individual and collective agency social media affords, or the means of expression and solidarity it offers the dispossessed? These capacities are not merely illusory: political movements (from the Arab Spring, to Black Lives Matter, to Extinction Rebellion) have discovered in digital platforms the means to rally and organise support with remarkable speed and frequency. Moreover, why should people keep outsourcing knowledge to, and reposing authority in, a political class that appears to care very little about them? That has, over the last forty years, failed to realise the promises of future happiness and wellbeing it keeps extending? That has allowed, even enabled, a tiny proportion of the global population to commandeer the wealth of nations? Indeed, why shouldn’t citizens, thanks to the unprecedented tools the digital age offers, bypass their political representatives altogether and reason in their own voices? Wouldn’t this even imply that the big tech companies have enabled what representative democracy has by design always stifled, namely the power of people to reason and decide for themselves?


If only it were that simple. Kant thought that only the free exercise of public reason could lead to enlightenment, but now here we are, with people everywhere exercising their right to free expression, with little in the way of enlightenment to show for it. It’s become clear that reasoning is not enough: we can reason all we like, but if we don’t know very much the gap between reasoning and reality will only widen. Without knowledge of verifiable quality, reason will betray us. Events in 2016 showed that there’s a startling mismatch between what people know and the gravity of what they’re called upon to decide – so far as the Brexit debacle goes, it remains more than a little disturbing that so many were opposed to something about which they knew so little (including the fantasising politicians who orchestrated the Leave campaign). The unwelcome reality is that an amplified ability to reason publicly turns out to be perfectly compatible with the growth of hatred, ignorance, disinformation and political instability. How to address this dilemma while cleaving to ideals of autonomy and free speech will be one of the central tasks of the future. Still, knowledge will only get you so far, and facts don’t tell you what to do with them. In the end, the constant difficulty is how to judge, given the limited knowledge we have, what we ought to do.


Zuboff is undeniably right:


It is not OK for every move, emotion, utterance, and desire to be catalogued, manipulated, and then used to surreptitiously herd us through the future tense for the sake of someone else’s profit.


Now that we know the system we’re being nudged to serve, what’s at stake, she insists, is nothing less than the ‘human expectation of sovereignty over one’s own life.’ Quite so, but in what does this sovereignty consist? Through what political system is it best attained? These questions continue to test our capacity to answer as the world shifts to a new stage on which the constant battle over the control of ideas will be fought.


ALEXANDRE LESKANICH is a copy-editor at Evental Aesthetics, and reads for a PhD in Comparative Literature and Culture at Royal Holloway, University of London, focusing on climatic crisis and historical consciousness.


Art by Alex Haveron Jones
