‘Surveillance capitalism’ has become a defining concept for the current era of smart machines and Silicon Valley expansionism. With educational institutions and practices increasingly focused on data collection and outsourcing to technology providers, key points from Shoshana Zuboff’s The Age of Surveillance Capitalism can help explore the consequences for the field of education. Mindful of the need for much more careful studies of the intersections of education with commercially-driven data-analytic strategies of ‘rendition’ and ‘behavioural modification’, here I simply outline a few implications of surveillance capitalism for how we think about education policy and about learning.
Data, science and surveillance
Zuboff’s core argument is that tech businesses such as Google, Microsoft, Facebook and so on have attained unprecedented power to monitor, predict, and control human behaviour through the mass-scale extraction and use of personal data. These aren’t especially novel insights—Evgeny Morozov has a 16,000-word essay on the book’s analytical and stylistic shortcomings—but Zuboff’s strengths are in the careful conceptualization and documentation of some of the key dynamics that have made surveillance capitalism possible and practical. As James Bridle argued in his review of the book, ‘Zuboff has written what may prove to be the first definitive account of the economic – and thus social and political – condition of our age’.
Terms such as ‘behavioural surplus’, ‘prediction products’, ‘behavioural futures markets’, and ‘instrumentarian power’ provide a useful critical language for decoding what surveillance capitalism is, what it does, and at what cost. Some of the most interesting documentary material Zuboff presents includes precedents such as the radical behaviourism of B.F. Skinner and the ‘social physics’ of MIT Media Lab pioneer Sandy Pentland. For Pentland, quoted by Zuboff, ‘a mathematical, predictive science of society … has the potential to dramatically change the way government officials, industry managers, and citizens think and act’ (Zuboff, 2019, 433) through ‘tuning the network’ (435). Surveillance capitalism is not and was never simply a commercial and technical project, but is deeply rooted in human psychological research and in social experimentation and engineering. This combination of tech, science and business has enabled digital companies to create ‘new machine processes for the rendition of all aspects of human experience into behavioural data … and guarantee behavioural outcomes’ (339).
Zuboff has nothing to say about education specifically, but it’s tempting straight away to see a whole range of educational platforms and apps as condensed forms of surveillance capitalism (though we might just as easily invoke ‘platform capitalism’). The classroom behaviour monitoring app ClassDojo, for example, is a paradigmatic example of a successful Silicon Valley edtech business, with vast collections of student behavioural data that it is monetizing by selling premium features for use at home and offering behaviour reports to subscribing parents. With its emphasis on positive behavioural reinforcement through reward points, it represents a marriage of Silicon Valley design with Skinner’s aspiration to create ‘technologies of behaviour’. ClassDojo amply illustrates the combination of behavioural data extraction, behaviourist psychology and monetization strategies that underpin surveillance capitalism as Zuboff presents it.
Perhaps more pressingly from the perspective of education, however, Zuboff makes a number of interesting observations about ‘learning’ that are worth unpacking and exploring.
The first point is about the ‘division of learning in society’ (the subject of chapter 6, drawing on her earlier work on the digital transformation of work practices). By this term Zuboff means to demarcate a shift in the ‘ordering principles’ of the workplace from the ‘division of labour’ to a ‘division of learning’ as workers are forced to adapt to an ‘information-rich environment’. Only those workers able to develop their intellectual skills can thrive in the new digitally-mediated workplace. Some workers are enabled (and able) to learn to adapt to changing roles, tasks and responsibilities, while others are not. The division of learning, Zuboff argues, raises questions about (1) the distribution of knowledge and whether one is included in or excluded from the opportunity to learn; (2) which people, institutions or processes have the authority to determine who is included in learning, what they are able to learn, and how they are able to act on their knowledge; and (3) what is the source of power that undergirds the authority to share or withhold knowledge (181).
But this division of learning, according to Zuboff, has now spilled out of the workplace to society at large. The elite experts of surveillance capitalism have given themselves authority to know and learn about society through data. Because surveillance capitalism has access to both the ‘material infrastructure and expert brainpower’ (187) to transform human experience into data and wealth, it has created huge asymmetries in knowledge, learning and power. A narrow band of ‘privately employed computational specialists, their privately owned machines, and the economic interests for whose sake they learn’ (190) has ultimately been authorized as the key source of knowledge over human affairs, and empowered to learn from the data in order to intervene in society in new ways.
Sociology of education researchers have, of course, asked these kinds of questions for decades. They are ultimately questions about the reproduction of knowledge and power. But in the context of surveillance capitalism such questions may need readdressing, as authority over what constitutes valuable and worthwhile knowledge for learning passes to elite computational specialists, the commercial companies they work for, and even to smart machines. As data-driven knowledge about individuals grows in predictive power, decisions about what kinds of knowledge an individual learner should receive may even be largely decided by ‘personalized learning platforms’–as current developments in learning analytics and adaptive learning already illustrate. The prospect of smart machines as educational engines of social reproduction should be the subject of serious future interrogation.
The second key point is about the ‘policies’ of smart machines as a model for human learning (detailed in chapter 14). Here Zuboff draws on a speech by a senior Microsoft executive talking about the power of combined cloud and Internet of Things technologies for advanced manufacturing and construction. In this context, Zuboff explains, ‘human and machine behaviours are tuned to pre-established parameters determined by superiors and referred to as “policies”’ (409). These ‘policies’ are algorithmic rules that
substitute for social functions such as supervision, negotiation, communication and problem solving. Each person and piece of equipment takes a place among an equivalence of objects, each one “recognizable” to the “system” through the AI devices distributed across the site. (409)
In this example, the ‘policy’ is then a set of algorithmic rules and a template for collective action between people and machines to operate in unison to achieve maximum efficiency and optimal outcomes. Those ‘superiors’ with the authority to determine the policies, of course, are those same computational experts and machines that have benefitted from the division of learning. This gives them unprecedented powers to ‘apply policies’ to people, objects, processes and activities alike, resulting in a ‘grand confluence in which machines and humans are united as objects in the cloud, all instrumented and orchestrated in accordance with the “policies” … that appear on the scene as guaranteed outcomes to be automatically imposed, monitored and maintained by the “system”’ (410). These new human-machine learning collectives represent the future for many forms of work and labour under surveillance capitalism, according to Zuboff.
Zuboff then goes beyond human-machine confluences in the workplace to consider the instrumentation and orchestration of other types of human behaviour. Drawing parallels with the behaviourism of Skinner, she argues that digitally-enforced forms of ‘behavioral modification’ can operate ‘just beyond the threshold of human awareness to induce, reward, goad, punish, and reinforce behaviour consistent with “correct policies”’, where ‘corporate objectives define the “policies” toward which confluent behaviour harmoniously streams’ (413). Under conditions of surveillance capitalism, Skinner’s behaviourism and Pentland’s social physics spill out of the lab into homes, workplaces, and all the public and private spaces of everyday life–ultimately turning the world into a gigantic data science lab for social and behavioural experimentation, tuning and engineering.
And the final point she makes here is that humans need to become more machine-like to maximize such confluences. This is because machines connected to the IoT and the cloud work through collective action by each learning what they all learn, sharing the same understanding and ‘operating in unison with maximum efficiency to achieve the same outcomes’ (413). This model of collective learning, according to surveillance capitalists, can learn faster than people, and ‘empower us to better learn from the experiences of others’:
The machine world and the social world operate in harmony within and across ‘species’ as humans emulate the superior learning processes of the smart machines. … [H]uman interaction mirrors the relations of the smart machines as individuals learn to think and act by emulating one another…. In this way, the machine hive becomes the role model for a new human hive in which we march in peaceful unison toward the same direction based on the same ‘correct’ understanding in order to construct a world free of mistakes, accidents, and random messes. (414)
For surveillance capitalists human learning is inferior to machine learning, and urgently needs to be improved by gathering together humans and machines into symbiotic systems of behavioural control and management.
Learning in, from, or for surveillance capitalism?
These key points from The Age of Surveillance Capitalism offer some provocative starting places for further investigations into the future shape of education and learning amid the smart machines and their smart computational operatives. Three key points stand out.
1) Cultures of computational learning. One line of inquiry might be into the cultures of learning of those computational experts who have gained from the division of learning. And I mean this in two ways. How are they educated? How are they selected into the right programs? What kinds of ongoing training provide the privileged capacity to learn about society through mass-scale behavioural data? These are questions about new and elite forms of workforce preparation and professional education. How, in short, are these experts educated, qualified and socialized to do data analytics and behaviour modification—if that is indeed what they do? In other words, how is one educated to become a surveillance capitalist?
The other way of approaching this concerns what is actually involved in ‘learning’ about society through its data. This is both a pedagogic and a curricular question. Pedagogically, education research would benefit from a much better understanding of the kinds of workplace education programmes underway inside the institutions of surveillance capitalism. From a curricular perspective, this would also require an engagement with the kinds of knowledge assumptions and practices that flow through such spaces. As mentioned earlier, sociology of education has long been concerned with how aspects of culture are ‘selected’ for reproduction by transmission through education. As tech companies and related academic labs become increasingly influential, they are producing new ‘social facts’ that might affect how people both within and outside those organizations come to understand the world. They are building new knowledge based on a computational, mathematical, and predictive style of thinking. What, then, are the dynamics of knowledge production that generate these new facts, and how do they circulate to affect what is taught and learnt within these organizations? As Zuboff notes, pioneers such as Sandy Pentland have built successful academic teaching programs at institutes like MIT Media Lab to reproduce knowledge practices such as ‘social physics’.
2) Human-machine learning confluences. The second key issue is what it means to be a learner working in unison with the Internet of Things. Which individuals are included in the kind of learning that is involved in becoming part of this ‘collective intelligence’? When smart machines and human workers are orchestrated together into ‘confluence’, and human learning is supposed to emulate machine learning, how do our existing theories and models of human learning hold up? Machine learning and human learning are not obviously comparable, and the tech firms surveyed by Zuboff appear to hold quite robotic notions of what constitutes learning. Yet if the logic of extreme instrumentation of working environments develops as Zuboff anticipates, this still raises significant questions about how one learns to adapt to work in unison with the smart machines, who gets included in this learning, who gets excluded, how those choices and decisions are made, and what kinds of knowledge and skills are gained from inclusion. Automation is likely to lead to both further divisions in learning and more collective learning at the same time–with some individuals able to exercise considerable autonomy over the networks they’re part of, and others performing the tasks that cannot yet be automated.
In the context of concerns about the role of education in relation to automation, intergovernmental organizations such as the OECD and World Economic Forum have begun encouraging governments to focus on ‘noncognitive skills’ and ‘social-emotional learning’ in order to pair human emotional intelligence with the artificial cognitive intelligence of smart machines. Those unique human qualities, so the argument goes, cannot be quantified whereas routine cognitive tasks can. Classroom behaviour monitoring platforms such as ClassCraft have emerged to measure those noncognitive skills and offer ‘gamified’ positive reinforcement for the kind of ‘prosocial behaviours’ that may enable students to thrive in a future of increased automation. Being emotionally intelligent, by these accounts, would seem to allow students to enter into ‘confluent’ relations with smart machines. Rather than competing with automation, they would complement it as collective intelligence. ‘Human capital’ is no longer a sufficient economic goal to pursue through education—it needs to produce ‘human-computer capital’ too.
3) Programmable policies. A third line of inquiry would be into the idea of ‘policies’. Education policy studies have long engaged critically with the ways government policies circumscribe ‘correct’ forms of educational activity, progress, and behaviour. With the advance of AI-based technologies into schools and universities, policy researchers may need to start interrogating the policies encoded in the software as well as the policies inscribed in government texts. These new programmable policies potentially have a much more direct influence on ‘correct’ behaviours and maximum outcomes by instrumenting and orchestrating activities, tasks and behaviours in educational institutions.
Moreover, researchers might shift their attention to the kind of programmable policies that are enacted in the instrumented workplaces where, increasingly, much learning happens. Tech companies have long bemoaned the inadequacy of school curricula and university degrees in delivering the labour market skills they require. With the so-called ‘unbundling’ of the university in particular, higher education may be moving further towards ‘demand driven’ forms of professional learning and on-the-job industry training provided by private companies. When education moves into the smart workplace, learning becomes part of the confluence of humans and machines, where all are equally orchestrated by the policies encoded in the relevant systems. Platforms and apps using predictive analytics and talent matching algorithms are already emerging to link graduates to employers and job descriptions. The next step, if we accept the likelihood of the direction of travel of surveillance capitalism, might be to match students directly to smart machines on-demand as part of the collective human-machine intelligence required to achieve maximum efficiency and optimized outcomes for capital accumulation. In this scenario, the computer program would be the dominant policy framework for graduate employability, actively intervening in professional learning by sorting individuals into appropriate networks of collective learning and then tuning those networks to achieve best effects.
All of this raises one final question, and a caveat. First the caveat. It’s not clear that ‘surveillance capitalism’ will endure as an adequate explanation for the current trajectories of high-tech societies. Zuboff’s account is not uncontested, and it’s in danger of becoming an explanatory shortcut deployed anywhere that data analytics and business interests intersect (much as ‘neoliberalism’ is sometimes invoked as a shortcut for privatization and deregulation). The current direction of travel and future potential described by Zuboff are certainly not desirable, and should not be accepted as inevitable. If we do accept Zuboff’s account of surveillance capitalism, though, the remaining question is whether we should be addressing the challenges of learning in surveillance capitalism, or the potential for whole education systems to learn from surveillance capitalism and adapt to fit its template. Learning in surveillance capitalism at least assumes a formal separation of education from these technological, political and economic conditions. Learning from it, however, suggests a future where education has been reformatted to fit the model of surveillance capitalism–indeed, where a key purpose of education is for surveillance capitalism.
Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh. His research traces the connections between educational policy, digital technologies, and practices in schools and universities.