Computational methods and the possibilities of science and technology studies

Juan Pablo Pardo-Guerra


Science and technology studies are often characterized as being deeply skeptical of quantification. This is perfectly sensible: in addition to the overall critical stance of our field, numbers play an integral role in reinforcing structures of inequality and difference across sociotechnical domains, as numerous scholars have documented.

Our position on numbers and quantification isn’t limited to our theoretical repertoires but also bleeds into our methods. A quick survey of most of the leading journals in STS shows a clear preponderance of qualitative, historical, narrative, and ethnographic approaches, with quantitative methods appearing only sporadically, if at all. We are habituated into a distaste for numbers and their correlates—statistical models in which the complexity of social life is transformed into associations between variables that will always be imperfect representations of the world.

This taste for the qualitative is seen, too, in how science and technology studies has engaged with the growth of computational methods in social research. STS scholars have produced commendable works on data, algorithms, surveillance, quantification, scores, and a panoply of phenomena associated with the rise of “the digital”; they have benchmarked algorithms, identified biases, and opened the black boxes of computational social science. As they should. And in most cases, they have done so from a qualitative stance. The quantifying instruments of computational methods—topic models, word embeddings, and large language models, to name but a few—remain peripheral to the craft of the STSer, awkward instruments that they may use, but only when wearing another disciplinary hat.

This methodological exclusion limits our ability to understand the world and act critically when facing novel and emerging challenges. The reason is simple: our taste for the qualitative reduces the pluralism of our methodological toolkits and occludes forms of evidence that could serve as important points of discussion and theoretical elaboration.

This does not mean that we should simply mimic how other social scientists have used quantitative methods, nor renounce our humanistic traditions in pursuit of some false sense of “objectivity” in numbers. Rather, it involves creatively bricolaging elements and techniques from the repertoires of computational analysis for use in our own disciplinary matrix. Instead of passively adopting these techniques, it calls for actively reinventing, reinterpreting, and re-embedding them in our craft.

Recently, I had the opportunity to pursue a research project along these lines, combining methods developed in computational social science with the more established qualitative technique of oral history to explore a matter that is central to our STS sensibilities: how knowledge changes. The project involved studying the association between research assessment exercises and the forms of knowledge that get published, rewarded, and produced in the British social sciences. This is a topic close to many of our hearts—these evaluations (known as Research Assessment Exercises or, more recently, as the Research Excellence Framework) are central to the experience of most British academics, shaping their careers and ways of engaging with the institutions where they produce knowledge about the world.

Soon to be published by Columbia University Press with the title The Quantified Scholar, the project involved studying how the publications of social scientists changed throughout the last three decades, as research evaluations became an institutionalized feature of British academia. Introduced in 1986 as a means for dealing with the increased austerity of the UK’s higher education sector, the research evaluations that I study in the book have assessed the intellectual products of British academics to distinguish centers of disciplinary “excellence” which can putatively make better use of scarce research funding. Importantly, and unlike evaluations elsewhere, the ones we see in the UK are particular: they are centralized and centralizing (they wholly depend on the state’s apparatus to work); they are comprehensive, covering the vast majority of institutions of higher education; and they are disciplinary, focusing on how fields are practiced in units rather than the competencies of individuals. 

Much has been written about these research evaluations. Notable examples include Derek Sayer’s Rank Hypocrisies and, of course, the foundational contributions of Marilyn Strathern’s now classic Audit Cultures. Yet despite our collective analyses of these research evaluations, we still lack evidence on their effect on knowledge and, in particular, on the way they shape disciplines over time. Scholars in the domain of science policy have provided some important findings: it is clear, for example, that researchers shift their focus and efforts in line with the incentives built into evaluations. How these redirections concatenate to transform entire fields remains, to an extent, an open question.

One of the clear challenges in tackling this is assessing disciplines in their entirety. Macro entities such as these—containing various traditions, groups, institutions, paradigms, and, importantly, thousands of practitioners—are difficult to access empirically, particularly through the traditional methods of our craft. We can certainly study economists through their historical records and through detailed conversations and observations in the field, but this is unlikely to represent economics in a more general sense.

To deal with this problem, I used computational methods as a starting point for theoretical elaboration in The Quantified Scholar. Integrating insights from Michael Burawoy’s extended case method with the affordances of computational methods of text analysis, I combined quantitative descriptions of structural changes in social science disciplines over time with oral histories about the experience of being evaluated. The first step, for example, involved compiling a census of social scientists in Britain since the 1980s. Lacking a central repository with this information, a team of research assistants and I used the bibliographic records of the Web of Science to identify scholars and their affiliations, tracking how they changed over time. The outcome was a longitudinal dataset representing the careers of nearly 14,000 social scientists across anthropology, economics, politics, and sociology from the 1980s to 2018.
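As a rough illustration of this data-construction step, the sketch below derives a longitudinal career table from bibliographic records. The file name and column names are hypothetical stand-ins for whatever fields an actual Web of Science export provides, not the project’s real pipeline.

```python
# Sketch: deriving a longitudinal career table from bibliographic records.
# The file name and columns (author_id, year, affiliation) are hypothetical
# stand-ins for whatever fields a Web of Science export actually provides.
import pandas as pd

records = pd.read_csv("wos_records.csv")  # one row per author-publication pair

# Keep one affiliation per author per year (the most frequent, if several appear).
careers = (
    records.groupby(["author_id", "year"])["affiliation"]
    .agg(lambda s: s.mode().iloc[0])
    .reset_index()
    .sort_values(["author_id", "year"])
)

# Flag years in which an author's affiliation differs from their previous one:
# a rough proxy for inter-institutional mobility.
previous = careers.groupby("author_id")["affiliation"].shift()
careers["moved"] = previous.notna() & (previous != careers["affiliation"])
```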

The data from the Web of Science also allowed us to create two computational measures that captured, albeit imperfectly, the organization of knowledge between researchers and units. Using topic models—an increasingly common computational approach for classifying texts—we could compare the similarities between the “topicality” of scholars and their immediate institutional peers (that is, between the work of a sociologist and that of colleagues employed at their institution), as well as the similarities between institutions within a discipline. The two measures developed out of these techniques—similarity (capturing how similar a scholar’s work is to that of their immediate colleagues) and typicality (representing how typical an institution is within the broader discipline)—allowed us to study careers in terms of structures of knowledge across the institutional spaces inhabited by academics—that is, in their units, universities, and disciplines. Importantly, they allowed us to evaluate their bearing on one key career outcome: mobility. If knowledge is the product of embodied labor, then studying the movements of scholars across institutions should provide us with a sense of how knowledge and its disciplinary matrix changed over time.
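To make the logic of these two measures concrete, here is a minimal sketch using a scikit-learn topic model. The toy documents, department labels, and number of topics are illustrative assumptions rather than the book’s actual implementation.

```python
# Sketch: topic-model-based "similarity" and "typicality" measures.
# Toy data and parameter choices are for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# One text per scholar (e.g., concatenated abstracts) and their department label.
abstracts = [
    "markets prices equilibrium trade growth",
    "labour markets wages inequality growth",
    "kinship ritual ethnography village exchange",
    "ritual exchange ethnography gift village",
]
departments = ["econ_A", "econ_A", "anthro_B", "anthro_B"]

# Fit a topic model; theta holds one topic distribution per scholar.
doc_term = CountVectorizer().fit_transform(abstracts)
theta = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(doc_term)

def similarity(i):
    """Cosine similarity between scholar i and the mean of their departmental peers."""
    peers = [j for j, d in enumerate(departments) if d == departments[i] and j != i]
    return cosine_similarity(theta[[i]], theta[peers].mean(axis=0, keepdims=True))[0, 0]

def typicality(dept):
    """Cosine similarity between a department's mean topic mix and the discipline-wide mean."""
    members = [j for j, d in enumerate(departments) if d == dept]
    return cosine_similarity(
        theta[members].mean(axis=0, keepdims=True),
        theta.mean(axis=0, keepdims=True),
    )[0, 0]
```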


The results of this quantitative analysis were telling: a statistical model suggested that scholars who are more similar to their peers are also more likely to change jobs; the model also suggested that those in more typical departments are less likely to experience mobility between evaluation periods. This hinted at a patterned series of movements in the occupational space: scholars go from more to less similar positions, and from less to more typical departments, creating a pressure that makes fields more homogeneous, more isomorphic. Indeed, by analyzing the “degree of thematic difference” between academic units, our data suggested that the diversity within all social science disciplines in the United Kingdom fell over the past 30 years. Units, throughout the period of research evaluation, became ever more similar.
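The text points to a model of job changes as a function of the two measures. A minimal sketch of that kind of specification, assuming a hypothetical scholar-by-period table with columns named moved, similarity, typicality, discipline, and period, could look like the following; the book’s actual model may use different controls and estimators.

```python
# Sketch: a logistic regression of inter-period mobility on the two measures.
# Column names and controls are hypothetical; the book's specification may differ.
import pandas as pd
import statsmodels.formula.api as smf

def fit_mobility_model(panel: pd.DataFrame):
    """panel: one row per scholar and evaluation period, with columns
    moved (0/1), similarity, typicality, discipline, and period."""
    return smf.logit(
        "moved ~ similarity + typicality + C(discipline) + C(period)",
        data=panel,
    ).fit()
```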


These quantitative results were, as any STS practitioner knows, artifacts of various interwoven assumptions. Rather than approaching this quantitative analysis as if it were an objective account, we used it as a claim from the field. The database itself, compiled through a series of interpretative practices using the objects found in the Web of Science, was not interpreted as a representation of actual scientific practices on the ground but, rather, as a sort of map—an artificial representation that would allow us to navigate our questions in the ethnographic field.

Indeed, rather than stopping at the quantitative results (a move that would mirror the analytical strategies of, for example, classical sociological studies of labor mobility), these became pretexts for conversations with social scientists who had experienced, in different ways, at different institutions, and at different moments of their careers, the weight of the evaluations. Following the extended case method, my strategy was to use the quantitative results as a point for discussion and deliberation, a prompt for ethnographic sensemaking that allowed informants to put their experiences in the context of a broader structural observation about the world (I call this approach the extended computational case method, which I have described elsewhere).

This led to some surprising results. Against a simple understanding of research evaluations as external bureaucratic impositions, my conversations with social scientists highlighted various important contradictions. Research evaluations mattered, most noted, but indirectly. They seldom altered what people studied; rather, they shifted publication strategies and ways of framing results. Nor were evaluations necessarily all that bad. As one interviewee noted, they were important parts of the public justification of funding in the UK, instruments for defending research in public institutions, even if at the cost of participation. These sorts of negative cases, elicited in conversations anchored on the quantitative findings, provided a springboard for theoretical elaboration. In particular, they allowed us to identify the affinities between the way evaluations are practiced and made sense of on the ground, and the more general vocations of merit and research into which scholars are habituated in academia. These are audit cultures, but they are also cultures anchored in the ‘native’ self-images and expectations of academics who, in seeking to produce knowledge, partly accept the assumptions of quantification.

The conversations with social scientists inspired as many questions as they provided findings, leading to another round of computational analyses. Specifically, to answer a question raised by informants (“did knowledge actually become more similar?”), we turned to a form of computational text analysis known as word embeddings to track the similarity of texts over time. While topic models allowed for comparing topical distributions, word embeddings came closer to an analysis of semantic change. The findings confirmed what we saw in the field and in the original analysis: over time, the language of social scientists became more similar (strikingly so in economics, less so in sociology and politics).
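As a sketch of how such a semantic-convergence check might be operationalized (not the study’s exact procedure), one could train a word-embedding model per period, average word vectors into document vectors, and track the mean pairwise similarity of documents over time; rising values would suggest increasingly similar language.

```python
# Sketch: tracking semantic convergence with word embeddings, period by period.
# An illustration of the general approach, not the study's exact procedure.
import numpy as np
from gensim.models import Word2Vec
from sklearn.metrics.pairwise import cosine_similarity

def convergence_by_period(docs_by_period):
    """docs_by_period maps a period label to a list of tokenized documents
    (non-empty lists of words). Returns the mean pairwise cosine similarity of
    document vectors in each period."""
    scores = {}
    for period, docs in docs_by_period.items():
        # Train an embedding model on this period's texts only.
        w2v = Word2Vec(sentences=docs, vector_size=100, min_count=1, seed=0)
        # Represent each document as the mean of its word vectors.
        doc_vectors = np.array(
            [np.mean([w2v.wv[token] for token in doc], axis=0) for doc in docs]
        )
        sims = cosine_similarity(doc_vectors)
        n = len(docs)
        scores[period] = (sims.sum() - n) / (n * (n - 1))  # off-diagonal mean
    return scores
```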

The study, then, not only confirmed existing accounts of the quantification of knowledge (that is, how marketization leads to a narrowing of epistemic concerns) but also stressed new findings (how such marketization echoes the vocational dispositions of scholars and is amplified through the structure of labor markets). And importantly, these results—so consistent with the sensibilities of science and technology studies—would have been inaccessible without the combination of computational and qualitative evidence.

There is scope for using computational methods in our research, as I hope to have hinted at in this brief post. This does not imply abandoning our situated forms of knowledge-making, but rather recognizing that computational tools offer instruments to produce objects for discussion and deliberation—opportunities for collective sense-making that do not adopt a view from nowhere but, instead, create maps for exploring and questioning structures that are otherwise inaccessible. These methods need not compete for attention with qualitative techniques but can readily become part of the epistemic scaffolds through which we build claims about the world.

There is, indeed, some urgency in taming these methods for our disciplinary cause. With the (perhaps exaggerated) “avalanche of data” on which much of computational social science is predicated, we see, too, a growth of academic positions where computational techniques are part of the job description. Ignoring these methods, and failing to appropriate them on our terms, means partly renouncing our seat at the table, giving away any chance of shaping the field in a manner consistent with our critical traditions. The continued success of science and technology studies cannot be limited to advancing theoretical contributions that travel well into other fields; it must also include engaging with increasingly popular methods in ways that take our critiques of data, situated knowledges, and power fully into account. This work of appropriation is something we must do proactively—and it is an area where we seem to be lagging behind others. The two top journals of our community, Social Studies of Science and Science, Technology and Human Values, contain only two papers using topic models, by now an established technique of computational text analysis, and jointly about a dozen quantitative papers over the last decade. Enhancing our empirical basis through the opportunities of computational methods is a matter of keeping our voices relevant, of investing in changing disciplinary conversations in a more strategic yet epistemically inclusive way.


Photo by Trevor Vannoy on Unsplash

Juan Pablo Pardo-Guerra is an Associate Professor at the University of California, San Diego, a founding faculty member of the Halicioğlu Data Science Institute, and Associate Director of the Latin American Studies Program at UC San Diego. Trained in Science and Technology Studies at the University of Edinburgh (2010), he is the author of Automating Finance: Infrastructures, Engineers, and the Making of Global Finance (Cambridge University Press, 2019) and The Quantified Scholar: How Research Evaluations Transformed the British Social Sciences (Columbia University Press, 2022).
