Machine Habitus - Massimo Airoldi

1 Why Not a Sociology of Algorithms? Machines as sociological objects


Algorithms of various kinds hold the social world together. Financial transactions, dating, advertising, news circulation, work organization, policing tasks, music discovery, hiring processes, customer relations – all are to a large extent delegated to non-human agents embedded in digital infrastructures. For some years we have all been aware of this, thanks to academic research and popular books, journalistic reports and documentaries. Whether from the daily news headlines or the dystopian allegories of TV series, we have come to recognize that almost everything is now ‘algorithmic’ and that artificial intelligence is revolutionizing all aspects of human life (Amoore and Piotukh 2016). Even leaving aside the simplifications of popular media and the wishful thinking of techno-chauvinists, this is true for the most part (Broussard 2018; Sumpter 2018). Yet, many sociologists and social scientists continue to ignore algorithms and AI technologies in their research, or consider them at best a part of the supposedly inanimate material background of social life. When researchers study everyday life, consumption, social interactions, media, organizations, cultural taste or social representations, they often unknowingly observe the consequences of the opaque algorithmic processes at play in digital platforms and devices (Beer 2013a). In this book, I argue that it is time to see both people and intelligent machines as active agents in the ongoing realization of the social order, and I propose a set of conceptual tools for this purpose.

'Why only now?', one may legitimately ask. In fact, the distinction between humans and machines has been a widely debated subject in the social sciences for decades (see Cerulo 2009; Fields 1987). Strands of sociological research such as Science and Technology Studies (STS) and Actor-Network Theory (ANT) have strongly questioned mainstream sociology's lack of attention to the technological and material aspects of social life.

In 1985, Steve Woolgar’s article ‘Why Not a Sociology of Machines?’ appeared in the British journal Sociology. Its main thesis was that, just as a ‘sociology of science’ had appeared problematic before Kuhn’s theory of scientific paradigms but was later turned into an established field of research, intelligent machines should finally become ‘legitimate sociological objects’ (Woolgar 1985: 558). More than thirty-five years later, this is still a largely unaccomplished goal. When Woolgar’s article was published, research on AI systems was heading for a period of stagnation commonly known as the ‘AI winter’, which lasted up until the recent and ongoing hype around big-data-powered AI (Floridi 2020). According to Woolgar, the main goal of a sociology of machines was to examine the practical day-to-day activities and discourses of AI researchers. Several STS scholars have subsequently followed this direction (e.g. Seaver 2017; Neyland 2019). However, Woolgar also envisioned an alternative sociology of machines with ‘intelligent machines as the subjects of study’, adding that ‘this project will only strike us as bizarre to the extent that we are unwilling to grant human intelligence to intelligent machines’ (1985: 567). This latter option may not sound particularly bizarre today, given that a large variety of tasks requiring human intelligence are now routinely accomplished by algorithmic systems, and that computer scientists propose to study the social behaviour of autonomous machines ethologically, as if they were animals in the wild (Rahwan et al. 2019).

Even when technological artefacts could hardly be considered 'intelligent', actor-network theorists radically revised human-centric notions of agency by portraying both material objects and humans as 'actants', that is, as sources of action in networks of relations (Latour 2005; Akrich 1992; Law 1990). Based on this theoretical perspective, both a ringing doorbell and the author of this book can be seen as equally agentic (Cerulo 2009: 534). ANT strongly opposes not only the asymmetry between humans and machines, but also the more general ontological divide between the social and the natural, the animated and the material. This philosophical position has encountered widespread criticism (Cerulo 2009: 535; Müller 2015: 30), since it is hardly compatible with most of the anthropocentric theories employed in sociology – except for that of Gabriel Tarde (Latour et al. 2012). Still, one key intuition of ANT increasingly resonates throughout the social sciences, as well as in the present work: that what we call social life is nothing but the socio-material product of heterogeneous arrays of relations, involving human as well as non-human agents.

According to ANT scholar John Law (1990: 8), a divide characterized sociological research at the beginning of the 1990s. On the one hand, the majority of researchers were concerned with 'the social', and thus studied canonical topics such as inequalities, culture and power by focusing exclusively on people. On the other hand, a minority of sociologists were studying the 'merely technical' level of machines, in fields like STS or ANT. They examined the micro-relations between scientists and laboratory equipment (Latour and Woolgar 1986), or the techno-social making of aeroplanes and gyroscopes (MacKenzie 1996), without taking part in the 'old' sociological debates about social structures and political struggles (MacKenzie and Wajcman 1999: 19). It can be argued that the divide described by Law still persists today in sociology, although it has become evident that 'the social order is not a social order at all. Rather it is a sociotechnical order. What appears to be social is partly technical. What we usually call technical is partly social' (Law 1990: 10).

With the recent emergence of a multidisciplinary scholarship on the biases and discriminations of algorithmic systems, the interplay between ‘the social’ and ‘the technical’ has become more visible than in the past. One example is the recent book by the information science scholar Safiya Umoja Noble, Algorithms of Oppression (2018), which illustrates how Google Search results tend to reproduce racial and gender stereotypes. Far from being ‘merely technical’ and, therefore, allegedly neutral, the unstable socio-technical arrangement of algorithmic systems, web content, content providers and crowds of googling users on the platform contributes to the discriminatory social representations of African Americans. According to Noble, more than neutrally mirroring the unequal culture of the United States as a historically divided country, the (socio-)technical arrangement of Google Search amplifies and reifies the commodification of black women’s bodies.

I believe that it should be sociology's job to explain and theorize why and under what circumstances algorithmic systems may behave this way. The theoretical toolkit of ethology mobilized by Rahwan and colleagues (2019) in a recent Nature article is probably not up to the task, for quite a simple reason: machine learning tools are eminently social animals. They learn from the social – datafied, quantified and transformed into computationally processable information – and then they manipulate it, by drawing probabilistic relations among people, objects and information. While Rahwan et al. are right in putting forward the 'scientific study of intelligent machines, not as engineering artefacts, but as a class of actors with particular behavioural patterns and ecology' (2019: 477), their analytical framework focuses on 'evolutionary' and 'environmental' dimensions only, downplaying the cornerstone of anthropological and sociological explanations, that is, culture. Here I argue that, in order to understand the causes and implications of algorithmic behaviour, it is necessary to first comprehend how culture enters the code of algorithmic systems, and how it is shaped by algorithms in turn.

Two major technological and social transformations that have taken place over the past decade make the need for a sociology of algorithms particularly pressing. A first, quantitative shift has resulted from the unprecedented penetration of digital technologies into the lives and routines of people and organizations. The rapid diffusion of smartphones since the beginning of the 2010s has literally put powerful computers in the hands of billions of individuals throughout the world, including in its poorest and most isolated regions (IWS 2020). Today’s global economic system relies on algorithms, data and networked infrastructures to the point that fibre Internet connections are no longer fast enough for automated financial transactions, leading to faster microwave or laser-based communication systems being installed on rooftops near New York’s trading centres in order to speed up algorithmic exchanges (D. MacKenzie 2018). Following the physical distancing norms imposed worldwide during the Covid-19 pandemic, the human reliance on digital technologies for work, leisure and interpersonal communication appears to have increased even further. Most of the world’s population now participates in what can be alternatively labelled ‘platform society’ (van Dijck, Poell and de Waal 2018), ‘metadata society’ (Pasquinelli 2018) or ‘surveillance capitalism’ (Zuboff 2019), that is, socio-economic systems heavily dependent on the massive extraction and predictive analysis of data. There have never been so many machines so deeply embedded in the heterogeneous bundle of culture, relations, institutions and practices that sociologists call ‘society’.

A second, qualitative shift concerns the types of machines and AI technologies embedded in our digital society. The development and industrial implementation of machine learning algorithms that ‘enable computers to learn from experience’ have marked an important turning point. ‘Experience’, in this context, is essentially ‘a dataset of historic events’, and ‘learning’ means ‘identifying and extracting useful patterns from a dataset’ (Kelleher 2019: 253).
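Kelleher's definition can be made concrete with a minimal sketch (the data and function names below are hypothetical illustrations, not taken from any cited work): a toy 'learner' whose 'experience' is a small dataset of historic events, and whose 'learning' consists of extracting a simple pattern from it, namely the most frequent outcome observed for each context.

```python
from collections import Counter, defaultdict

# "Experience": a dataset of historic events (invented example data).
# Each record pairs a context with an observed outcome.
historic_events = [
    ("urban", "clicked"), ("urban", "clicked"), ("urban", "ignored"),
    ("rural", "ignored"), ("rural", "ignored"), ("rural", "clicked"),
]

def learn(events):
    """'Learning': identify and extract a useful pattern from the dataset --
    here, simply the most frequent outcome recorded for each context."""
    by_context = defaultdict(Counter)
    for context, outcome in events:
        by_context[context][outcome] += 1
    return {ctx: counts.most_common(1)[0][0] for ctx, counts in by_context.items()}

model = learn(historic_events)
print(model)            # the extracted pattern: {'urban': 'clicked', 'rural': 'ignored'}
print(model["urban"])   # a 'prediction' for a new urban event: 'clicked'
```

However crude, the sketch makes the sociological point visible: everything the machine 'knows' is a compressed trace of the social data it was fed, so any regularity or bias in those historic events is reproduced in its subsequent behaviour.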

In 1989, Lenat noted in the pages of the journal Machine Learning that ‘human-scale learning demands a human-scale amount of knowledge’ (1989: 255), which was not yet available to AI researchers at the time. An impressive advancement of machine learning methods occurred two decades later, thanks to a ‘fundamental socio-technological transformation of the relationship between humans and machines’, consisting in the capturing of human cognitive abilities through the digital accumulation of data (Mühlhoff 2020: 1868). This paradigmatic change has made the ubiquitous automation of social and cultural tasks suddenly possible on an unprecedented scale. What matters here sociologically is ‘not what happens in the machine’s artificial brain, but what the machine tells its users and the consequences of this’ (Esposito 2017: 250). According to Esposito, thanks to the novel cultural and communicative capabilities developed by ‘parasitically’ taking advantage of human-generated online data, algorithms have substantially turned into ‘social agents’.

Recent accomplishments in AI research – such as AlphaGo, the deep learning system that achieved a historic win against the world champion of the board game Go in 2016 (Chen 2016; Broussard 2018), or GPT-3, a powerful algorithmic model released in 2020, capable of autonomously writing poems, computer code and even philosophical texts (Weinberg 2020; Askell 2020) – indicate that the ongoing shift toward the increasingly active and autonomous participation of algorithmic systems in the social world is likely to continue into the near future. But let’s have a look at the past first.
