Open questions and feedback loops

The notion of ‘feedback loop’ is widely used in biology, engineering and, increasingly, in popular culture: if the outputs of a technical system are routed back as inputs, the system ‘feeds back’ into itself. Norbert Wiener – the founder of cybernetics – defines feedback as ‘the property of being able to adjust future conduct by past performance’ (1989: 33). According to Wiener, feedback mechanisms based on the measurement of performance make learning possible, both in the animal world and in the technical world of machines – even when these are as simple as an elevator (1989: 24). This intuition turned out to be crucial for the subsequent development of machine learning research. However, how feedback processes work in socio-cultural contexts is less clear, especially when these involve both humans and autonomous machines. While mid-twentieth-century cyberneticians like Wiener saw the feedback loop essentially as a mechanism of control producing stability within complex systems, they ‘did not quite foresee its capacity to generate emergent behaviours’ (Amoore 2019: 11). In the words of the literary theorist Katherine Hayles: ‘recursivity could become a spiral rather than a circle’ (2005: 241, cited in Amoore 2019).
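Wiener’s definition can be illustrated with a minimal sketch of the kind of negative feedback loop the early cyberneticians had in mind: a thermostat-like controller whose future conduct (heater power) is adjusted by its past performance (the measured temperature). The code below, in Python, is an illustrative assumption of mine; none of its names or constants come from the book.

```python
# A minimal sketch of Wiener's notion of feedback (illustrative only): a
# controller that adjusts its future conduct (heater power) on the basis of
# its past performance (the measured room temperature).

def run_thermostat(target=21.0, steps=10, gain=2.0):
    temperature = 15.0                      # measured state of the environment
    for step in range(steps):
        error = target - temperature        # past performance, measured
        heater_power = gain * error         # future conduct, adjusted accordingly
        temperature += 0.3 * heater_power   # the output feeds back into the next input
        print(f"step {step:2d}  temp={temperature:5.2f}  power={heater_power:5.2f}")

if __name__ == "__main__":
    run_thermostat()
```

Within a few steps the temperature settles at the target: this is feedback as a mechanism of control producing stability, the ‘circle’ of the early cyberneticians rather than the ‘spiral’ that Hayles points to.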

Consider as an example a simplified portrait of product recommendations on the e-commerce platform Amazon. Input data about platform users’ purchasing behaviour are fed in real time into an algorithmic model, which considers two products as ‘related’ if they are frequently bought together (Smith and Linden 2017; Hardesty 2019). By learning from customers’ datafied behaviour, the system generates as output a personalized list of items related to the browsed product. On the other side of the screen, millions of Amazon customers browse the recommended products and decide whether or not to purchase them. It is estimated that automated recommendations alone account for most of Amazon’s revenues (Celma 2010: 3). Since users largely rely on the algorithm to decide what to purchase next, and the algorithm analyses users’ purchasing patterns to decide what to recommend, a feedback loop is established: the model attempts to capture user preferences without accounting for the effect of its own recommendations and, as a result, input data are ‘confounded’ by output results (Chaney, Stewart and Engelhardt 2018; Salganik 2018). This techno-social process has implications that go well beyond the engineering aspects of the system. Feedback loops in recommender algorithms are believed to lead to the path-dependent amplification of patterns in the data, eventually encouraging the formation of filter bubbles and echo chambers (Jiang et al. 2019). The idea here is that the very same digital environment from which the algorithm learns is significantly affected by it. Or, in the words of STS scholars MacKenzie and Wajcman (1999), the social shaping of technology and the technological shaping of society go hand in hand. This leads to our two main sociological questions.
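To make the confounding mechanism concrete, here is a toy simulation written in Python. It is emphatically not Amazon’s actual system: the item names, the co-purchase counting rule and the assumption that users follow a recommendation 70 per cent of the time are all illustrative choices of mine.

```python
# A toy model of the feedback loop described above (all assumptions are
# illustrative; this is not Amazon's real recommender). Two products count as
# 'related' when frequently bought together; users partly follow the
# recommendations, so the model's outputs confound its own future input data.

import random
from collections import defaultdict

catalog = ["book_a", "book_b", "book_c", "book_d"]
co_purchases = defaultdict(int)              # (item, other_item) -> co-purchase count

def record_basket(first, second):
    co_purchases[(first, second)] += 1
    co_purchases[(second, first)] += 1

def recommend(item):
    """Return the product most frequently co-purchased with `item`, if any."""
    related = {b: n for (a, b), n in co_purchases.items() if a == item}
    return max(related, key=related.get) if related else None

random.seed(0)
record_basket("book_a", "book_b")            # one small organic pattern to start with

for _ in range(1000):                        # simulated shopping sessions
    first = random.choice(catalog)
    suggestion = recommend(first)
    if suggestion and random.random() < 0.7: # users often follow the algorithm...
        record_basket(first, suggestion)     # ...which reinforces the learned pattern
    else:
        other = random.choice([c for c in catalog if c != first])
        record_basket(first, other)

# a few 'related' links end up far more frequent than random shopping alone
# would produce: a path-dependent pattern the model itself helped to create
print(sorted(co_purchases.items(), key=lambda kv: -kv[1])[:4])
```

The point of the sketch is that the co-purchase counts the model will learn from tomorrow are already, in part, a product of what it recommended today, which is precisely the confounding described above.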

The first one is about the social shaping of algorithmic systems, or the culture in the code (Chapter 2). Platform algorithms like the one in the example above can autonomously ‘learn’ from users’ datafied discourses and behaviours, which carry traces of the cultures and social contexts they originated from. For instance, in 2017, Amazon’s recommender algorithm proposed as ‘related items’ the ingredients for making an artisanal bomb (Kennedy 2017). The recommendation system suggested to customers a deadly combination of products, most likely following the scary shopping habits of a bunch of (wannabe?) terrorists. That was one of the (many) cultures inscribed in the platform data, then picked up by the algorithm as a supposedly innocent set of correlational patterns. Far from being an isolated case, this incident is only one in a long list of algorithmic scandals covered by the press. Microsoft’s infamous chatbot ‘Tay’, which eventually started to generate racist tweets in response to interactions with social media users (Desole 2020), or the ‘sexist’ algorithm behind Apple’s credit card – allegedly offering higher spending limits to male customers (Telford 2019) – are other examples of how machine learning can go wrong.

The main way in which the critical literature surveyed above has dealt with these cases is through the notion of bias. Originating in psychology, this notion indicates a flawed, distorted and ‘unfair’ form of reasoning, implicitly opposed to an ideal ‘neutral’ and ‘fair’ one (Friedman and Nissenbaum 1996). Researchers have rushed to find practical recipes for ‘unbiasing’ machine learning systems and datasets, aiming to address instances of algorithmic discrimination. Still, these attempts are often ex post interventions that ignore the cultural roots of bias in AI (Mullainathan 2019), and risk paradoxically giving rise to new forms of algorithmic censorship. As Završnik puts it: ‘algorithms are “fed with” data that is not “clean” of social, cultural and economic circumstances […]. However, cleaning data of such historical and cultural baggage and dispositions may not be either possible or even desirable’ (2019: 11). While the normative idea of bias has often served to fix real-life cases of algorithmic discrimination and advance data policies and regulations, it hardly fits the sociological study of machine learning systems as social agents. In fact, from a cultural and anthropological perspective, the worldviews of any social group – from a national community to a music subculture – are necessarily biased in some way, since the socially constructed criteria for ultimately evaluating and valuating the world vary from culture to culture (Barth 1981; Latour and Woolgar 1986; Bourdieu 1977). Hence, the abstract idea of a ‘bias-free’ machine learning algorithm is logically at odds with this fundamental premise. ‘Intelligent’ machines inductively learn from culturally shaped human-generated data (Mühlhoff 2020). Just as humans undergo a cultural learning process to become competent social agents – a process also known as ‘socialization’5 – it can be argued that machine learning systems do so too, and that this bears sociological relevance (Fourcade and Johns 2020). Here I propose to see Amazon’s controversial recommendations and Tay’s problematic tweets as the consequences of a data-driven machine socialization. Since user-generated data bear the cultural imprint of specific social contexts, a first open question is: how are algorithms socialized?
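To give a sense of what such an ex post recipe can look like in practice, the sketch below shows a hypothetical post-processing intervention (my own illustration, not a method discussed in the book or attributed to any company): the trained model and its training data are left untouched, and group-specific decision thresholds are bolted on afterwards to adjust outcomes.

```python
# A hypothetical ex post 'unbiasing' patch (illustrative only): the scores of
# an already-trained model are kept as they are, and only the decision
# thresholds are adjusted per group after the fact.

def approve(score, group, thresholds):
    """Post-hoc decision rule applied on top of an already-trained scorer."""
    return score >= thresholds[group]

# hypothetical credit scores produced by a model trained on historical data
applicants = [("ada", 0.58, "group_x"), ("bob", 0.58, "group_y"), ("cyd", 0.49, "group_x")]

# thresholds tweaked after training to even out approval rates between groups
thresholds = {"group_x": 0.60, "group_y": 0.55}

for name, score, group in applicants:
    decision = "approved" if approve(score, group, thresholds) else "rejected"
    print(name, group, decision)
```

Whatever its practical merits, such a patch leaves the culturally shaped data, and whatever the model has learned from them, entirely untouched, which is the limitation stressed above.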

A second question raised by techno-social feedback mechanisms is about the so-called ‘humans in the loop’, and how they respond to algorithmic actions. This concerns the technological shaping of society (MacKenzie and Wajcman 1999), or what in this book I call code in the culture (Chapter 3). The outputs of recommender systems, search engines, chatbots, digital assistants, information-filtering algorithms and similar ‘calculative devices’ powerfully orient the everyday lives of billions of people (Amoore and Piotukh 2016; Beer 2017; Esposito 2017). We know that individuals rely heavily – and, to some extent, unwittingly – on algorithmic systems in their decision making. For instance, Netflix’s recommendation system is estimated to influence choice for ‘about 80% of hours streamed’, with ‘the remaining 20%’ coming ‘from search, which requires its own set of algorithms’ (Gomez-Uribe and Hunt 2015: 5). Similar figures can be found for other platforms, including Amazon, and they explain the spectacular marketing success of recommender algorithms (Celma 2010; Konstan and Riedl 2012; Ansari, Essegaier and Kohli 2000). Myriad feedback loops like the one sketched above constellate our digitally mediated existence, eventually producing self-reinforcement effects that translate into a reduced or increased exposure to specific types of content, selected based on past user behaviour (Bucher 2012a). By modulating the visibility of social media posts, micro-targeted ads or search results, autonomous systems not only mediate digital experiences, but ‘constitute’ them (Beer 2009), often by ‘nudging’ individual behaviours and opinions (Christin 2020; Darmody and Zwick 2020). What happens is that ‘the models analyze the world and the world responds to the models’ (Kitchin and Dodge 2011: 30). As a result, human cultures end up becoming algorithmic cultures (Striphas 2015).
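The self-reinforcement dynamic mentioned above can likewise be sketched as a toy simulation (illustrative only: the topic labels, the 80 per cent ‘top slot’ rule and the engagement probability are assumptions of mine, not any platform’s actual ranking logic).

```python
# A minimal sketch of algorithmic self-reinforcement (illustrative only):
# content the user engaged with in the past is ranked higher, hence shown
# more often, hence engaged with more, hence ranked even higher.

import random
from collections import Counter

topics = ["politics", "sport", "music", "cooking"]
engagement = Counter({t: 0 for t in topics})
engagement["music"] += 1                       # a single early click

random.seed(1)
for _ in range(500):
    ranked = sorted(topics, key=lambda t: engagement[t], reverse=True)
    shown = ranked[0] if random.random() < 0.8 else random.choice(topics)
    if random.random() < 0.6:                  # the user sometimes engages...
        engagement[shown] += 1                 # ...which feeds back into the ranking

print(engagement)   # one topic now dominates the feed: its early lead locked itself in
```

Exposure narrows around whichever topic happened to gain an early lead, a small-scale version of the reduced exposure to other content discussed above.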

Critical research has mainly dealt with this ‘social power’ of algorithms as a one-way effect (Beer 2017), with the risk of putting forward forms of technological determinism – implicit, for instance, in most of the literature about filter bubbles (Bruns 2019: 24). Yet, recent studies show that the outputs of autonomous machines are actively negotiated and problematized by individuals (Velkova and Kaun 2019). Automated music recommendations or micro-targeted ads are not always effective at orienting the taste and consumption of platform users (Siles et al. 2020; Ruckenstein and Granroth 2020; Bucher 2017). Algorithms do not unidirectionally shape our datafied society. Rather, they intervene within it, taking part in situated socio-material interactions involving both human and non-human agents (Law 1990; D. Mackenzie 2019; Burr, Cristianini and Ladyman 2018; Orlikowski 2007; Rose and Jones 2005). Hence, the content of ‘algorithmic culture’ (Striphas 2015) is the emergent outcome of techno-social interactional dynamics. From this point of view, my deciding whether or not to click on a recommended book on Amazon represents an instance of human–machine interaction – which is, of course, heavily engineered to serve the commercial goals of platforms. Nonetheless, in this digitally mediated exchange, both the machine learning algorithm and I maintain relative margins of freedom. My reaction to recommendations will be immediately measured by the system, which will behave differently in our next encounter, also based on that feedback. On my end, I will perhaps discover new authors and titles thanks to this particular algorithm, or – as often happens – ignore its automated suggestions.

Paradoxically, since machine learning systems adapt their behaviour probabilistically based on input data, the social outcomes of their multiple interactions with users are difficult to predict a priori (Rahwan et al. 2019; Burr, Cristianini and Ladyman 2018; Mackenzie 2015). They will depend on individual actions and reactions, on the specific code of the algorithm, and on the particular data at the root of its ‘intelligence’. In order to study how algorithms shape culture and society, Neyland (2019) suggests we leave aside the abstract notion of algorithmic power and try instead to get to know autonomous machines more closely, by looking at their ‘everyday life’. Like ‘regular’ social agents, the machine learning systems embedded in digital platforms and devices take part in the social world (Esposito 2017) and, as with the usual subjects of sociological investigation, the social world inhabits them in turn. A second open question for a sociology of algorithms is therefore: how do socialized machines participate in society – and, by doing so, reproduce it?

These open questions about the culture in the code and the code in the culture are closely related. A second-order feedback loop is implicit here, one that overarches the countless interactions between algorithms and their users. It consists in the recursive mechanism through which ‘the social’ – with its varying cultural norms, institutions and social structures – is reproduced by the actions of its members, who collectively make society while simultaneously being made by it. If you forget about algorithms for a second, you will probably recognize here one of the foundational dilemmas of the social sciences, traditionally torn by the complexities of micro–macro dynamics and cultural change (Coleman 1994; Giddens 1984; Bourdieu 1989a; Strand and Lizardo 2017). In fact, while it can be argued that social structures like class, gender or ethnicity ‘exercise a frequently “despotic” effect on the behaviour of social actors’ – producing statistically observable regularities in all social domains, from political preferences to musical taste – these very same structures ‘are the product of human action’ (Boudon and Bourricaud 2003: 10). Since the times of Weber and Durkheim, sociologists have attempted to explain this paradox, largely by prioritizing one of two main opposing views, which can be summarized as follows: on the one side, the idea that social structures powerfully condition and determine individual lives; on the other, the individualistic view of a free and agentic subject that makes society from below.

In an attempt to overcome the dualism between the ‘objective’ structuring of individuals and the ‘subjective’ character of social action, a French sociologist with a background in philosophy developed an original theoretical framework, whose cornerstone is the notion of ‘habitus’. He was Pierre Bourdieu (1930–2002), widely considered one of the most influential social thinkers of the twentieth century. Aiming to deal with a different (but related) dualism – that is, between ‘the technical’ and ‘the social’ in sociological research – this book seeks to treat machine learning algorithms ‘with the same analytical machinery as people’ (Law 1990: 8). I will build on the analytical machinery originally developed by Bourdieu, and argue that the particular ways in which these artificial social agents act in society should be traced back to the cultural dispositions inscribed in their code.
