Critical algorithm studies

When algorithms started to be applied to the digital engineering of the social world, only a few sociologists took notice (Orton-Johnson and Prior 2013). In the early 2000s, the sociological hype about the (then new) social networking sites, streaming platforms and dating services was largely about the possible emancipatory outcomes of an enlarged digital connectivity, the disruptive research potential of big data, and the narrowing divide between ‘real’ and ‘virtual’ lives (Beer 2009). However, at the periphery of academic sociology, social scientists working in fields like software studies, anthropology, philosophy, cultural studies, geography, Internet studies and media research were beginning to theorize and investigate the emergence of a new ‘algorithmic life’ (Amoore and Piotukh 2016). In the past decade, this research strand has grown substantially, disrupting disciplinary borders and setting the agenda of important academic outlets.4 Known as ‘critical algorithm studies’ (Gillespie and Seaver 2016), it proposes multiple sociologies of algorithms which tackle various aspects of the techno-social data assemblages behind AI technologies.

A major part of this critical literature has scrutinized the production of the input of automated calculations, that is, the data. Critical research on the mining of data through digital forms of surveillance (Brayne 2017; van Dijck 2013) and labour (Casilli 2019; Gandini 2020) has illuminated the extractive and ‘panopticist’ character of platforms, Internet services and connected devices such as wearables and smartphones (see Lupton 2020; Ruckenstein and Granroth 2020; Arvidsson 2004). Cheney-Lippold (2011, 2017) developed the notion of ‘algorithmic identity’ in order to study the biopolitical implications of web analytics firms’ data harnessing, aimed at computationally predicting who digital consumers are. Similar studies have also been conducted in the field of critical marketing (Cluley and Brown 2015; Darmody and Zwick 2020; Zwick and Denegri-Knott 2009). Furthermore, a number of works have questioned the epistemological grounds of ‘big data’ approaches, highlighting how the automated and decontextualized analysis of large datasets may ultimately lead to inaccurate or biased results (boyd and Crawford 2012; O’Neil 2016; Broussard 2018). The proliferation of metrics and the ubiquity of ‘datafication’ – that is, the transformation of social action into online quantified data (Mayer-Schoenberger and Cukier 2013) – have been identified as key features of today’s capitalism, which is seen as increasingly dependent on the harvesting and engineering of consumers’ lives and culture (Zuboff 2019; van Dijck, Poell and de Waal 2018).

As STS research did decades earlier with missiles and electric bulbs (MacKenzie and Wajcman 1999), critical algorithm studies have also explored how algorithmic models and their data infrastructures are developed, manufactured and narrated, often with the aim of making these opaque ‘black boxes’ accountable (Pasquale 2015). The ‘anatomy’ of AI systems is the subject of the original work of Crawford and Joler (2018), at the crossroads of art and research. Taking Amazon Echo – the consumer voice-enabled AI device featuring the popular Alexa interface – as an example, the authors show how even the most banal human–device interaction ‘requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data’ (Crawford and Joler 2018: 2). Behind the capacity of Amazon Echo to hear, interpret and efficiently respond to users’ commands, there is not only a machine learning model in a constant process of optimization, but also a wide array of accumulated scientific knowledge, natural resources such as the lithium and cobalt used in batteries, and labour exploited in the mining of both rare metals and data. Several studies have looked more closely into the genesis of specific platforms and algorithmic systems, tracing their historical evolution and practical implementation while simultaneously unveiling the cultural and political assumptions inscribed in their technicalities (Rieder 2017; D. MacKenzie 2018; Helmond, Nieborg and van der Vlist 2019; Neyland 2019; Seaver 2019; Eubanks 2018; Hallinan and Striphas 2016; McKelvey 2018; Gillespie 2018). Furthermore, since algorithms are also cultural and discursive objects (Beer 2017; Seaver 2017; Bucher 2017; Campolo and Crawford 2020), researchers have investigated how they are marketed and – as often happens – mythicized (Natale and Ballatore 2020; Neyland 2019). This literature shows how the fictitious representation of calculative devices as necessarily neutral, objective and accurate in their predictions is ideologically rooted in the techno-chauvinistic belief that ‘tech is always the solution’ (Broussard 2018: 7).

A considerable amount of research has also asked how and to what extent the output of algorithmic computations – automated recommendations, micro-targeted ads, search results, risk predictions, etc. – controls and influences citizens, workers and consumers. Many critical scholars have argued that the widespread delegation of human choices to opaque algorithms results in a limitation of human freedom and agency (e.g. Pasquale 2015; Mackenzie 2006; Ananny 2016; Beer 2013a, 2017; Ziewitz 2016; Just and Latzer 2017). Building on the work of Lash (2007) and Thrift (2005), the sociologist David Beer (2009) suggested that online algorithms not only mediate but also ‘constitute’ reality, becoming a sort of ‘technological unconscious’, an invisible force orienting Internet users’ everyday lives. Other contributions have similarly portrayed algorithms as powerful ‘engines of order’ (Rieder 2020); Taina Bucher’s research on how Facebook ‘programmes’ social life (2012a, 2018) is a case in point. Scholars have examined the effects of algorithmic ‘governance’ (Ziewitz 2016) in a number of research contexts, by investigating computational forms of racial discrimination (Noble 2018; Benjamin 2019), policy algorithms and predictive risk models (Eubanks 2018; Christin 2020), as well as ‘filter bubbles’ on social media (Pariser 2011; see also Bruns 2019). The political, ethical and legal implications of algorithmic power have been discussed from multiple disciplinary angles, and with varying degrees of techno-pessimism (see for instance Beer 2017; Floridi et al. 2018; Ananny 2016; Crawford et al. 2019; Campolo and Crawford 2020).

Given the broad critical literature on algorithms, AI and their applications – which goes well beyond the references mentioned above (see Gillespie and Seaver 2016) – one might ask why an all-encompassing sociological framework for researching intelligent machines and their social implications should be needed. My answer builds on a couple of questions which remain open, and on the understudied feedback loops lying behind them.
