Making Sense of AI - Anthony Elliott - Page 9
Complex Systems, Intelligent Automation and Surveillance
One sometimes hears the opinion that the industry of AI – the tech giants from Silicon Valley to Shenzhen – is inhospitable to critique. AI as a global enterprise has been, over a long period, the sworn enemy of critical thought about what it may control, whilst altogether blocking off engagement with questions of how new technologies might be controlled by other economic powers and political forces. While hospitable to engagement from consumer society, AI industry leaders have been remarkably silent on questions of control, power and exploitation. In retrospect, we can say that AI – both within industry and beyond – has often been presented as a neutral object. Against such trends towards diffusion or neutralization, the critical question remains this: what might it mean to read power and control back into the discourse of AI?
The notion that AI is associated with globalization is familiar enough. Science, technology and automated intelligent machines more generally play a fundamental role in the globalizing of AI. However, I seek throughout this book to reframe this issue in terms of an institutional account of AI, developed in terms of interdependent complex systems. The overall direction of AI is to create automated settings of action which are ordered in terms of complex systems at once robust and fragile. This is an important, although nuanced, point – and requires further elaboration. Many commentators emphasize the exponential dynamics of change in contemporary society as a result of AI, but this is often misleading because AI can also contribute to the stabilization of socio-technical systems for long stretches of time. The point, rather, is that AI facilitates persistent structures and durable systems on the one hand, and the break-up, breakdown or disappearance of complex systems on the other.
Understanding how AI intersects with complex systems which are dynamic, processual and unpredictable is of key importance for grasping the ways in which automated intelligent machines also function as a field of force, a realm of conflict and coercion in which power and control are produced, reproduced and transformed.
Some central notions from complexity theory are developed in this book, especially in chapter 4. In seeking to demonstrate the power interests realized in and through artificial intelligence, it is necessary to characterize the complex systems of AI. Over the course of the twentieth century and into the twenty-first century, a number of interdependent complex systems served to create a major field of AI, spun off from economic, bureaucratic, industrial and military forces, with each typically providing major resources for the advancement of AI in the contemporary world. These interdependent complex systems, as I discuss at length in chapter 4, include:
1 the scale, scope and extensity of AI in terms of research and innovation, industry and enterprise, as well as technologies and consumer products;
2 the intricate interplay of ‘new’ and ‘old’ technologies, and of the role of established technologies persisting or transforming within many modes of more recent AI and automated intelligent machines;
3 the globalization of AI and the centrality of AI technologies and industries in high-tech digital cities;
4 the growing diffusion of AI in modern institutions and everyday life;
5 the trend towards complexity, at once technological and social;
6 the intrusion of AI technologies into lifestyle change, personal life and the self;
7 the transformation of power as a result of AI technologies of surveillance.
The complex systems in which AI is enmeshed in the contemporary world are at once economic, social, political, material and technological. These interconnected complex systems, as I seek to show, should not be reduced to separate ‘factors’ or ‘processes’. There are no automated intelligent machines without complex systems. As a result, AI is a field characterized by transformation, unpredictability, innovation and reversal. The interdependent complex systems of AI are continually adapting, evolving and self-organizing.
In the early decades of the twenty-first century, there have been two major debates about technology and the general conditions of society and world order. One concerns a possible ‘autonomization’ of society and possibly even of culture and politics. The other concerns broad, massive changes in technological systems, sometimes labelled the coming AI revolution. AI is often presented as an alternative to existing society, which is represented by some critics as politically limited or by other critics as fundamentally flawed. The new, complex systems underpinning the stunning technological advances of AI are often pictured as a utopian pathway to a better world and a more equitable society. Advances in AI, especially powerful predictive algorithms, promise an ever-greater digitalized measure of the world. According to some critics, AI is nothing if not mathematical precision.
If we return to complexity theory, however, things are not so clear-cut. Utopian forecasts which emphasize precision or control (of people, of systems, of societies) fail to take into account that such interventions – even the so-called exquisitely precise technological interventions of AI – can generate unanticipated, unintended and opposite, or almost opposite, impacts. One reason for this is the force field of tiny but potentially major changes often described as ‘the butterfly effect’. In 1972, Edward Lorenz posed the question: ‘Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?’ Lorenz had been studying computer modelling of weather predictions, and he discovered that certain systems – not only meteorological systems, but traffic systems and transport systems – are intrinsically unstable and unpredictable.
Notwithstanding the gigantic transformations and combinations of new technology today, some critics invoke the butterfly effect thesis – of highly improbable and unexpected events – to argue that AI technologies, no matter how powerful and advanced, will always fall short of their predictive mark. James Gleick, in Chaos: Making a New Science, argues that AI is unable to secure the goal of precision control – or, we might add, controlled precision – because the smallest variations in measurement may dramatically disrupt the results.
It has been argued previously that separating right from wrong predictions of the future is a task that not even computational analysis can solve – and one that, if undertaken, is bound to fail in any case. Our complex world, and our opaque lives and social interactions within it, are far more labyrinthine, and even chaotic, than the mathematical precision of AI allows. This does not mean, however, that all predictive algorithms circulate in a self-referential, sealed-off technical domain; from the fact that AI cannot explain, or even reveal, the complexity that shapes social events and global trends, it does not follow that automated intelligent machines do not influence global complexity or the engendering of catastrophic change. Perhaps instead of talking about the long-dreamt-of controlled precision, or precise control, of AI, it would be more in keeping with the conditions of current global systems to speak of algorithmic cascades: a never-ending, always incomplete, open-ended and unfinished process whereby the consequences of human–machine interactions spread quickly, irreversibly and often chaotically throughout interdependent global systems. These algorithmic cascades might consist of abrupt switches, sudden collapses, system trips, phase transitions or chaos points.
A recent example of such an algorithmic cascade has been the dramatic militarization of automated weapons systems, such as parasite unmanned aerial vehicles (UAVs). These UAVs are in effect tiny flying sensors, with automatically operating algorithms processing information, and they have significantly disturbed the assumption that the nation-state has a monopoly on the means of violence, as well as contributing to the proliferation of new wars. Similar algorithmic cascades can be identified throughout the fields of healthcare, education and social welfare, as well as work, employment and unemployment.
The point is that a new cloud of uncertainty appears with the emergence, spread and dissemination of algorithmic cascades. Such AI-driven change is non-linear; there is no easy connecting line between causes and effects. Moreover, algorithmic cascades are neither contrary nor opposed to the complexity, or even the chaotic feedback loops, of social organization and social systems; they are, rather, a newly added dimension of complex global systems and, far from arresting the dynamics of those systems, add fuel to the fire.
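The abruptness of such cascades – no smooth line from the size of a cause to the size of its effect – can be illustrated with a classic toy model from the social-science literature on thresholds. The sketch below is an illustrative analogy of my own, not anything proposed in the book: in a Granovetter-style threshold model, each agent adopts a behaviour once the number of prior adopters reaches its personal threshold, and removing a single agent can tip the whole system from a complete cascade to a stalled one.

```python
# Illustrative analogy, not from the book: a Granovetter-style threshold
# model in which a tiny change in composition flips the system between
# a full cascade and an immediately stalled one.

def cascade_size(thresholds):
    """Iterate to a fixed point: in each round, an agent adopts once
    the count of prior adopters reaches its threshold."""
    adopted = 0
    while True:
        new = sum(1 for t in thresholds if t <= adopted)
        if new == adopted:          # fixed point: nobody else tips
            return adopted
        adopted = new

uniform = list(range(100))               # thresholds 0, 1, ..., 99
print(cascade_size(uniform))             # -> 100: everyone adopts, one by one

gapped = [t for t in uniform if t != 1]  # remove the one agent with threshold 1
print(cascade_size(gapped))              # -> 1: the cascade stalls at once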
The term ‘interdependent complex systems’ can be misleading, since it leads many people to think of either the cold, detached world of bureaucratic administration or the technical terrain of computational classification. Discussions of technological innovation, as we will see, often tend to assume that AI operates as an ‘enhancement’ for already formed individuals to deploy in their lifestyles, careers, families and wider social interactions. This is perhaps true at some trivial level, but what such writers often tend to miss is that AI technologies are supporting an equally profound transformation of cultural identity. Smartphones, self-driving cars, automated office environments, chatbots, face-recognition technology, drones and now the integration of all these as ‘smart cities’ reconfigure ways of doing things and forms of activity so as to cultivate new configurations of personhood. Just think, for example, of smartphones. Is it right to say that people have these intelligent machines, or are people thoroughly absorbed into the machine? Licklider spoke of a ‘man-machine symbiosis’, as we have seen. Whilst we cannot speak of ‘man’ any more in such a universal form, Licklider’s general argument arguably holds good. My contention throughout this book is that a critical understanding of AI technologies requires a re-evaluation of the kinds of subjecthood they foster, while an outline of newly emergent cultural identities must include an elaboration of their relation to AI and automated intelligent machines. But, again, it is essential to see that the emergence of new individual identities or lifestyle options does not operate only according to personal preference or consumer choice – as much of the discussion of the culture of AI tends to assume.
This brings us back to interdependent complex systems. AI is not simply ‘external’ or ‘out there’; it is also ‘internal’ or ‘in here’. AI technologies intrude into the very centre of our lives, deeply influencing personal identity and restructuring forms of social interaction. To say this is to say that AI powerfully impacts how we live, how we work, how we socialize and how we create intimacy, as well as countless other aspects of our public and private lives. This is not to say, however, that AI is simply a private matter or personal affair. If AI cultivates new configurations of cultural identity, these emergent algorithmic forms of identity are structured, networked and enmeshed in economies of technology. That is to say, today’s profound algorithmic transformation of cultural identity is intricately interwoven with interdependent complex systems.
If AI intrudes into the realms of personal life, lifestyle change and the self, one development which is especially prominent is the ever-increasing automation of large tracts of everyday life. ‘Automated society’ and ‘automated life’ are intimately interwoven. In contemporary algorithmic societies, the automation of forms of life and sectors of experience is driven by an apparently invincible socio-technical dynamic. Automation in this sense has a profoundly transformative impact on almost everyone, a phenomenon which carries both positive and negative consequences. On the positive side of the equation, the promises of automated life include significantly improved efficiency and new freedoms. In the area of healthcare, for example, more and more people now wear self-tracking devices, which monitor their bodies and provide data on sleep patterns, energy expended, heartbeat and other health information. Medical sensors worn by patients provide medical practitioners with biometric information – such as monitoring glucose in diabetics – that is vital for the management of chronic diseases. Advances in medical imaging facilitate the automated exchange of data from hospitals to doctors anywhere in the world. Medical robots can be used to conduct operations using real-time data-collection over indefinite distances and time differences. A parallel set of developments is occurring in education. International collaborative projects can now be conducted with researchers and students communicating with each other anywhere in the world through real-time language translation using applications like Microsoft Teams and OneNote. Personalized learning that deploys AI to adapt teaching methods and pedagogic materials to students studying at their own pace has been rolled out by various online higher education institutions. Automated grading software that relieves schoolteachers of the repetition of assessing tests is now commonplace, freeing up time for educators to work more creatively with students.
Such developments have obvious advantages, and many commentators argue that algorithmic intelligent machines bring consistency and objectivity to public service delivery, thus creating huge benefits for society as a whole. Automated systems also provide revolutionary changes, it is argued, to many routine tasks of everyday life. AI is used to automatically craft personalized email and write tweets or blog posts. Smart homes are directly automated environments, from climate control and air conditioning to personal security systems. At work, professionals and senior managers increasingly make decisions powered by automated tools, including software allocating tasks to subordinates as well as the automated evaluation of their performance. In retail, shoppers scan barcodes and pay at the checkout with their smartphones, consumers reserve products and arrange delivery without ever having to interact with store staff, and ever-rising customer expectations and complaints are processed by automated customer-care centres. Indeed, the role of AI in reshaping everyday life has led Stanford computer scientist John Koza to speak of the age of ‘automated invention machines’. Koza underscores the arrival of a world where smart algorithms don’t just replicate existing commercial designs but ‘think outside the box’, creating new lifestyle options and driving consumer life in entirely new directions.
The recent explosion in data-gathering, data-harvesting and data-profiling underlies not only challenges confronting everyone in terms of lifestyle change and the politics of identity, but also institutional transformations towards a new surveillance reality. A number of interesting questions arise about the quest for collection, collation and coding of ever-larger amounts of data, especially personal data, as regards the rise of digital surveillance. What are the tacit assumptions that underpin contemporary uses of AI technologies on the one hand and questions about data ownership on the other hand? Are people right to be worried that the digital collection of public and private data – from companies and governments alike – appears to become ever more intrusive? How are AI technologies marshalled by companies to manipulate consumer choice? How have governments deployed AI to control citizens? What are the human rights implications inherent in the current phase of AI? What implications follow from AI-powered data-harvesting for self, social relationships and lifestyle change? How can AI and other new technologies be used to counter unfair disadvantages people routinely encounter on the basis of their race, age, gender and other characteristics? What are the emergent connections between data-collection on the most intimate aspects of personal experience and the changing nature of power in the contemporary world? Chapter 7 addresses all of these issues.
When looking at the institutional dimensions of digital surveillance, certain immanent trends are fairly clear. In the contemporary period, ‘surveillance’ refers to two related forms of power. One is the accumulation of ‘mass data’ or ‘big data’, which can be used to influence, reshape, reorganize and transform the activities of individuals and communities about whom the data is gathered. AI has become strongly associated with this codification of data – from the predictive analytics of consumer behaviour to the tracking and profiling of various minority groups. The increasingly automated character of data traces has led many critics to warn of the erosion, undermining and even collapse of human rights in the contemporary age. Bruce Schneier, in Data and Goliath, argues that the capacity of corporations and governments to expand the range of available data pertaining to citizens, consumers and communities is greater than it has ever been. An example of this is the kinds of data collected by tech giants such as Google, Facebook, Verizon and Yahoo; such large masses of digital data pertaining to the lives of individuals are used, increasingly, to generate profit as well as future commercial and administrative value. Digital surveillance is thus central not only to the AI-powered data knowledge economy, but also to government agencies and related state actors.
Another aspect of digital surveillance is that of the control of the activities of some individuals or sections of society by other powerful agents or institutions. In AI-powered societies, the concentration of controlled activities arises from the deployment of digital technologies to watch, observe, trace, track, record and monitor others through more-or-less continuous surveillance. As discussed in detail in chapter 7, some critics follow Michel Foucault in his selection of Jeremy Bentham’s Panopticon as the prototype of social relations of power – adjusted to digital realities with ‘prisoners’ of today’s corporate offices or private residences kept under a form of twenty-four-hour digital surveillance. Certain kinds of technological monitoring – from CCTV cameras in neighbourhoods equipped with facial recognition software to automated data-tracking through Internet search engines – lend support to this notion that digital surveillance is ever-present and increasingly omnipotent.
Yet in fact the control of social activities through digital surveillance is by no means complete in algorithmic modernity, where data flows are fluid, liquid and often chaotic, and many forms of self-mobilizing and decentred contestation of surveillance appear. This development has not come about, as followers of Foucault contend, because of panoptical power advancing digitalization-from-above techniques of surveillance. Rather, what occurs in the present-day deregulated networks and platforms of digital technology is users sharing information with others, uploading detailed personal information, downloading data through search, retrieval and tagging on cloud computing databases, and a host of other behaviours which contribute to the production, reproduction and transformation of the dense nets that I call disorganized surveillance. We should understand this development in terms of organized, control-from-above, Panopticon-style rule of administered surveillance being phased out and decommissioned. Disorganized surveillance is not so much a control of activities of subordinates by superiors as the dispersion and liquidization of the monitoring of self and others in coactive relation to automated intelligent machines.