What is Artificial Intelligence?

In the case of artificial intelligence, it is widely, though erroneously, assumed that its history can and ought to be mapped, measured and retold solely by recourse to AI studies – and that if any of this history falls outside the purview of the disciplines of engineering, computer science or mathematics, it might justifiably be ignored or assigned perhaps only a footnote within the canonical bent of AI studies. Such an approach, were it attempted here, would aim at reproducing the rather narrow range of interests of much work in the AI field – for example, definitional problems or squabbles concerning the ‘facts of the technology’.1 What, precisely, is machine learning? How did machine learning arise? What are artificial neural networks? What are the key historical milestones in AI? What are the interconnections between AI, robotics, computer vision and speech recognition? What is natural language processing? Such definitional matters and historical facts about artificial intelligence have been admirably well rehearsed by properly schooled computer scientists and experienced engineers the world over, and detailed discussions are available to the reader elsewhere.2

As signalled in its title, this book is a study in making sense of AI, not of AI sense-making. It is concerned not with the technical dimensions or scientific innovations of AI, but with AI in its broader social, cultural, economic, environmental and political dimensions. I am seeking to do something which no other author has attempted. While the existing literature tends to focus on isolated scientific pioneers in the retelling of the history of AI, the present chapter concerns itself more with cultural shifts and conceptual currents. Something of the same ambition permeates the book as a whole. While much of the existing literature tends to concentrate on specific domains in relation to issues such as work and employment, racism and sexism, or surveillance and ethics, I have sought to register something of the wealth of intricate interconnections between such domains – all the way from lifestyle change and social inequalities to warfare and global pandemics such as COVID-19. In fact, I spend the bulk of my time in this book examining these multidimensional interrelationships, precisely because such interconnections are rarely discussed in the field of AI studies. It is, in particular, the close affinity and interaction between AI technologies and complex digital systems – phenomena that in our own time are growing in impact and significance, as well as in the opportunities and risks they portend – that I approach, carefully and systematically, in the chapters that follow. Finally, while the existing literature tends to focus on the tech sector in one country or AI industries in specific regions, I have sought to develop a global perspective and offer comparative insights. A general social theory of the interconnections between AI, complex digital systems and the coactive interactions of human–machine interfaces remains yet to be written. But in developing the synthetic approach I outline here, my hope is that this book contributes to making sense of the increasingly diverse blend of humans and machines in the field of automated intelligent agents, and to framing all this theoretically and sociologically with reflections on the dynamics of AI in general and its place in social life.

There is more than one way in which the story of AI can be told. The term ‘artificial intelligence’, as we will examine in this chapter, consists of many different conceptual strands, divergent histories and competing economic interests. One way to situate this wealth of meaning is to return to 1956, the year the term ‘artificial intelligence’ was coined. This occurred at an academic event in the USA, the Dartmouth Summer Research Project, where researchers proposed ‘to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves’.3 The Dartmouth Conference was led by the American mathematician John McCarthy, along with Marvin Minsky of Harvard, Claude Shannon of Bell Telephone Laboratories and Nathaniel Rochester of IBM. Why the conference organizers chose to put the adjective ‘artificial’ in front of ‘intelligence’ is not evident from the proposal for funding to the Rockefeller Foundation. What is clear from this famous six-week event at Dartmouth, however, is that AI was conceived as encompassing a remarkably broad range of topics – from the processing of language by computers to the simulation of human intelligence through mathematics. Simulation – a kind of copying of the natural, transferred to the realm of the artificial – was what mattered. Or, at least, this is what McCarthy and his colleagues believed, designating AI as the field in which to try to achieve the simulation of advanced human cognitive performance in particular, and the replication of the higher functions of the human brain in general.

There has been a great deal of ink spilt on seeking to reconstruct what the Dartmouth Conference organizers were hoping to accomplish, but what I wish to emphasize here is the astounding inventiveness of McCarthy and his colleagues, especially their boldness in pressing untried and untested scientific strategies and intellectual hunches into the newly designated terrain of artificial intelligence. Every culture lives by the creation and propagation of new meanings, and it is perhaps not surprising – at least from a sociological standpoint – that the Dartmouth organizers should have favoured the term ‘artificial’ at a time when American society was held in thrall to all things new and shiny. 1950s America was of the ‘new is better’, manufactured-rather-than-natural, shiny-obsessed sort. It was arguably the dawning of ‘the artificial era’: the epoch of technological conquest and ever more sophisticated machines, designed to overcome the problems of nature. Construction of various categories and objects of the artificial was among the most acute cultural obsessions. Nature was the obvious outcast. Nature, as a phenomenon external to society, had in a certain sense come to an ‘end’ – the result of the domination of culture over nature. And, thanks to the dream of an infinity of experiences to be delivered by artificial intelligence, human nature was not something simply to be discarded; its augmentation through technology would be an advance, a shift to the next frontier. This was the social and historical context in which AI was ‘officially’ launched at Dartmouth: a world brimming with hope and optimism, in which social approval was being redistributed away from all things natural and towards the artificial. In a curious twist, however, jump forward some sixty or seventy years and it is arguably the case that, in today’s world, the term ‘artificial intelligence’ might not have been selected at all. The terrain of the natural, the organic, the innate and the indigenous is far more ubiquitous and relentlessly advanced as a vital resource for cultural life today, and indeed things ‘artificial’ are often viewed with suspicion. The construction of the ‘artificial’ is no longer the paramount measure of socially conditioned approval and success.

Where does all of this leave AI? The field has advanced rapidly since the 1950s, but it is salutary to reflect on the recent intellectual history of artificial intelligence because that very history suggests it is not advisable to try to compress its wealth of meanings into a general definition. AI is not a monolithic theory. To demonstrate this, let’s consider some definitions of AI – selected more or less at random – currently in circulation:

1 the creation of machines or computer programs capable of activity that would be called intelligent if exhibited by human beings;

2 a complex combination of accelerating improvements in computer technology, robotics, machine learning and big data to generate autonomous systems that rival or exceed human capabilities;

3 technologically driven forms of thought that make generalizations in a timely fashion based on limited data;

4 the project of automated production of meanings, signs and values in socio-technical life, such as the ability to reason, generalize, or learn from past experience;

5 the study and design of ‘intelligent agents’: any machine that perceives its environment, takes actions that maximize its chances of achieving its goals, and optimizes learning and pattern recognition;

6 the capability of machines and automated systems to imitate intelligent human behaviour;

7 the mimicking of biological intelligence to enable a software application or intelligent machine to act with varying degrees of autonomy.

There are several points worth highlighting about this list. First, some of these formulations define artificial intelligence in relationship to human intelligence, but it must be noted that there is no single agreed definition, much less an adequate measurement, of human intelligence. AI technologies can already process our email for spam, recommend what films we might like to watch and scan crowds for particular faces, but these accomplishments do not bear comparison with the full range of human capabilities. It might, of course, be possible to compare AI with rudimentary numeric measurements of human intelligence such as IQ, but it is surely not hard to show what is wrong with such a case. There is a difference between the numeric measurement of intelligence and native human intelligence. Cognitive processes of reasoning may indeed provide a yardstick for assessing progress in AI, but there are also other forms of intelligence. How people intuit each other’s emotions, how people live with uncertainty and ambivalence, or how people gracefully fail others and themselves in the wider world: these are all indicators of intelligence not easily captured by this list of definitions.

Second, we may note that some of these formulations of AI seem to raise more questions than they can reasonably hope to answer. On several of these definitions, there is a direct equation between machine intelligence and human intelligence, but it is not clear whether this addresses only instrumental forms of (mathematical) reasoning or also emotional intelligence. What of affect, passion and desire? Is intelligence the same as consciousness? Can non-human objects have intelligence? What happens to the body in equating machine and human intelligence? The human body is arguably the most palpable way in which we experience the world; it is the flesh and blood of human intelligence. The same is not true of machines with faces, and it is fair to say that all of the formulations on this list displace the complexity of the human body. These definitions are, in short, remorselessly abstract: indifferent to different forms of intelligence, and detached from the whole human business of emotion, affect and interpersonal bonds.

Third, we can note that some of these formulations are sanguine, others ambiguously so, and some altogether over-estimate the capabilities of AI today and in the near future. An interesting feature of many of these formulations is that they tend to flatten AI into a monolithic entity. Today, AI can be a virtual personal assistant, a self-driving car, a robot, a smart lift or a drone. But it is not obvious that many of these formulations can easily cope with these gradations or differentiations of machine intelligence. A smart elevator using AI to manage the flow of demand in an office building, based on data collected from daily usage, is essentially goal-orientated, with a single technological objective. It is an example of weak or narrow AI, where machine intelligence can only do what it is programmed to do, based on a very limited range of contexts and parameters. Examples of narrow AI range from Google Search to facial recognition software to Apple’s Siri, and these are all quite basic kinds of automated machine intelligence. They have been programmed to perform a single task well yet cannot switch to perform other types of tasks – or, at least, not without considerable further labour performed by engineers and computer scientists. On the other hand, there are more sophisticated forms of AI. Deep AI, or what is termed artificial general intelligence, is an advanced form of self-learning machine intelligence seeking to replicate human intelligence. Unlike narrow AI technologies, deep AI combines insights from different fields of activity, performs multiple tasks of intelligence and displays considerable flexibility and dexterity. Deep AI entails the harnessing of massive computational processing power – for instance, the Summit supercomputer, which, in performing 200 million billion calculations per second, is among the fastest computers in the world – to machine learning algorithms. Arguably one of the best operational examples of deep AI is IBM’s Watson, a system which combines supercomputing with deep learning algorithms: such algorithms are designed to optimize their performance against specified data-processing criteria (such as speech or facial recognition, or medical diagnosis) by self-adjusting the thresholds of what is relevant or irrelevant in the data under analysis. Another AI variant is superintelligence, which does not yet exist but is forecast by many specialists to involve a fully fledged machine intelligence that outstrips human intelligence in every domain, including both cognitive reasoning and social skills. Superintelligence has long been the preserve of Hollywood science fiction, and the personalized AI system of Samantha in the film Her is a signal example. (We will turn to consider technological advances related to superintelligence in more detail in Chapter 8.)
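
For readers who want a concrete feel for what ‘self-adjusting the thresholds of what is relevant or irrelevant’ can mean in practice, the following sketch may help. It is a deliberately minimal illustration – a toy perceptron-style learner with invented data, not the architecture of Watson, Summit or any commercial system – in which the program nudges its weights and decision threshold whenever it misclassifies an example:

# A minimal, illustrative sketch of a 'narrow AI' learner that self-adjusts
# its decision threshold from data. A toy perceptron-style rule, not any
# real system's architecture: it shifts its weights and threshold every
# time it misclassifies an example.

def train_threshold_classifier(examples, labels, epochs=20, lr=0.1):
    """examples: feature vectors; labels: 1 ('relevant') or 0 ('irrelevant')."""
    weights = [0.0] * len(examples[0])
    threshold = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            score = sum(w * xi for w, xi in zip(weights, x))
            prediction = 1 if score > threshold else 0
            error = y - prediction  # -1, 0 or +1
            # Self-adjustment: move weights and threshold toward the data.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            threshold -= lr * error
    return weights, threshold

# Hypothetical toy data: 'relevant' items have a larger first feature.
data = [[2.0, 0.1], [1.8, 0.3], [0.2, 1.0], [0.1, 0.8]]
labels = [1, 1, 0, 0]
print(train_threshold_classifier(data, labels))

The point of the sketch is precisely its narrowness: the program can learn this one discrimination and nothing else, which is the gap between narrow AI and the general intelligence discussed above.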

One of the problems of current debate is that there is a great deal of hype, a host of misconceptions and too many overblown claims about AI. One way of reading AI against the grain is to avoid the specialist definitions circulating in the field and to talk instead about resistances, disorders and the historical past. It is always useful to get a sense of how a specialist discourse is approached by those outside its representative institutions, and it similarly helps to look at the prehistory of an emergent technology. The line between the ‘official’ and the ‘unofficial’ versions of AI is not always easy to cross, but I want to focus briefly on aspects of the prehistory of AI in order to better grasp the constitution of the whole discourse of AI. That is to say, I want to focus on the function of ideas within and around AI – including the aspirations, objectives and dreams of technologists – in order to better situate today’s technological realities as well as their manifold distortions. In other words, my aim here is to return AI to its own displaced history.

An objection to the glossy image presented by various tech companies – that AI has only recently arrived, and arrived fully formed – is that machine intelligence and mechanical automatons are, in fact, historical through and through. Those advocating the technological hype of our times may not wish to be embroiled in trawling through the histories and counter-histories of various technologies, but expanding the historical boundaries of the discourse of AI – bringing back into consideration those developments banished to the background and left out of the official narrative – is essential to combating the idea that AI is a straightforward, linear story running roughly from the 1956 Dartmouth Conference to the present day. The developments that unite an otherwise disparate and apparently unconnected series of topics in the emergence of AI require us to go back to the eighth century BC, when automatons and robots crop up in Greek myths such as that of Talos of Crete.4 Or to medieval Mesopotamia, where the Muslim polymath Ismail Ibn al-Razzaz al-Jazari invented automatic gates and automated doors driven by hydropower, whilst simultaneously penning his programmatic text, The Book of Knowledge of Ingenious Mechanical Devices.5 An alternative historical starting point might be the ancient philosophy of Aristotle, who wrote of artificial slaves in his foundational Politics.6

Fast forward to the early modern period in Europe, where the landscape of automatons is still largely about dreaming but also where conflicts between human and machine intelligence become amenable to, and await, resolution. Early modern European thought, in cooperation with scientific reason, found its way towards such conflict resolution under the twin banners of calculation and mechanics. The French philosopher, mathematician and scientist René Descartes compared the bodies of animals to complex machines. In the political thought of Thomas Hobbes, a mechanical theory of cognition stood for the human territory over which reason extended. In the practice of French mathematician and inventor Blaise Pascal, arithmetical calculations stood for the feasibility and ultimate triumph of the theory of probability – as this prodigious physicist and Catholic theologian worked obsessively to build mechanical prototypes and calculating machines. Fast forward again some centuries and we find writers and artists alike viewing a society leaning solely on human attributes or natural impulses with considerable suspicion. Throughout the modern era, from Mary Shelley’s Frankenstein to Karel Čapek’s Rossum’s Universal Robots, reality was to be shaped, thought about and interpreted with reference to automatons, cyborgs and androids. At the dawn of the twentieth century, the dream of automated machines was brought finally and firmly inside the territory where empirical testing is done, most notably with a tide-predicting mechanical computer – commonly known as Old Brass Brains – developed by E. G. Fisher and Rollin Harris.7 The world had, at long last, shifted away from the ‘natural order of things’ towards something altogether more magical: the ‘artificial order of mechanical brains’.

For most people today, AI is equated with Google, Amazon or Uber, not ancient philosophy or mechanical brains. However, there remain earlier, historical prefigurations of AI which still resonate with our current images and cultural conversations about automated intelligent machines. One such pivot point comes from the UK in the early 1950s, when the English polymath Alan Turing – sometimes labelled the grandfather of AI – raised the key question ‘can machines think?’8 Turing, who had been involved as a mathematician in important enemy code-breaking during World War II, raised the prospect that automated machines represent a continuation of thinking by other means. Thinking, in the hands of Turing, becomes a kind of conversation, a question-and-answer session between human and machine. Turing’s theory of machines thinking was based on a British cocktail party game, known as ‘the imitation game’, in which a person was sent into another room of the house and the remaining guests had to try to guess their assumed identity. In Turing’s reworking of this game, a judge would sit on one side of a wall and, on the other side, there would be a human and a computer. The judge would chat to the mysterious interlocutors on the other side of the wall, and the computer’s aim was to trick the judge into thinking that its answers were, in fact, coming from the flesh-and-blood agent. This experiment became known as the Turing Test.
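
The structure of Turing’s test is simple enough to be expressed as a short program. The sketch below is purely illustrative and makes assumptions of its own – the two stand-in interlocutors and the naive judge are invented placeholders, not anything Turing specified – but it captures the protocol: a judge converses through text alone with two hidden respondents and must guess which is the machine:

import random

def human_reply(question):
    return "Let me think about that for a moment."   # placeholder human

def machine_reply(question):
    return "Let me think about that for a moment."   # placeholder machine

def imitation_game(questions, judge):
    """One session of the game: the judge sees only labelled transcripts."""
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)              # the wall: identities are hidden
    labels = dict(zip("AB", respondents))
    transcript = {lab: [(q, reply(q)) for q in questions]
                  for lab, (_, reply) in labels.items()}
    guess = judge(transcript)                # judge returns 'A' or 'B'
    machine_label = next(lab for lab, (kind, _) in labels.items()
                         if kind == "machine")
    return guess == machine_label            # True if the machine is unmasked

# A judge with no way to tell the interlocutors apart guesses at chance,
# which is exactly the condition under which a machine 'passes' the test.
naive_judge = lambda transcript: random.choice(["A", "B"])
print(imitation_game(["Can machines think?"], naive_judge))

Nothing in the protocol inspects how the answers are produced; it measures only whether the judge can tell the difference – which is exactly Turing’s point, and exactly what Searle would later contest.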

There has been, then, a wide and widening gamut of automated technological advances – symptomatic of the shift from thinking machines that may equal the intelligence of humans to thinking machines that may exceed it – all of which have been and remain highly contested. Whether automated intelligent machines are likely to surpass human intelligence, not only in practical applications but in a more general sense, figures prominently among the major issues of our times and our lives in these times. Notwithstanding the notoriously overoptimistic claims of various AI researchers and futurists, there has been an overwhelming sense of crisis among scientists, philosophers and theorists of technology alike, in greater or smaller measure: the feverish ambition to establish whether AI could ever really be smarter than humans has resulted in a new structure of feeling in which humanity is ‘living at the crossroads’. There have been, it should be noted, some very vocal and often devastating critiques of AI developed in this connection. The philosopher Hubert Dreyfus was an important early critic. In his book What Computers Can’t Do, Dreyfus argued that the equals sign placed between machine and human intelligence in AI was fundamentally flawed. To the question of whether we might eventually regard computers as ‘more intelligent’ than humans, Dreyfus answered that the structure of the human mind (both its conscious and unconscious architectures) could not be reduced to the mathematical precepts which guide AI. Computers, as Dreyfus put it, altogether lack the human ability to understand context or grasp situated meaning. Essentially reliant on a simple set of mathematical rules, AI is unable, Dreyfus argued, to grasp the ‘systems of reference’ of which it is a part.

Another, arguably more damaging, critique of the limitations of equating human and machine intelligence was developed by the American philosopher John Searle. Searle was strongly influenced by the philosophical departures of Ludwig Wittgenstein, especially Wittgenstein’s demonstration that what gives ordinary language its precision is its use in context. When people meet and mingle, they use contextual settings to define the nature of what is said. This effortful contextual activity of putting meaning together, practised and rehearsed daily by humans, is not something that AI can substitute for, however. To demonstrate this, Searle provided what he famously termed the ‘Chinese Room Argument’. As he explains:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.9

The upshot of Searle’s arguments is clear. Machine and human intelligence might mirror each other in chiasmic juxtaposition, but AI is not able to capture the human ability of constantly connecting words, phrases and talk within practical contexts of action. Meaning and reference are, in short, not reducible to a form of information processing. It was Wittgenstein who pointed out that a dog may know its name, but not in the same way that its master does; Searle demonstrates that the same is true for computers. It is this human ability to understand context, situation and purpose within modalities of day-to-day experience that Searle, powerfully and provocatively, asserts in the face of comparisons between human and machine intelligence.
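
Searle’s room can itself be rendered as a few lines of code, which makes the force of the argument vivid. The rule-book entries below are invented for illustration, and a real rule book would be vastly larger; what matters is that the procedure manipulates symbol shapes only, so that producing a ‘correct answer’ at no point requires understanding:

# An illustrative sketch of the Chinese Room: a rule book mapping incoming
# symbol strings to outgoing ones. The entries are invented examples; the
# procedure matches shapes and passes out shapes: syntax without semantics.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # reads as fluent conversation...
    "今天天气如何？": "今天天气很好。",  # ...but it is only pattern lookup
}

def answer(symbols):
    """Follow the book's instructions: find the shape of the input and pass
    out the shape it prescribes. No step involves knowing what either means."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: 'please repeat'

print(answer("你好吗？"))  # a 'correct answer', produced with zero understanding

However far the lookup table were extended, the person (or processor) executing it would still, in Searle’s terms, not understand a word of Chinese.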
