The maze and the labyrinth
None of these encounters and discussions prepared me for the surprise I got when I began to scan the available literature more systematically. There is a lot of it out there already, and a never-ending stream of updates that keep coming in. I concluded that much of it must have been written in haste, as if trying to catch up with the speed of actual developments. Sometimes it felt like being on an involuntary binge, overloaded with superfluous information while feeling intellectually undernourished. Most striking was the fact that the vast majority of books in this area espouse either an optimistic, techno-enthusiastic view or a dystopian one. They are often based on speculations or simply describe to a lay audience what AI nerds are up to and how digital technologies will change people’s lives. I came away with a profound dissatisfaction about how issues and topics that I considered important were being treated: the approach was largely short-term and ahistorical, superficial and mostly speculative, often espousing a narrow disciplinary perspective, unable to connect technological developments with societal processes in a meaningful way, and occasionally arrogant in dismissing ‘the social’ or misreading it as a mere appendix to ‘the technological’.
Plenty of books on AI and digitalization continue to flood the market. Most of the literature is written in an enthusiastic, technology-friendly voice, but there is also a sharp focus on the dark side of digital technologies. The former either provide a broad overview of the latest developments in AI and their economic benefits, or showcase some recently added features that are intended to alleviate fears that the machines will soon take over. The social impact of AI is acknowledged, as is the desirability of cross-disciplinary dialogue. A nod towards ethical considerations has by now become obligatory, but other problems are sidestepped and expected to be dealt with elsewhere. Only rarely, for instance, do we hear about topics like digital social justice. Finding my way through the copious literature on AI felt at times like moving through a maze, a deliberately confusing structure designed to prevent escape.
In this maze there are plenty of brightly lit pathways, their walls lined with the latest gadgetry, proudly displaying features designed to take the user into a virtual wonderland. The darker groves in the maze are filled with images and dire warnings of worse things to come, occasionally projecting a truly apocalyptic digital ending. Sci-fi occupies several specialized niches, often couched in an overload of technological imagination, with the social side left underexposed. In between there are a large number of mundane small pathways, some of which turn out to be blind alleys. One can also find useful advice on how to cope with the daily nitty-gritty annoyances caused by digital technologies or how to work around the system. Plenty of marketing pervades the maze, conveying a sense of short-lived excitement and a readiness to be pumped up again to deliver the next and higher dose of digital enhancement.
At times, I felt that I was no longer caught in a maze but in what had become a labyrinth. This was particularly the case when the themes of the books turned to ‘singularity’ and transhumanism, topics that can easily acquire cult status and are permeated by theories, fantasies and speculations that the human species will soon transcend its present cognitive and physical limitations. In contrast to a maze with its tangled and twisted features, dead ends and meandering pathways, a labyrinth is carefully designed to have a centre that can be reached by following a single, unicursal path. It is artfully, and often playfully, arranged around geometrical figures, such as a circle or a spiral. No wonder that labyrinths have inspired many writers and artists to play with these forms and with the meaning-laden concept of a journey. If the points of departure and arrival are the same, the journey between them is expected to have changed something along the way. Usually, this is the self. Hence the close association of the labyrinth with a higher state of awareness or spiritual enlightenment.
The labyrinth is an ancient cultic place, symbolizing a transformation, even if we know little about the rituals that were practised there. In the digital age, the imagined centre of the digital or computational labyrinth is the point where AI overtakes human intelligence, also called the singularity. At this point the human mind would be fused with an artificially created higher mind, and the frail and ageing human body could finally be left behind. The body and the material world are discarded as the newborn digital being is absorbed by the digital world or a higher digital order. Here we encounter an ancient fantasy, the recurring dream of immortality born from the desire to become like the gods, this time reimagined as the masters of the digital universe. I was struck by how closely the discussion of transcendental topics, like immortality or the search for the soul in technology, could combine with very technical matters and down-to-earth topics in informatics and computer science. It seemed that the maze could transform itself suddenly into a labyrinth, and vice versa.
In practice, however, gaps in communication prevail. Those who worry about the potential risks that digital technologies pose for liberal democracies discover that experts working on the risks have little interest in democracy or much understanding of politics. Those writing on the future of work rarely speak to those engaged in the actual design of the automated systems that will either put people out of work or create new jobs. Many computer scientists and IT experts are clearly aware of the biases and other flaws in their products, and they deplore the constraints that come from being part of a larger technological system. But at heart they are convinced that the solutions to many of the problems besetting society will arise from technology. Meanwhile, humanists either retreat to their historical niche or act in defence of humanistic values. The often-stated goal of interdisciplinarity, it seems, is not yet much advanced in practice.
I came away from the maze largely feeling that it is an overrated marketplace where existing products are rapidly displaced by new ones selected primarily for their novelty value. Depending on the mood of potential buyers, utopian or dystopian visions would prevail, subject to market volatility. The labyrinth, of course, is a more intriguing and enchanting place where deep philosophical questions intersect with the wildest speculations. Here, at times, I felt like Ariadne, laying out the threads that would lead me out from the centre of the labyrinth. One of these threads is based on the idea of a digital humanism, a vision that human values and perspectives ought to be the starting point for the design of algorithms and AI systems that claim to serve humanity. It is based on the conviction that such an alternative is possible.
Another thread is interwoven with the sense of direction that takes its inspiration from a remarkable human discovery: the idea of the future as an open horizon, full of as yet unimaginable possibilities and inherently uncertain. The open horizon extends into the vast space of what is yet unknown, pulsating with the dynamics of what is possible. Human creativity is ready to explore it, with science and art at the forefront. It is this conception of the future which is at stake when predictive algorithms threaten to fill the present with their apparent certainty, and when human behaviour begins to conform to these predictions.
The larger frame of this book is set by a co-evolutionary trajectory on which humankind has embarked together with the digital machines it has invented and deployed. Co-evolution means that a mutual interdependence is in the making, with flexible adaptations on both sides. Digital beings or entities like the robots created by us are mutating into our significant Others. We have no clue where this journey will lead or how it will end. However, in the long course of human evolution, it is possible that we have become something akin to a self-domesticating species that has learned to value cooperation and, at least to some extent, decrease its potential for aggression. That capacity for cooperation could now extend to digital machines. We have already reached the point of starting to believe that the algorithm knows us better than we know ourselves. It then comes to be seen as a new authority to guide the self, one that knows what is good for us and what the future holds.