The road ahead: how to live forward and understand life backwards

Scientific predictions are considered the hallmark of modern science. Notably, physics advances by inventing new theoretical concepts and the instruments to test predictions derived from them. The computational revolution that began in the middle of the last century has been boosted by the vastly increased computational power and the Deep Learning methods that took off in the twenty-first century. Together with access to an unprecedented and still growing amount of data, these developments have extended the power of predictions and their applicability across an enormous range of natural and social phenomena. Scientific predictions are no longer confined to science.

Since then, predictive analytics has become highly profitable for the economy and has pervaded the entire social fabric. The operation of algorithms underlies the functioning of technological products that have disrupted business models and created new markets. Harnessed by the marketing and advertising industry, instrumentalized by politicians seeking to maximize votes, and quickly adopted by the shadowy world of secret services, hackers and fraudsters exploiting the anonymity of the internet, predictive analytics has convinced consumers, voters and health-conscious citizens that these powerful digital instruments are there to serve our needs and latent desires.

Much of their successful spread and eager adoption is due to the fact that the power of predictive algorithms is performative. An algorithm has the capability to make happen what it predicts when human behaviour follows the prediction. Performativity means that what is enacted, pronounced or performed can affect action, as shown in the pioneering work on the performativity of speech acts and non-verbal communication by J. L. Austin, Judith Butler and others. Another well-known social phenomenon is captured in the Thomas theorem – 'If men define situations as real, they are real in their consequences' – which dates back to 1928 and was later reformulated by Robert K. Merton as the self-fulfilling prophecy. The time has come to acknowledge what sociologists have long known and to apply it to predictive algorithms as well.
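
How such a feedback loop plays out can be made concrete with a minimal simulation – a sketch of my own, with an invented conformity rate and base rate, not a model taken from the literature. A published forecast sways the conformist part of a population, and the forecast is then 'confirmed' by the very behaviour it induced:

```python
import random

def run(initial_prediction, conformity=0.6, rounds=5, n=10_000, seed=1):
    """Toy model of a performative forecast: 'conformists' adopt a product
    whenever the published prediction says adoption will be high, so the
    forecast largely creates the outcome it claims to foresee."""
    rng = random.Random(seed)
    prediction = initial_prediction
    for t in range(rounds):
        adopters = 0
        for _ in range(n):
            if rng.random() < conformity:
                adopters += prediction > 0.5    # follows the forecast
            else:
                adopters += rng.random() < 0.3  # acts on a fixed base rate
        observed = adopters / n
        print(f"round {t}: predicted {prediction:.2f} -> observed {observed:.2f}")
        prediction = observed                   # next forecast = last outcome

run(0.9)  # a high forecast stabilizes near 0.72 ...
run(0.1)  # ... the same population, shown a low forecast, stabilizes near 0.12
```

Two identical populations end up in different stable states, distinguished only by the prediction they were shown; the prediction is 'accurate' in both cases because it made itself so.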

The propensity of people to orient themselves in relation to what others do, especially in unexpected or threatening circumstances, enhances the power of predictive algorithms. It magnifies the illusion of being in control. But if the instrument gains the upper hand over understanding, we lose the capacity for critical thinking. We end up trusting the automatic pilot while flying blindly in the fog. There are, however, situations in which it is crucial to deactivate the automatic pilot and exercise our own judgement as to what to do.

When visualizing the road ahead, I see a situation where we have created a highly efficient instrument that allows us to follow and foresee the evolving dynamics of a wide range of phenomena and activities, but where we largely fail to understand the causal mechanisms that underlie them. We rely increasingly on what predictive algorithms tell us, especially when institutions begin to align with their predictions, often unaware of the unintended consequences that will follow. We trust not only the performative power of predictive analytics but also that it knows which options to lay out for us, again without considering who has designed these options and how, or that there might be other options equally worth considering.

At the same time, distrust of AI creeps in and concerns grow. Some of them, like fears about surveillance or the future of work, are well known and widely discussed. Others are less obvious. When self-fulfilling prophecies begin to proliferate, we risk returning to a deterministic worldview in which the future appears predetermined and hence closed. The space vital to imagining what could be otherwise begins to shrink. Both the motivation and the ability to stretch the boundaries of imagination are curtailed. Relying purely on the efficiency of prediction obscures the need to understand why and how. The risk is that everything we treasure about our culture and values will atrophy.

Moreover, in a world governed by predictive analytics there is no longer either a place or a need for accountability. When political power becomes unaccountable to those over whom it is exercised, we risk the destruction of liberal democracy. Accountability rests on a basic understanding of cause and effect. In a democracy, this is framed in legal terms and is an integral part of democratically legitimated institutions. If this is no longer guaranteed, surveillance becomes ubiquitous. Big data gets even bigger and data is acquired without understanding or explanation. We become part of a fine-tuned and interconnected predictive system that is dynamically closed upon itself. The human ability to teach others what we know and have experienced begins to resemble that of a machine that can teach itself and invent the rules. Machines have neither empathy nor a sense of responsibility. Only humans can be held accountable, and only humans have the freedom to take on responsibility.

Luckily, we have not yet arrived at this point. We can still ask: Do we really want to live in an entirely predictable world in which predictive analytics invades and guides our innermost thoughts and desires? This would mean renouncing the inherent uncertainty of the future and replacing it with the dangerous illusion of being in control. Or are we ready to acknowledge that a fully predictable world is never achievable? Then we would have to muster the courage to face the danger that a falsely perceived deterministic world implies. This book has been written as an argument against the illusion of a wholly predictable world and for the courage – and wisdom – needed to live with uncertainty.

Obviously, my journey does not end there. 'Life can only be understood backwards, but it must be lived forward.' This quotation from Søren Kierkegaard awaits interpretation in relation to our movements between online and offline worlds, between the virtual self, the imagined self and the 'real' self. How does one live forward under these conditions, given their opportunities and constraints? The quotation implies a disjunction between Life as an abstraction that transcends the personal, and living as the conscious experience that fills every moment of our existence. With the stupendous knowledge we now have about Life in all its diversity, forms and levels, about its origins in the deep past and its continued evolution, is not now the moment to bring this knowledge to bear on how to live forward? The human species has overtaken biological evolution, whose product we still are. Science and technology have enabled us to move forward at accelerating speed along the pathways of a cultural evolution that we are increasingly able to shape.

And yet, here we are, facing a global sustainability crisis with many dire consequences and mounting geopolitical tensions. As I write, we are in the grip of a pandemic, with others to follow if the natural habitats of animals that carry zoonotic viruses capable of spreading to humans continue to be eroded. The deficiencies of our institutions, created in previous centuries and designed to meet challenges different from our own, stare us in the face. The spectre of social unrest and polarized societies has returned, when what is needed is greater social cohesion, equality and social justice if we are to escape our current predicament.

We have embarked on a journey to live forward with predictive algorithms letting us see further ahead. Luckily, we have become increasingly aware of how crucial access to quality data of the right kind is. We are wary of the further erosion of our privacy and recognize that the circulation of wilful lies and hate speech on social media poses a threat to liberal democracy. We put our trust in AI while we also distrust it. This ambivalence is likely to last, for however smart the algorithms we entrust with agency when living forward in the digital age may be, they do not go beyond finding correlations.

Even the most sophisticated neural networks, modelled on a simplified version of the brain, can only detect regularities and identify patterns based on data that comes from the past. No causal reasoning is involved, nor does an AI pretend that it is. How can we live forward if we fail to understand Life as it has evolved in the past? Some computer scientists, Judea Pearl among them, deplore the absence of any search for cause–effect relationships. 'Real intelligence', they argue, involves causal understanding. If AI is to reach such a stage it must be able to reason in a counterfactual way. It is not sufficient merely to fit a curve along an indicated timeline. The past must be opened up in order to understand a sentence like 'what would have happened if …'. Human agency consists in what we do, but understanding what we did in the past in order to make predictions about the future must always involve the counterfactual that we could have acted differently. In transferring human agency to an AI we must ensure that it has the capacity to 'know' this distinction, which is basic to human reasoning and understanding (Pearl and Mackenzie 2018).
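
The difference between fitting a curve and answering 'what would have happened if' can be illustrated with a toy structural model in the spirit of Pearl's do-calculus. The variables and probabilities below are invented for the purpose: a hidden factor Z drives both X and Y, so the two are strongly correlated in the data even though intervening on X changes nothing:

```python
import random

rng = random.Random(42)
N = 100_000

def observe():
    """Confounded world: hidden Z raises the odds of both X and Y."""
    rows = []
    for _ in range(N):
        z = rng.random() < 0.5
        x = rng.random() < (0.8 if z else 0.2)  # Z makes X likely
        y = rng.random() < (0.7 if z else 0.1)  # Z makes Y likely; X itself does nothing
        rows.append((x, y))
    return rows

def p_y_given_x(rows, x_val):
    """Observational conditional P(Y | X = x), read straight off the data."""
    ys = [y for x, y in rows if x == x_val]
    return sum(ys) / len(ys)

def p_y_do_x(x_val):
    """do(X = x): sever the arrow from Z into X. Since Y has no arrow
    from X in this model, x_val is deliberately irrelevant."""
    hits = 0
    for _ in range(N):
        z = rng.random() < 0.5
        hits += rng.random() < (0.7 if z else 0.1)
    return hits / N

rows = observe()
print(p_y_given_x(rows, True) - p_y_given_x(rows, False))  # ~0.36: strong correlation
print(p_y_do_x(True) - p_y_do_x(False))                    # ~0.00: no causal effect
```

A curve fitted to the observational data would confidently 'predict' Y from X; only the causal model knows that acting on X would change nothing.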

The power of algorithms to churn out practical and measurable predictions that are useful in our daily lives – whether in the management of health systems, in automated financial trading, in making businesses more profitable or expanding the creative industries – is so great that we easily sidestep or even forget the importance of the link between understanding and prediction. But we must not yield to the convenience of efficiency and abandon the desire to understand, nor the curiosity and persistence that underpin it (Zurn and Shankar 2020).

Two different ways of thinking about how to advance have long existed. One line of thought traces its lineage to the ancient fascination with automata and, more generally, to the smooth functioning of the machines that have fuelled technological revolutions, with their automated production lines devoted to increasing efficiency and lowering costs. This is where all the promises of automation enter, couched in wild technological dreams and imaginaries. Deep Learning algorithms will continue to equip computers with a statistical ‘understanding’ of language and thus expand their ‘reasoning’ capacity. There is confidence among AI practitioners that work on ethical AI is progressing well. The tacit assumption is that the dark side of digital technologies and all the hitherto unresolved problems will also be sorted out by an ultimate problem-solving intelligence, a kind of far-sighted, benign Leviathan fit to manage our worries and steer us through the conflicts and challenges facing humanity in the twenty-first century.
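
What the statistical 'understanding' of language mentioned above amounts to can be seen in a deliberately primitive sketch of my own (modern systems are incomparably more powerful, but share its basic character): a bigram model that 'predicts' the next word purely from counted co-occurrences in past text, with no grasp of meaning:

```python
from collections import Counter, defaultdict

corpus = "we trust the machine and the machine predicts what we do".split()

# Count which word follows which: pure co-occurrence, no comprehension.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the data; nothing more."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("the"))     # 'machine' – a regularity extracted from past text
print(predict_next("orange"))  # None – outside the data there is nothing to say
```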

The other line of thinking insists that theoretical understanding is necessary and urgent, not only for mathematicians and computational scientists, but also for developing tools to assess the performance and output quality of Deep Learning algorithms and to optimize their training. This requires the courage to approach the difficult questions of 'why' and 'how', and to acknowledge both the uses and the limitations of AI. Since algorithms have huge implications for humans, it will be important to make them fair and to align them with human values. While we can confidently predict that algorithms will shape the future, the question of which kinds of algorithms will do the shaping remains open (Wigderson 2019).
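
The simplest of the assessment tools this line of thinking calls for can be sketched generically (a commonplace held-out evaluation, not a method taken from Wigderson): withhold part of the data and judge a model only by its predictions on what it has never seen:

```python
import random

def evaluate(fit, data, train_frac=0.8, seed=0):
    """Held-out evaluation: fit on one part of the data, score on the rest,
    so a model cannot be judged by the data it has already memorized."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    cut = int(len(data) * train_frac)
    train, test = data[:cut], data[cut:]
    model = fit(train)
    return sum(model(x) == y for x, y in test) / len(test)

def majority_baseline(train):
    """A deliberately dumb 'model': always predict the most common label."""
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return lambda x: guess

data = [(i, i % 2) for i in range(1000)]   # toy task: the parity of an integer
print(evaluate(majority_baseline, data))   # ~0.5: the baseline has learned nothing
```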

Understanding also includes the expectation that we can learn how things work. If an AI system claims to solve problems at least as well as a human, then there is no reason not to expect and demand transparency and accountability from it. In practice, we are far from receiving sufficiently detailed answers as to how the inner representations of an AI work, let alone an answer to the question of cause and effect. The awareness begins to sink in that we are about to lose something connected to what makes us human, however difficult it is to pin down. Maybe the time has come to admit that we are not in control of everything, to humbly concede that our tenuous and risky journey of co-evolution with the machines we have built will be more fecund if we renew our attempt to understand our shared humanity and how we might live together better. We have to continue our exploration of living forward while trying to understand Life backwards, linking the two. Prediction will then no longer only map the trajectories of living forward for us, but will become an integral part of understanding how to live forward. Rather than foretelling what will happen, it will help us understand why things happen.

After all, what makes us human is our unique ability to ask the question: Why do things happen – why and how?
