The Dangers of Anthropomorphism

There is one last form of trust relationship from humans that we need to consider before we move on. It is not from humans to computer systems exactly, but from humans to computer systems that the humans believe to be other humans. The task of convincing humans that a computer system is human was proposed by Alan Turing,47 who was interested in whether machines can be said to think, in what has become known as the Turing Test (though he called it the Imitation Game). His focus was arguably more on the question of what the machine—we would say computer—was doing in terms of computation, and less on the question of whether particular humans believed they were talking to another human.

The question of whether computers can—or may one day be able to—think was one of the issues that exercised early practitioners of the field of artificial intelligence (AI): specifically, hard AI. Coming at the issue from a different point of view, Rossi48 writes about concerns that humans have about AI. She notes issues such as explainability (how humans can know why AI systems make particular decisions), responsibility, and accountability in humans trusting AI. Her interests seem to be mainly about humans failing to trust—she does not define the term specifically—AI systems, whereas there is a concomitant but opposite concern: that sometimes humans may have too much trust in (that is, have an unjustified trust relationship to) AI systems.

Over the past few years, AI/ML systems49 have become increasingly good at mimicking humans for specific interactions. These are not general-purpose systems but are in most cases aimed at participating in specific fields of interaction, such as telephone answering services. Targeted systems like this have been around since the 1960s: a famous program—what we would now call a bot—known as ELIZA mimicked a therapist (the sketch below illustrates the simple pattern matching on which such programs are based). Interacting with the program—there are many online implementations still available, based on the original version—quickly becomes unconvincing, and it would be difficult for any human to consider that it is truly “thinking”. The same can be said for many systems aimed at specific interactions, but humans can be quite trusting of such systems even if they do not seem to be completely human.

In fact, there is a strange but well-documented effect called the uncanny valley. This is the effect whereby humans feel an increasing affinity for—and, presumably, an increased propensity to trust—entities the more human they look, but only up to a certain point. Past that point, the uncanny valley kicks in, and humans become less happy with the entity with which they are interacting. There is evidence that this effect is not restricted to visual cues but also exists for other senses, such as hearing and audio-based interactions.50 The uncanny valley seems to be an example of a cognitive bias that may provide us with real protection in the digital world, restricting the trust we might extend towards non-human trustees that are attempting to appear human. Our ability to realise that they are non-human, however, may not always be sufficient to allow it to kick in.

Deep fakes, a common term for the output of specialised ML tools that generate convincing, but ultimately falsified, images, audio, or even full video footage of people, are a growing concern for many: not least social media sites, which have identified the trend as a form of potentially damaging misinformation, and those who believed that what they saw was real. Even without these techniques, it appears that media such as Twitter have been used to put out messages—typically around elections—that are not from real people but that, without skilled analysis and correlation with other messages from other accounts, are almost impossible to discredit.
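
For readers who want a concrete sense of how little machinery a targeted bot of this kind needs, the following is a minimal, illustrative sketch in Python of the keyword-and-template pattern matching on which ELIZA-style programs are based. The rules, responses, and names used here are invented for illustration and are not taken from the original ELIZA.

    import random
    import re

    # A few illustrative keyword rules in the style of an ELIZA-like therapist bot.
    # Each pattern captures part of the user's input so it can be reflected back.
    RULES = [
        (re.compile(r"i feel (.+)", re.IGNORECASE),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"i am (.+)", re.IGNORECASE),
         ["Why do you say you are {0}?", "Do you want to be {0}?"]),
        (re.compile(r"because (.+)", re.IGNORECASE),
         ["Is that the only reason?", "Does that explanation satisfy you?"]),
    ]

    # Fallback prompts keep the conversation going when no rule matches,
    # without the program "understanding" anything at all.
    FALLBACKS = ["Please tell me more.", "How does that make you feel?", "I see."]

    def respond(user_input: str) -> str:
        """Return a canned, reflective response based on simple keyword matching."""
        for pattern, templates in RULES:
            match = pattern.search(user_input)
            if match:
                reflected = match.group(1).rstrip(".!?")
                return random.choice(templates).format(reflected)
        return random.choice(FALLBACKS)

    if __name__ == "__main__":
        print(respond("I feel anxious about the cloud."))
        print(respond("The weather is nice today."))

Even a handful of rules like these can sustain a plausible exchange for a few turns, which helps to explain why humans can extend more trust to such systems than their simple internals warrant.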

Anthropomorphism is the term used to describe how humans often ascribe human characteristics to non-human entities; in our case, these entities are computer systems. We may do this for a number of reasons:

 Because humans may have a propensity towards anthropomorphism that allows them better to understand the systems with which they interact, without being consciously aware that the system is non-human

 Because humans are interacting with a system that they clearly understand to be non-human, but they find it easier to interact with it as if it had at least some human characteristics

 Because humans have been deceived by intentionally applied techniques into believing that the system is human

By this stage, we have perhaps stretched the standard use of the term anthropomorphism beyond its normal boundaries: normal usage applies to humans ascribing human characteristics to obviously non-human entities. The danger we are addressing here goes beyond that, as we are also concerned with the possibility that humans may form trust relationships to non-human entities precisely because they believe them to be human: they simply do not have the ability to discriminate (easily) between the real and the generated.
