
Trusting Others


Having considered the vexing question of whether we can trust ourselves, we should now turn our attention to trusting others. In this context, we are still talking about humans rather than institutions or computers, though we will be applying these lessons to computers and systems. What is more, as we noted when discussing cognitive bias, our assumptions about others—and the systems they build—will have an impact on how we design and operate systems involved with trust. Given the huge corpus of literature in this area, we will not attempt to survey much of it, but it is worth considering whether there are any points we have already encountered that may be useful to us, or any related work that might cause us to sit back and look at our specific set of interests in a different light.

The first point to bear in mind when thinking about trusting others is, of course, all that we have learned from the discussions of cognitive bias in the previous section. In other words, other humans are just as prone to cognitive bias as we are, and just as unaware of it. Whenever we consider a trust relationship with another human, or consider a trust relationship that someone else has defined or designed—a relationship, for instance, that we are reviewing for them or another entity—we have to realise not only that they may be acting irrationally but also that they are likely to believe they are acting rationally, even given evidence to the contrary.40

Stepping away from the complexity of cognitive bias, what other issues should we examine when we consider whether we can trust other humans? We looked briefly, at the beginning of this chapter, at some of the definitions preferred in the literature on trust between humans, and it is clear both that there is too much to review here and that much of it will not be relevant. Nevertheless, it is worth considering—as we did with cognitive bias—whether any particular concerns are worthy of examination. We noted, when looking at the Prisoner's Dilemma, that some strategies are more likely to yield positive results than others. Axelrod's work showed that increasing opportunities for cooperation can improve outcomes, but given that the Prisoner's Dilemma sets out as one of its conditions that communication is not allowed, such cooperation must be tacit. Since we are considering a wider set of interactions, there is no need for us to adopt this condition (and some of the literature that we have already reviewed seems to follow this direction), and it is worth being aware of work that specifically considers what happens when various parties are allowed to communicate.
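To make this concrete, here is a minimal sketch of an iterated Prisoner's Dilemma in Python (ours, not the book's; the payoff values are the conventional 3/0/5/1 set, and the strategy and function names are illustrative). Two tit-for-tat players, who tacitly cooperate by mirroring each other, far outscore two players who always defect, and a lone defector gains almost nothing against tit-for-tat:

PAYOFF = {  # (my_move, their_move) -> my score; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_moves):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    # Run an iterated game; each strategy sees only the other's past moves.
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): sustained cooperation
print(play(always_defect, always_defect))  # (200, 200): mutual defection
print(play(tit_for_tat, always_defect))    # (199, 204): defection gains little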

One study that looked specifically at communication, by Morton Deutsch and Robert M. Krauss,41 examined differences in bargaining when partners can communicate bilaterally, unilaterally, or not at all, and when they can threaten each other bilaterally, unilaterally, or not at all. Their conclusions, brutally relevant during the Cold War period in which they were writing, were that bilateral positions of threat—where both partners could threaten the other—were “most dangerous” and that the ability to communicate made less difference than expected. This suggests an extrapolation to non-human systems that is extremely important: it is possible to build—one hopes unwittingly—positive feedback loops into automated systems that can lead to very negative consequences. Probably the most famous fictional example of this is the game Global Thermonuclear War played in the film WarGames,42 in which an artificial intelligence connected to the US nuclear arsenal nearly starts World War III.
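The shape of such a feedback loop is easy to show in a few lines. The sketch below is our own illustration (it is not taken from the book, nor is it a model from Deutsch and Krauss): two automated agents each set their "threat" posture as an amplified response to the other's last posture, and any amplification factor above 1 makes the postures diverge rather than settle.

def escalate(gain_a=1.1, gain_b=1.1, steps=10):
    # Each agent sets its new "threat" posture as an amplified response
    # to the other's last posture; the updates happen simultaneously.
    threat_a = threat_b = 1.0
    for step in range(1, steps + 1):
        threat_a, threat_b = gain_a * threat_b, gain_b * threat_a
        print(f"step {step:2d}: A={threat_a:6.2f}  B={threat_b:6.2f}")

escalate()  # with both gains above 1, the postures grow without bound

With gains below 1 the same loop damps out; the lesson for system designers is that even a slight tendency towards mutual over-reaction compounds round after round.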

Schneier talks about the impact that moral systems may have on cooperation between humans, and the possibly surprising positive impact that external events—such as terrorist attacks or natural disasters—tend to have on humans' tendency to cooperate with each other.43 Moral systems are well beyond the scope of our interest, but there are some interesting issues associated with how rare and/or major events affect both the design of trust relationships and attacks on them. We will return to these in Chapter 8, “Systems and Trust”.
