What Is Agency?
When you write a computer program that prints out “Hello, world!”, who is “saying” those words: you or the computer? This may sound like an idle philosophical question, but it is more than that: we need to be able to talk about entities as part of our definition of trust, and in order to do that, we need to know what entity we are discussing.
What exactly, then, does agency mean? It means acting for someone: being their agent—think of what actors' agents do, for example. When we engage a lawyer or a builder or an accountant to do something for us, we set very clear boundaries about what they will be doing on our behalf. This is to protect both us and the agent from unintended consequences. There exists a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent for another person or organisation. There are contracts and agreed restitutions—basically, punishments—for when things go wrong. Say that my accountant buys 500 shares in a bank with my money, and then I turn around and say that they never had the authority to do so: if we have set up the relationship correctly, it should be entirely clear whether or not the accountant had that authority and whose responsibility it is to deal with any fallout from that purchase.
The situation is not so clear when we start talking about computer systems and agents. To think a little more about this question, here are two scenarios:
In the classic film WarGames, David Lightman (Matthew Broderick's character) has a computer that works through a list of telephone numbers, dialling each one and recording it for later investigation if it is answered by another machine that attempts to perform a handshake. Do we consider the automatic dialling performed by Lightman's computer to be an act carried out with agency? Or does agency only arise when the computer connects to another machine, or when it records that machine's details? I suspect that most people would not argue that the computer is acting with agency once Lightman gets it to complete a connection and interact with the other machine (that seems very intentional on his part, and he has taken control), but what about before?
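Purely as an illustration of where the boundary might lie, a minimal sketch of such a "war-dialling" loop might look like the following. The Modem interface and all names below are invented for this sketch (the film, of course, shows no code); the question of agency then becomes a question about which of these steps we attribute to the computer and which to its operator.

    # Hypothetical sketch only: the Modem interface and all names are invented
    # for illustration; this is not code from the film.
    class Modem:
        """Stand-in for a modem driver: dial() returns a connection or None."""
        def dial(self, number): ...
        def hangup(self): ...

    def scan_numbers(modem, numbers):
        """The fully automated part: dial each number and record those that are
        answered by another machine attempting a handshake."""
        interesting = []
        for number in numbers:
            connection = modem.dial(number)            # no human in the loop here
            if connection and connection.handshake():  # another computer answered
                interesting.append(number)             # recorded for later investigation
            modem.hangup()
        return interesting

    # Lightman's deliberate, interactive session with one of the recorded
    # machines happens outside this loop: that is the point at which most
    # people would say the human, rather than the program, is acting.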
Google used to run automated programs against messages received as part of the Gmail service.5 The programs were looking for information and phrases that Google could use to serve ads. The company were absolutely adamant that they, Google, were not doing the reading: it was just the computer programs.6 Quite apart from the ethical concerns that might be raised, many people would (and did) argue that Google, or at least the company's employees, had imbued these automated programs with agency so that philosophically—and probably legally—the programs were performing actions on behalf of Google. The fact that there was no real-time involvement by any employee is arguably unimportant, at least in some contexts.
This all matters because in order to understand trust, we need to identify an entity to trust. One current example of this is self-driving cars: whose fault is it when one goes wrong and injures or kills someone? Equally, when the software in certain Boeing 737 MAX 8 aircraft malfunctioned,7 pilots—who can be said to have trusted the software—and passengers—who equally can be said to have trusted the pilots and their ability to fly the aircraft correctly—lost their lives. What exactly was the entity to which they had a trust relationship, and how was that trust managed?
Another example may help us to consider the question of context. Consider a hypothetical automated defence system for a military base in a war zone. Let us say that, upon identifying intruders via its cameras, the system is programmed to play a recording over loudspeakers, warning them to move away; and, in the case that they do not leave within 30 seconds of a warning, to use physical means up to and including lethal force to stop them proceeding any further. The base commander trusts the system to perform its job and stop intruders: a trust relationship exists between the base commander and the automated defence system. Thus, in the language of our definition of trust:
“The base commander holds an assurance that the automated defence system will identify, warn, and then stop intruders who enter the area within its camera and weapon range”.
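To make the brittleness that follows more concrete, here is a minimal sketch of how such behaviour might be encoded. The 30-second grace period and the escalation steps come from the example above; the sensor and actuator interfaces (loudspeaker, weapons, clock) are invented for illustration. Note that the context itself, a war zone with fixed rules of engagement, appears only implicitly: it is baked in as a constant and a hard-wired sequence of steps.

    # Hypothetical sketch of the defence system's decision logic. The delay and
    # the escalation come from the example above; the interfaces are invented.
    WARNING_GRACE_PERIOD_SECONDS = 30   # implicitly assumes war-zone rules of engagement

    def handle_intruder(intruder, loudspeaker, weapons, clock):
        loudspeaker.play_warning()                  # warn the intruder to move away
        clock.wait(WARNING_GRACE_PERIOD_SECONDS)    # fixed grace period
        if intruder.has_left_area():
            return                                  # intruder complied; nothing more to do
        # Escalate, up to and including lethal force. Nothing here asks who the
        # intruder is, whether the rules of engagement still apply, or whether
        # leaving the area is even possible for them.
        weapons.engage(intruder)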
We have a fair amount of context already embedded within this example. We stated up front that the base is in a war zone, and we have mentioned the range of the cameras and weapons. A problem arises, however, when the context changes. What if, for instance:
The base is no longer in a war zone, and rules of engagement change
Children enter the coverage area who do not understand the warnings or are unable to leave the area
A surge of refugees enters the area—so many that those at the front are unable to move, despite hearing and understanding the warning
These may seem to be somewhat contrived examples, but they serve to show how brittle trust relationships can be when contexts change. If the entity being trusted with defence of the base were a soldier, we would hope the soldier could be much more flexible in reacting to these sorts of changes, or would at least know that the context had changed and that protocol dictated contacting a superior or other expert for new orders. The same is not true for computer systems. They operate in specific contexts; unless they are architected, designed, and programmed to understand not only that other contexts exist but also how to recognise a change in context and how their behaviour should change in a new context, the trust relationships that other entities have with them are at risk. This can be thought of as an example of programmatically encoded bias: only certain contexts were considered in the design of the system, so inflexibility is inherent in the system when other contexts come into play.
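By contrast, a system designed with the possibility of other contexts in mind needs, at a minimum, an explicit representation of the context it believes it is operating in and a rule for what to do when that belief can no longer be relied upon. A sketch of the idea, again with invented names and deliberately over-simplified, might look like this:

    from enum import Enum, auto

    class Context(Enum):
        WAR_ZONE = auto()    # the only context the original design considered
        PEACETIME = auto()
        UNKNOWN = auto()     # the system cannot tell which rules currently apply

    def decide(context):
        """Act autonomously only in the context the system was designed for;
        in any other context, defer to a human who can revise or suspend the
        trust relationship (reprogram or switch off the system)."""
        if context is Context.WAR_ZONE:
            return "follow automated protocol"    # the designed-for case
        return "refer to base commander"          # context has changed: a human decides

Even this tiny example shows where the hard work lies: deciding reliably which Context value currently holds is exactly the problem of recognising that the context has changed.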
In our example of the automated defence system, at least the base commander or empowered subordinate has the opportunity to realise that a change in context is possible and to reprogram or switch off the system: the entity who has the relationship to the system can revise the trust relationship. A much bigger problem arises when both entities are actually computing systems and the context in which they are operating changes or, just as likely, they are used in contexts for which they were not designed—or, put another way, in contexts their designers neglected to imagine. How to define such contexts, and the importance of identifying when contexts change, will feature prominently in later chapters.