Trust in Computer Systems and the Cloud - Mike Bursell - Page 35

Identifying the Real Trustee


When security measures are put in place, who puts them there, and for what reason? This might seem like a simple question, but often it is not. In fact, more important than asking “for what” security measures are put in place is the question “for whom are they put in place?” Ross Anderson and Tyler Moore are strong proponents of the study of security economics,51 arguing that microeconomics and game theory are vital studies for those involved in IT security.52 They are interested in questions such as the one we have just examined: where security measures—which will lead to what we termed behaviours—are put in place to benefit not the user interacting with the system but somebody else.

One example is Digital Rights Management (DRM). Much downloadable music or video media is “protected” from unauthorised use through the application of security technologies. The outcome of this is that people who download media that are DRM protected cannot copy them or play them on unapproved platforms or systems. This means, for example, that even if I have paid for access to a music track, I am unable to play it on a new laptop unless that laptop has approved software on it. What is more, the supplier from which I obtained the track can stop my previously authorised access to that track at any time (as long as I am online). How does this help me, the person interacting with the music via the application? The answer is that it does not help me at all but rather inconveniences me: the “protection” is for the provider of the music and/or the application. As Richard Harper points out, “trusting” a DRM system means trusting behaviour that enforces properties of the entity that commissioned it.53 Is this extra protection, which works against me in that it reduces my ease of use, at least provided to me free of charge? Of course not: I, and other users of the service, will end up absorbing this cost through my subscription, a one-off purchase price, or my watching of advertisements as part of the service. This is security economics, where the entity benefiting from the security is not the one paying for it.

When considering a DRM system, it may be fairly clear what actions it is performing. In this case, this may include:

 Decrypting media ready for playing

 Playing the media

 Logging your usage

 Reporting your usage

According to our definition, we might still say that we have a trust relationship to the DRM software, and some of the actions it is performing are in my best interests—I do, after all, want to watch or listen to the media. If we think about assurances, then the trust relationship I have can still meet our definition. I have assurances of particular behaviours, and whether they are in my best interests or not, I know (let us say) what they are.
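The division of those four actions between my interests and the provider's can be made concrete with a short, purely illustrative Python sketch. None of these names corresponds to any real DRM API; the class, its methods, and the XOR "decryption" are all hypothetical stand-ins.

```python
# A hypothetical sketch of the actions a DRM media player performs.
# DrmPlayer and its methods are illustrative only, not a real DRM API.

class DrmPlayer:
    def __init__(self, licence_server):
        self.licence_server = licence_server
        self.usage_log = []  # local record, later reported to the provider

    def decrypt(self, encrypted_media, key):
        # Decrypting media ready for playing: in the user's interest.
        # (Toy XOR cipher, standing in for real content decryption.)
        return bytes(b ^ key for b in encrypted_media)

    def play(self, media):
        # Playing the media: in the user's interest.
        return f"playing {len(media)} bytes"

    def log_usage(self, track_id):
        # Logging your usage: in the provider's interest.
        self.usage_log.append(track_id)

    def report_usage(self):
        # Reporting your usage to the provider: in the provider's interest,
        # and typically an action the user can neither observe nor prevent.
        return {"server": self.licence_server, "tracks": list(self.usage_log)}
```

The point of the sketch is that the first two actions align with the trustor's interests while the last two serve the trustee's commissioning entity, yet all four sit behind the same interface.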

The issue gets murkier when I cannot necessarily discover what behaviour is happening, because if I cannot, then I have no way to know whether it is in my best interests. One might even expect that if behaviours are in my best interests, they would be disclosed to me as part of the description of the actions about which I am deciding to accept assurances. When I have significant concerns that there are behaviours explicitly against my interests, the situation becomes troubling. A large-scale example of this is the trust relationship that governments need to have to critical national infrastructure. The exact definition of critical national infrastructure—often capitalised or abbreviated to CNI—varies between experts and countries but is the collection of core hardware, software, and services that are key to keeping citizens safe and key elements of society functioning. A list might include the following:

 Power generation

 Water and sewerage

 Basic transport networks

 Emergency services

 Healthcare

 Location services (e.g., GPS)

 Telecommunications

 Internet access

For the purposes of many governments, the final two have become so intertwined that they can hardly be separated. What is noteworthy about telecommunications and core Internet capabilities is the small number of suppliers across the world. One of those is Huawei, which is based in the People's Republic of China. The government of the United States, whose relationship with the Chinese state and government can be characterised as a rivalry, if not outright enmity, takes the view that given the nature of the ownership of Huawei, and its base in China, the telecommunications equipment that it manufactures and provides cannot be trusted.

This is a strong stance to take, and the concerns that are expressed are well-defined. The US government asserts that there is a real risk that a telecommunications equipment—and associated software—provider who is based within China may be under enough pressure from the Chinese government to include hidden features that could affect the confidentiality, integrity, or availability of services that are part of the United States' critical national infrastructure. If this were the case, it would allow communications that could be critical to the United States to be eavesdropped on or even tampered with by the Chinese government or those acting for it. The suggestion that the Chinese government would ever exert pressure to insert such capabilities—typically known as back doors—is strongly disputed by the Chinese and Huawei itself. However, to frame these concerns within our definition of trust relationships as well as from the point of view of the US government, there is insufficient assurance that the actions to be taken by such pieces of equipment are as expected and, therefore, the US government has taken the view that there should be no trust relationship formed with equipment that might be supplied by Huawei.

This is an extreme example, but when we see relationships of this type, where there are or may be actions that are hidden from us, it must be appropriate to say that we cannot have assurance and, therefore, should not label this as a proper trust relationship. In order to be adequately informed about entities and whether to form relationships to them, we need to have as much information about actions as possible before a trust relationship is formed, along with assurances about those actions. The problem with this is that one of the key sources of information about an entity is the entity itself, but we cannot trust any information that an entity provides about itself because, of course, we have no trust relationship to it to allow us to do so. This issue and how to mitigate it will be key as we move to deeper examinations about trust between computer systems and discussions around the topics of application programming interfaces (APIs) and open source software.

Anderson and Moore54 align this sort of effect with what economists call externalities. An externality is when there is a cost or benefit to a party who did not choose to incur that cost or benefit.55 Certainly, in the case of the US government's concerns about Huawei telecommunications equipment, any back-door type of behaviour would count as a cost, even if that cost were not directly economic. Let us consider computer systems more generally. Sometimes actions might be performed by an entity (the trustee) without any intention of harm—that is, cost—to the party trusting it (the trustor); but if the trustor does not know about these actions, they have no way to evaluate any possible impact. In this case, the trustor needs to make explicit requirements either to exclude specific actions or even to require that no other actions will be performed beyond the expected ones. This second course of action may seem like the obvious one to take but is actually very difficult.

Many applications, when running, will perform actions that are not core to the functioning of the program itself, which we might call side effects. At the API level, there is a more formal use of this phrase, where actions are performed on data or variables that are not “local” to the function or operator being called. The general case where non-core actions are performed provides us with enough real concern. Two typical examples, which are recurring problems for IT security, will serve to illustrate the problem.

It is a truism that computer programs do not always function as they are designed. For that reason, log files are often collected to allow those who are tasked with managing the programs to understand any problems and maybe to feed back to those who designed and wrote the programs any bugs that are identified. Such logging is generally associated with the actions of the application; but, equally, logging may be performed on the data that is being entered, manipulated, and generated or on user logins and interactions, to allow someone auditing the application and its usage to track how it is being used.56 The danger with which we are concerned is that information is being recorded in logs that should not be. This situation may mean that those who have legitimate access to a particular set of logs also get access to information that is not appropriate. A well-known example would be an application log designed to help a developer debug a payment application: if the log records credit card details, those details are exposed to the developer. The user of the site has a trust relationship to the application within which they should expect that such information is not exposed, but they have no knowledge that this logging is taking place, nor any ability to control or stop it.
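One common mitigation for exactly this danger is to redact sensitive values before a message ever reaches a log file. The sketch below is a minimal illustration of the idea; the regular expression is a deliberately rough stand-in for real card-number detection, and the function name is an assumption.

```python
import re

# Rough stand-in for a card number: 13 to 16 consecutive digits.
# Real detection (e.g., with a Luhn check) would be more careful.
CARD_PATTERN = re.compile(r"\b\d{13,16}\b")

def redact(message):
    # Replace anything card-shaped before the message is written to a
    # log, so a developer reading the log never sees the real number.
    return CARD_PATTERN.sub("[REDACTED]", message)
```

Note that redaction helps the developer's employer as much as the user: it is one of the rare cases where the interests of trustor and trustee in a logging behaviour align.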

Similar problems can occur with backup files. These differ from log files in that they are not intended for consumption by anything other than the application that may need to recover in the event of a problem, but the files need to contain enough data and information for the application to recover all of the state that it needs to continue operation. There is definitely a possible cost here to the user of an application if these files are accessed by unauthorised parties, but at least in this case, there is a possible benefit, too: the application can continue to be used. The question is whether this benefit outweighs the possible cost and, more specifically, whether the trustor even has the ability to make a choice as to whether backup files are stored or has enough information to make an informed choice as to whether they should be. While backup files on a local system are typically accessible to a user—though not always, nor always advertised—the likelihood of this being the case for remote or multi-user systems is significantly reduced. It would be good—that is, in my interests—if I, as trustor, were given the option to back up my own data in this case and insist that any backups generated by the trustee be anonymised or have any critical data removed. But even if I can insist on this, the chances that I can realistically enforce it are very low.
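The anonymisation I might wish to insist on can be sketched in a few lines of Python. This is an illustration of the principle, not any real backup tool: the field names and the marker string are assumptions.

```python
# A sketch of anonymising a backup record before it leaves the
# trustor's control. Field names ("card_number", "email") are
# hypothetical examples of critical data.

SENSITIVE_FIELDS = {"card_number", "email"}

def anonymise(record):
    # Keep what the application needs to restore its state; mask
    # anything whose exposure would be a cost to the user.
    return {key: ("<removed>" if key in SENSITIVE_FIELDS else value)
            for key, value in record.items()}
```

The difficulty noted above remains, of course: even if the trustee agrees to run something like this before writing backups, the trustor generally has no way to verify that it actually did.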

This is, in fact, another example of security economics: the backups are put in place not for my benefit but for the benefit of the entity operating the application or service I am using. Even if I have visibility into the actions they are performing, I have little or no chance or opportunity to influence them in my favour. Sometimes, despite the inability of individuals to have an impact on the practices of those whose services they are using, governments or other regulatory bodies put in place measures that force service providers to adopt practices that do benefit individuals. Good examples of this in the area we have been describing are the European Union's General Data Protection Regulation (GDPR) and the State of California's California Consumer Privacy Act (CCPA), both of which force service providers to protect consumers' data and put in place measures to prevent it from being misused. A slightly weaker type of protection, but one that can help, is the establishment of industry standards aimed at promoting good practice. Historically, however, standards have ended up benefiting industry players—service providers—rather than consumers or customers, who rarely have much—if any—representation on standards bodies.

In our definition of trust, we started with the following statement:

"Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation."

It turns out that establishing that assurance can be more difficult than might be expected and that the performance of actions may also need to specify the non-performance of other actions to ensure that we can fully understand what behaviours we, the trustor, are trusting the other entity, the trustee, to perform. In the next chapter, we will examine trust in even more detail, the impact of different forms of trust, how trust is expressed, and some of the alternatives that may be appropriate in certain contexts.

