Trust and Security

Another important topic in our discussion of trust is security. Our core interest, of course, is security in the realm of computing systems, sometimes referred to as cyber-security or IT security. But although security within the electronic and online worlds has its own peculiarities and specialities, it is generally derived from equivalent or similar concepts in “real life”: the non-electronic, human-managed world that still makes up most of our existence and our interactions, even when the interactions we have are “digitally mediated” via computer screens and mobile phones. When we think about humans and security, there is a set of things that we tend to identify as security-related, of which the most obvious and common are probably stopping humans going into places they are not supposed to visit, looking at things they are not supposed to see, changing things they are not supposed to alter, moving things that they are not supposed to shift, and stopping processes that they are not supposed to interrupt. These concepts are mirrored fairly closely in the world of computer systems:

 Authorisation: Stopping entities from going into places

 Confidentiality: Stopping entities from looking at things

 Integrity: Stopping entities from moving and altering things

 Availability: Stopping entities from interrupting processes

Exactly what constitutes a core set of security concepts is debatable, but this is a reasonably representative list. Related topics, such as identification and authentication, allow us to decide whether a particular person should be stopped or allowed to perform certain tasks; and categorisation allows us to decide which things particular humans are allowed to alter, or which places they may enter. All of these will be useful as we begin to pick apart in more detail how we define trust.
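To make these distinctions a little more concrete, here is a minimal sketch in Python; the user store, role names, and permission table are all hypothetical, invented purely for illustration, and no such implementation is prescribed by this book. Identification is the claim of a name, authentication tests that claim against a stored credential, and authorisation (driven here by a simple categorisation of roles, actions, and targets) decides what the authenticated entity may then do.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 from the standard library; a production system might prefer
    # a dedicated scheme such as argon2 or bcrypt.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical user store: identity -> credential and role.
_salt = secrets.token_bytes(16)
USERS = {
    "alice": {"salt": _salt, "pw_hash": hash_password("correct horse", _salt), "role": "clerk"},
}

# Categorisation: which roles may perform which actions on which classes of thing.
PERMISSIONS = {
    ("clerk", "read", "orders"),
    ("clerk", "enter", "stockroom"),
}

def authenticate(identity: str, password: str) -> bool:
    """Authentication: test the claimed identity against its stored credential."""
    record = USERS.get(identity)
    if record is None:
        return False
    candidate = hash_password(password, record["salt"])
    return hmac.compare_digest(candidate, record["pw_hash"])  # constant-time comparison

def authorise(identity: str, action: str, target: str) -> bool:
    """Authorisation: decide whether this identity may perform this action on this target."""
    return (USERS[identity]["role"], action, target) in PERMISSIONS

if authenticate("alice", "correct horse"):
    print(authorise("alice", "read", "orders"))   # True: clerks may look at orders
    print(authorise("alice", "alter", "orders"))  # False: clerks may not change them
```

Note that the decision to stop or allow an action emerges from the combination of these steps, not from any one of them on its own.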

Let us look at one of these topics in a little more detail, then, to allow us to consider its relationship to trust. Specifically, we will examine it within the context of computing systems.

Confidentiality is a property that is often required for certain components of a computer system. One oft-used example is when I want to pay for some goods over the Web. When I visit a merchant, the data I send over the Internet should be encrypted; the sign that it is encrypted is typically the little green shield or padlock that I see on the browser bar by the address of the merchant. We will look in great detail at this example later on in the book, but the key point here is that the data—typically my order, my address, and my credit card information—is encrypted before it leaves my browser and decrypted only when it reaches the merchant. The merchant, of course, needs the information to complete the order, so I am happy for the encryption to last until it reaches their server.
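Before unpicking the steps, it may help to see the same encrypted channel from a programmer's viewpoint rather than a browser's. The following is a minimal sketch using Python's standard-library ssl module; example.com stands in for the hypothetical merchant, and the details are illustrative rather than a description of what any particular browser does.

```python
import socket
import ssl

HOST = "example.com"  # hypothetical merchant address, for illustration only

# A default context verifies the server's certificate chain against the
# system's trusted certificate authorities and checks the hostname.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
        print("server certificate subject:", tls_sock.getpeercert().get("subject"))
        # Anything written to tls_sock from here on (an order, an address,
        # card details) is encrypted before it leaves this machine and is
        # readable only at the merchant's end of the connection.
        request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request.encode())
        print(tls_sock.recv(200))
```

The padlock in the browser bar is, in effect, reporting the successful outcome of exactly this kind of certificate verification and protocol negotiation.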

What exactly is happening, though? Well, a number of steps are involved to get the data encrypted and then decrypted. This is not the place for a detailed description,8 but what happens at a basic level is that my browser and the merchant's server use a well-understood protocol—most likely HTTP + SSL/TLS—to establish enough mutual trust for an encrypted exchange of information to take place. This protocol uses algorithms, which in turn employ cryptography to do the actual work of encryption. What is important to our discussion, however, is that each cryptographic protocol used across the Internet, in data centres, and by governments, banks, hospitals, and the rest, though different, uses the same cryptographic “pieces” as its building blocks. These building blocks are referred to as cryptographic primitives and range from asymmetric and symmetric algorithms through one-way hash functions and beyond. They facilitate the construction of some of the higher-level concepts—in this case, confidentiality—which means that correct usage of these primitives allows for systems to be designed that make assurances about certain properties.
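To give a flavour of how such primitives combine, here is a minimal sketch, assuming the third-party Python cryptography package is available: a symmetric authenticated-encryption primitive (AES-GCM) provides confidentiality and integrity for a message, and a one-way hash from the standard library fingerprints the result. The key-establishment step that an asymmetric primitive or key agreement would normally perform, as in TLS, is deliberately omitted.

```python
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Symmetric primitive: a single shared secret key both encrypts and decrypts.
# In a real protocol such as TLS, an asymmetric primitive or key agreement
# would first be used to establish this key between the two parties.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

nonce = os.urandom(12)                    # must never be reused with the same key
plaintext = b"order: 1x widget; card: ****"
associated = b"session-42"                # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, plaintext, associated)

# One-way hash primitive: a fixed-size fingerprint of the ciphertext, useful
# for logging or deduplication without revealing anything about the content.
print("sha256 of ciphertext:", hashlib.sha256(ciphertext).hexdigest())

# Decryption succeeds only with the right key, nonce, and untampered data;
# any modification makes decrypt() raise cryptography.exceptions.InvalidTag.
recovered = aead.decrypt(nonce, ciphertext, associated)
assert recovered == plaintext
```

The point is not the particular library but the composition: confidentiality here is not a single mechanism but the careful, correct combination of primitives, and the same will apply when we later combine security concepts to reason about trust.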

One lesson we can learn from the world of cryptography is that while using it should be easy, designing cryptographic algorithms is often very hard. While it may seem simple to create an algorithm or protocol that obfuscates data—think of a simple shift cipher that moves all characters in a given string “up” one letter in the alphabet—it is extremely difficult to do it well enough that it meets the requirements of real-world systems. An oft-quoted dictum of cryptographers is, “Any fool can create a cryptographic protocol that they can't defeat”; and part of learning to understand and use cryptography well is, in fact, the experience of designing such protocols and seeing how other people more expert than oneself go about taking them apart and compromising them.
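To see just how weak such a scheme is, the sketch below implements the one-letter shift cipher in a few lines of Python and then defeats it by simply trying every possible shift; the code is illustrative only and is not drawn from any real protocol.

```python
import string

ALPHABET = string.ascii_lowercase

def shift_encrypt(text: str, shift: int = 1) -> str:
    """Move every letter 'up' the alphabet by the given shift, wrapping at z."""
    out = []
    for ch in text:
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

def brute_force(ciphertext: str):
    """Defeat the cipher by trying all 26 possible shifts."""
    return [(s, shift_encrypt(ciphertext, -s)) for s in range(26)]

ciphertext = shift_encrypt("attack at dawn")
print(ciphertext)  # 'buubdl bu ebxo'
for shift, guess in brute_force(ciphertext):
    if "attack" in guess:
        print(f"recovered with shift {shift}: {guess}")
```

The author of such a cipher may well be unable to "defeat" it themselves, yet an attacker facing a keyspace of twenty-six needs no expertise at all.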

Let us return to the topics we noted earlier: authorisation, integrity, etc. None of them defines trust, but we will think of them as acting as building blocks when we start considering trust relationships in more detail. Like the primitives used in encryption, these concepts can be combined in different ways to allow us to talk about trust of various kinds and build systems to model the various trust relationships we need to manage. Also like cryptographic primitives, it is very easy to combine these concepts in ways that do not achieve what we intend, causing confusion and error for those who rely on them.

Why is all of this important? Because trust is important to security. We typically use security to try to enforce trust relationships because humans are not, sadly, fundamentally trustworthy. This book argues that computing systems are not fundamentally trustworthy either, but for somewhat different reasons. It would be easy to think that computing systems are neutral with regard to trust, that they just sit there and do what they do; but as we saw when we looked briefly at agency, computers act for somebody or something, even when the actions they take are unintended9 or not as intended. Equally, they may be maliciously or incompetently directed (programmed or operated). But worst, and most common of all, they are often—usually—unconsciously and implicitly placed into trust relationships with other systems, and ultimately humans and organisations, often outside the contexts for which they were designed. The main goal of this book is to encourage people designing, creating, and operating computer systems to be conscious and explicit in their actions around trust.
