
CHAPTER 1

Introduction

In 1999, Robert Rivera slipped on some spilled yogurt in a Vons supermarket in Southern California. The fall shattered his kneecap, and Rivera sought compensation from the supermarket chain—not only to pay his medical bills, but also to compensate for the loss of income, as he had to quit his job due to the injury. However, his effort to negotiate an out-of-court settlement fell short, according to the LA Times [Silverstein, 1999], when the supermarket’s designated mediator produced Rivera’s shopping records. Rivera was a regular Vons customer and had used their loyalty card for several years. The mediator made it clear that should the case go to court, Vons could use Rivera’s shopping record to demonstrate that he regularly bought large quantities of alcohol—a fact that would surely weaken his case (who is to say that Rivera wasn’t drunk when he slipped?). While Vons denied any wrongdoing, Rivera claimed that this threat prompted him to drop the case against the company.

Shopping records are a great example of the minute details that companies are interested in collecting about their customers. At first glance, it looks like a good deal: in exchange for swiping a loyalty card at the checkout,¹ consumers receive anywhere from small discounts to substantial savings on their daily grocery bill. The privacy implications seem negligible. After all, the store already has a record of all the items you are buying right there at checkout, so why worry about the loyalty card that helps you save money? While the difference is not obvious, the loyalty card allows for much more detailed data collection than the payment transaction alone. A regular credit card not issued by the store, or another cashless payment method, may seem just as problematic, but the data flows for such cards are different: the supermarket only receives confirmation of a successful payment, but no direct identifying information about the customer; similarly, the credit card company learns that a purchase of a certain amount was made at the supermarket, but not which items were purchased. Only by also swiping a loyalty card, or by using a combined credit-and-loyalty card, can a store link a customer’s identity to a particular shopping basket and thus track and analyze their shopping behavior over time.
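
To make the difference in these data flows concrete, here is a minimal Python sketch of what each party can record from a single checkout. It is purely illustrative: the party names, record fields, and the Checkout class are our own assumptions, not any retailer’s or card network’s actual data model.

    from dataclasses import dataclass

    @dataclass
    class Checkout:
        store: str
        amount: float
        items: list[str]
        loyalty_id: str | None = None  # None if no loyalty card was swiped

    def records_after(checkout: Checkout):
        """Return the record each party can keep from one transaction."""
        # The card network learns merchant and amount, but not the basket.
        card_network = {"merchant": checkout.store, "amount": checkout.amount}
        # The store always sees the basket, but can only link it to a person
        # (and to that person's past baskets) if a loyalty ID was presented.
        store = {"items": checkout.items,
                 "amount": checkout.amount,
                 "customer": checkout.loyalty_id}  # None -> anonymous basket
        return card_network, store

    # Without a loyalty card, the basket stays anonymous...
    print(records_after(Checkout("Vons", 47.10, ["yogurt", "wine", "wine"])))
    # ...with one, every basket accumulates under the same identity, enabling
    # exactly the longitudinal profiling used against Rivera.
    print(records_after(Checkout("Vons", 47.10, ["yogurt", "wine", "wine"],
                                 loyalty_id="card-12345")))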

So what is the harm? Most of us do not regularly buy “large quantities” of alcohol, so surely we would never run into Robert Rivera’s problem of having our data used “against us.” Take the case of the American firefighter Philip Scott Lyons. A long-time customer of the Safeway supermarket chain, Lyons was arrested in August 2004 and charged with attempted arson [Schneier, 2005]. Someone had tried to set fire to Lyons’ house, and the fire starter found at the scene matched fire starters Lyons had previously purchased with his Safeway Club Card. Did he start the fire himself? Luckily for Lyons, all charges against him were eventually dropped in January 2005, when another person confessed to the arson attempt. Yet for over six months, Lyons was under heavy suspicion of having set fire to his own home—a suspicion particularly damaging for a firefighter! A similar incident occurred in Switzerland in 2004, when police found a supermarket-branded contractor’s tool at the scene of a fire in the Canton of Berne. The local court forced the corresponding supermarket chain, Migros, to release the names of all 113 individuals who had bought such a tool in its stores. Eventually, all 113 suspects were cleared, as none of the suspicions against them could be substantiated [20 Minuten].

In both the Safeway and the Migros cases, all customers who had bought the item in question (fire starters and a contractor’s tool, respectively) instantly became suspects in a criminal investigation. All were ultimately cleared of the accusations against them, although, particularly in the case of firefighter Lyons, the tarnished reputation that goes with such a suspicion is hard to rebuild. News stories tend to focus on suspects rather than on less exciting acquittals—the fact that one’s name is eventually cleared might not get the same attention as the initial suspicion. It is also often much easier to become listed as a suspect in a police database than to have such an entry removed again after an acquittal. For example, until recently, the federal police in Switzerland would only delete such an entry if the suspect brought forward clear evidence of their innocence. If, however, a suspect was acquitted simply for lack of evidence to the contrary—as in the case of the Migros tool—the entry would remain [Rehmann, 2014].

The three cases described above are examples of privacy violations, even though none of the data disclosures (Vons’ access to Robert Rivera’s shopping records, or the police access to the shopping records in the U.S. and in Switzerland) was illegal. In all three cases, data collected for one purpose (“receiving store discounts”) was used for another purpose (as a perceived threat to tarnish someone’s reputation, or as an investigative tool to identify potential suspects). The supermarket customers in these cases thought nothing of using their loyalty cards to record their purchases—after all, what should be so secret about buying liquor (perfectly legal if you are over 21 in the U.S.), fire starters (sold in the millions to light BBQs all around the world), or work tools? None of the customers involved had done anything wrong, yet the data recorded about them put them on the defensive until they could prove their innocence.

A lot has happened since Rivera and Lyons were “caught” in their own data shadow—the personal information unwittingly collected about them in companies’ databases. In the 10–15 years since, technology has continued to evolve rapidly. Today, Rivera might use his Android phone to pay for all his purchases, letting not only Vons but also Google track his shopping behavior. Lyons might instead use an Amazon Echo² to ask Alexa, Amazon’s voice assistant, to order his groceries from the comfort of his home—giving police yet another shopping record to investigate. In fact, voice activation is becoming ubiquitous: many smartphones already feature “always-on” voice commands, which means they effectively listen in on all our conversations in order to identify a particular activation keyword.³ Any spoken commands (or queries) are sent to a cloud server for analysis and are often stored indefinitely. Many other household devices, such as TVs and game consoles⁴ or home appliances and cars,⁵ will soon do the same.
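
As a rough illustration of this “always-on” architecture, the following Python sketch shows why only a small wake-word detector needs to run on the device, while everything spoken after the activation keyword leaves it. The function names and the simulated audio frames are hypothetical stand-ins, not any vendor’s actual pipeline.

    from typing import Iterable

    WAKE, END = b"<wake>", b"<end>"

    def heard_wake_word(frame: bytes) -> bool:
        """Stand-in for a small on-device keyword spotter: the only
        component that sees *all* audio, but it only answers yes/no."""
        return frame == WAKE

    def send_to_cloud(utterance: list[bytes]) -> str:
        """Stand-in for a vendor's cloud speech service, where the
        recording may be analyzed and retained indefinitely."""
        print(f"uploading {len(utterance)} audio frames")
        return "transcript"

    def assistant(mic_frames: Iterable[bytes]) -> None:
        """Discard audio locally until the wake word is heard, then
        ship the following utterance off-device."""
        capturing, utterance = False, []
        for frame in mic_frames:
            if not capturing:
                capturing = heard_wake_word(frame)  # every frame passes here
            elif frame == END:
                send_to_cloud(utterance)  # privacy boundary: audio leaves
                capturing, utterance = False, []
            else:
                utterance.append(frame)

    # Simulated stream: ambient chatter is dropped on the device; only
    # the post-wake-word query is uploaded (and possibly stored).
    assistant([b"chatter", b"chatter", WAKE, b"order", b"fire starters", END])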

It is easy to imagine that a future populated with an ever-increasing number of mobile and pervasive devices, recording our minute goings and doings, will significantly expand the amount of information collected, stored, processed, and shared about us by both corporations and governments. The vast majority of this data is likely to benefit us greatly—making our lives more convenient, efficient, and safe through custom-tailored services that anticipate what we need, where we need it, and when we need it. But beneath all this convenience, efficiency, and safety lurks the risk of losing control and awareness of what is known about us in the many different contexts of our lives. Eventually, we may find ourselves in a situation like Rivera’s or Lyons’, where something we said or did is misinterpreted and held against us, even if the activity was perfectly innocuous at the time. Even more concerning, while the privacy implications in the examples above manifested as explicit harms, privacy harms more often manifest as an absence of opportunity, which may go unnoticed even though it can substantially affect our lives.

1.1 LECTURE GOALS AND OVERVIEW

In this book we dissect and discuss the privacy implications of mobile and pervasive computing technology. For this purpose, we look not only at how this technology affects our expectations of—and ability to enjoy—privacy, but also at what constitutes “privacy” in the first place, and why we should care about maintaining it.

A core aspect is the question: what do we actually mean when we talk about “privacy”? Privacy is a term that is intuitively understood by everyone, but at the same time its actual meaning may differ quite substantially—among different individuals, but also for the same individual in different situations [Acquisti et al., 2015]. In the examples discussed above, the problems superficially hinged on the interpretation or misinterpretation of facts (Robert Rivera allegedly being an alcoholic, and Philip Lyons being wrongfully accused of arson, based on their respective shopping records), but ultimately the real issue is the use of personal information for purposes not originally foreseen (nor authorized). In those examples, privacy was thus about being “in control”—or, more accurately, about the loss of control—of one’s data, as well as of the particular selection of facts known about oneself. However, other, often more subtle, issues exist that may rightfully be considered “privacy issues” as well. Thus, in this Synthesis Lecture we first closely examine the two constituents of the problem—privacy (Chapter 2) and mobile and pervasive computing technology (Chapter 3)—before discussing their intersection and illustrating the resulting challenges (Chapter 4). We then discuss how those privacy challenges can potentially be addressed in the design of mobile and pervasive computing technologies (Chapter 5), and conclude with a summary of our main points (Chapter 6).

1.2 WHO SHOULD READ THIS

When one of the authors of this lecture was a Ph.D. student (some 15 years ago), he received a grant to visit several European research projects working in the context of a large EU initiative on pervasive computing—the “Disappearing Computer Initiative” [Lahlou et al., 2005]. The goal of the grant was to harness the collective experience of dozens of internationally renowned researchers who spearheaded European research in the area, in order to draft a set of “best practices” for creating future pervasive services with privacy in mind. In this respect, the visits were a failure: almost none of the half-dozen projects visited had any suggestions for building privacy-friendly pervasive systems. However, the visits did surface an intriguing set of excuses for why the computer scientists and engineers working in the area felt that privacy was of no concern to them.

1. Some researchers found it best if privacy concerns (and their solutions) were regulated socially, not technically: “It’s maybe about letting [users of pervasive technology] find their own ways of cheating.”

2. A large majority of researchers found that others were much more qualified (and required) to think about privacy: “For [my colleague] it is more appropriate to think about [security and privacy] issues. It’s not really the case in my case.”

3. Another large group of researchers thought of privacy simply as a problem that could, in the end, be solved trivially: “All you need is really good firewalls.”

4. Several researchers preferred not to think about privacy at all, as doing so would interfere with building interesting systems: “I think you can’t think of privacy… it’s impossible, because if I do it, I have troubles with finding [a] Ubicomp future.”

With such excuses, privacy might never be incorporated into mobile and pervasive systems. If privacy is believed to be impossible, someone else’s problem, trivial, or unnecessary, it will remain an afterthought, never properly integrated into the algorithms, implementations, and processes surrounding mobile and pervasive computing systems. This is likely to have a substantial impact on the adoption and perception of those technologies. Furthermore, privacy laws and regulations around the world require technologists to pay attention to, and mitigate, the privacy implications of their systems.

The prime target audience of this lecture is hence researchers and practitioners working in mobile and pervasive computing who want to better understand and account for the nuanced privacy implications of the technology they are creating, in order to avoid falling for the fallacies above. A deep understanding of potential privacy implications helps in addressing them early in the design of new systems.

At the same time, researchers working in the areas of privacy and security in general—but without a background in mobile and pervasive systems—might want to read this lecture in order to learn about the core properties and the specific privacy challenges within the mobile and pervasive computing domains. Last but not least, graduate and undergraduate students interested in the area might want to read this Synthesis Lecture to get an overview and deeper understanding of the field.

¹ If one uses a store-issued credit card, even that extra step disappears.

² Amazon Echo is an example of a class of wireless “smart” speakers that listen and respond to voice commands (see https://www.amazon.com/echo/); Google Home is a similar product from Google (see https://store.google.com/product/google_home).

³ All major smartphone platforms have supported such voice commands since 2015: Apple’s Siri, Google Assistant, and Microsoft Cortana.

⁴ Samsung TVs and the Xbox One were early devices that supported always-on voice recognition [Hern, 2015].

⁵ At CES 2017, multiple companies presented voice-activated home and kitchen appliances powered by Amazon Alexa, and multiple car manufacturers announced the integration of Amazon Alexa or Google Assistant into their new models [Laughlin, 2017].
