CHAPTER 2

Understanding Privacy

In order to appropriately address privacy issues and challenges in mobile and pervasive computing, we first need to better understand why we—as individuals and as a society—might want and need privacy. What does privacy offer? How does privacy affect our lives? Why is privacy necessary? Answering these questions naturally helps us better understand what “privacy” actually is, e.g., what it means to “be private” or to “have privacy.” Only by examining the value of privacy, beyond our perhaps intuitive perception of it, will we be able to understand what makes certain technology privacy-invasive and how it might be designed to be privacy-friendly.

Privacy is a complex concept. Robert C. Post, Professor of Law and former dean of the Yale Law School, states that “[p]rivacy is a value so complex, so entangled in competing and contradictory dimensions, so engorged with various and distinct meanings, that I sometimes despair whether it can be usefully addressed at all” [Post, 2001]. In this chapter, we aim to untangle the many perspectives on and motivations for privacy. In order to better understand both the reasons for—and the nature of—privacy, we examine privacy from three perspectives. A first understanding comes from a historical overview of privacy, in particular from a legal perspective. Privacy law, albeit only one particular perspective on privacy, certainly is the most codified incarnation of privacy and privacy protections. Thus, it lends itself well as a starting point. Privacy law also has a rich history, with different approaches in different cultures and countries. The legal understanding of privacy has also changed substantially over the years, often because of technological advances. As we discussed in Chapter 1, technology and privacy are tightly intertwined, as technological innovations often “change the playing field,” making certain data practices and incursions on privacy possible that weren’t possible before. Our historical overview hence also includes key moments that prompted new views on what constitutes privacy.

Our second perspective on privacy then steps back from the codification of privacy and examines arguments for and against privacy—the motivation for protecting or curtailing privacy. This helps us to not only understand why we may want privacy, but also what we might lose without privacy. Is privacy something valuable worth incorporating into technology?

With both the historic backdrop and privacy motivations in mind, we then present contemporary conceptualizations of privacy. We will see that there are many views on what privacy is, which can make it difficult to understand what someone is referring to when talking about “privacy.” Precision is important when discussing privacy, in order to ensure a common understanding rather than arguing based on diverging perspectives on what privacy is or ought to be. The discussion of different conceptualizations and understandings of privacy is meant to help us evaluate the often nuanced privacy implications of new technologies.

2.1 CODIFYING PRIVACY

There is certainly no lack of privacy definitions—in fact, this whole chapter is about defining privacy in one way or another. At the outset, however, we take a look at definitions of privacy that have received broader societal support, i.e., by virtue of being actually enshrined in law. This is not meant as legal scholarship, but rather as an overview of what are considered fundamental aspects of privacy worth protecting.

2.1.1 HISTORICAL ROOTS

Privacy is hardly a recent fad. Questions of privacy have been a focus of society for hundreds of years. In fact, references to privacy can already be found in the Bible, e.g., in Luke 12(2–3): “What you have said in the dark will be heard in the daylight, and what you have whispered in the ear in the inner rooms will be proclaimed from the roofs” [Carroll and Prickett, 2008]. The earliest reference in common law1 can be traced back to the English Justices of the Peace Act of 1361, which provided for the arrest of eavesdroppers and peeping toms [Laurant, 2003]. In 1763, William Pitt the Elder, at that time a member of the English parliament, framed the privacy of one’s home in his speech on the Excise Bill as follows [Brougham, 1839]:

The poorest man may in his cottage bid defiance to all the forces of the Crown. It may be frail—its roof may shake—the wind may blow through it—the storm may enter—the rain may enter—but the King of England cannot enter!—all his forces dare not cross the threshold of the ruined tenement.

One of the earliest explicit definitions of privacy came from the future U.S. Supreme Court Justice Louis Brandeis and his colleague Samuel Warren. In 1890, the two published the essay “The Right to Privacy” [Warren and Brandeis, 1890], which created the basis for privacy tort law2 in the U.S. legal system. They defined privacy as “the right to be let alone.” The fact that this definition is so often quoted can probably be attributed in equal parts to its being the first legal text on the subject and to its being easy to memorize. While it encompasses in principle all of the cases mentioned previously, such as peeping toms, eavesdroppers, and trespassers, it is still a very limited definition of privacy. Warren and Brandeis’ definition focuses on only one particular “benefit” of privacy: solitude. As we will see later in this chapter, privacy has other benefits beyond solitude.

Probably the most interesting aspect of Warren and Brandeis’ work from today’s perspective is what prompted them to think about the need for a legal right to privacy at the end of the 19th century:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right ‘to be let alone.’ …Numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops’ [Warren and Brandeis, 1890].


Figure 2.1: The Kodak Camera. George Eastman’s “Snap Camera” made it suddenly simple to take anybody’s image on a public street without their consent.

In this context, Warren and Brandeis’ quote of Luke 12(2–3) (in a translation slightly different from the Bible [Carroll and Prickett, 2008]) sounds like a prescient description of the new possibilities of mobile and pervasive computing. Clearly, neither the Evangelist Luke nor Warren and Brandeis had anything like modern mobile and pervasive computing in mind. In Warren and Brandeis’ case, however, it actually was a reference to a then novel technology—photography. Before 1890, getting one’s picture taken usually required visiting a photographer in their studio and sitting still for a considerable amount of time, otherwise the picture would be blurred. But on October 18, 1884, George Eastman, the founder of the Eastman Kodak Company, received U.S. Patent #306,594 for his invention of the modern photographic film. Instead of having to use a large tripod-mounted camera with heavy glass plates in the studio, everybody could now take Kodak’s “Snap Camera” (see Figure 2.1) out to the streets and take a snapshot of just about anybody—without their consent. It was this rise of unsolicited pictures, which more and more often found their way into the pages of the (at the same time rapidly expanding) tabloid newspapers, that prompted Warren and Brandeis to paint this dark picture of a world without privacy.

Today’s developments in smartphones, wearable devices, smart labels, memory amplifiers, and IoT-enabled smart “things” seem to mirror the sudden technology shifts experienced by Warren and Brandeis, opening up new forms of social interaction that change the way we experience our privacy. However, Warren and Brandeis’ “right to be let alone” looks hardly practical today: given the multitude of interactions in today’s world, we constantly need to deal with people (or rather: services) that do not know us in person and hence require some form of personal information from us in order to judge whether such an interaction would be beneficial. From opening bank accounts, applying for credit, and obtaining a yearly pass for trains or public transportation, to buying goods online—we constantly have to “connect” with others (i.e., give out our personal information) in order to participate in today’s life. Even when we are not explicitly providing information about ourselves, we constantly leave digital traces. Such traces range from what websites we visit or what news articles we read, to surveillance and traffic cameras recording our whereabouts, to our smartphones revealing our location to mobile carriers, app developers, and advertisers. Preserving our privacy through isolation is simply not as much of an option anymore as it was over 100 years ago.

Privacy as a Right

Warren and Brandeis’ work put privacy on the legal map, yet it took another half century before privacy made further legal inroads. After the end of the Second World War, in which Nazi Germany had used detailed citizen records to identify unwanted subjects of all kinds [Flaherty, 1989], privacy became a key human right across a number of international treaties—the most prominent being the Universal Declaration of Human Rights, adopted by the United Nations in 1948, which states in its Article 12 that [United Nations, 1948]:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks.

Similar protections can be found in Article 8 of the Council of Europe’s Convention of 1950 [Council of Europe, 1950], and again in 2000 with the European Union’s Charter of Fundamental Rights [European Parliament, 2000], which for the first time in the European Union’s history sets out in a single text the whole range of civil, political, economic, and social rights of European citizens and all persons living in the European Union [Solove and Rotenberg, 2003]. Article 8 of the Charter, concerning the Protection of Personal Data, states the following [European Parliament, 2000].

1. Everyone has the right to the protection of personal data concerning him or her.

2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.

3. Compliance with these rules shall be subject to control by an independent authority.

The rise of the Internet and the World Wide Web in the early 1990s prompted many to proclaim the demise of national legal frameworks, as their enforcement in a borderless cyberspace seemed difficult, to say the least.3 However, the opposite effect could be observed: at the beginning of the 21st century, many national privacy laws were not only adjusted to the technical realities of the Internet, but also underwent substantial international harmonization, facilitating cross-border enforcement.

Today, more than 100 years after Warren and Brandeis laid the foundation for modern data protection laws, two distinct approaches to legal privacy protection have emerged: the European approach of favoring comprehensive, all-encompassing data protection legislation that governs both the private and the public sector, and the sectoral approach popular in the United States, which favors sector-by-sector regulation in response to industry-specific needs and concerns, in conjunction with voluntary industry self-regulation. In both approaches, however, privacy protection is broadly modeled around what are known as the “Fair Information Practice Principles.”

The Fair Information Practice Principles

If one wanted to put a date on it, modern privacy legislation was probably born in the late 1960s and early 1970s, when governments first began to systematically make use of computers in administration. Alan Westin’s book Privacy and Freedom, published in 1967 [Westin, 1967], had a significant impact on how policymakers would address privacy in the following decades. Clarke [2000] reports how a 1970 German translation of Westin’s book significantly influenced the world’s first privacy law, the “Datenschutzgesetz” (data protection law) of the West German state of Hesse. In the U.S., a Westin-inspired 1973 report of the United States Department of Health, Education, and Welfare (HEW) set forth a code of Fair Information Practice (FIP), which has become a cornerstone of U.S. privacy law [Privacy Rights Clearinghouse, 2004] and equally popular worldwide. The five principles are as follows [HEW Advisory Committee, 1973].

1. There must be no personal data record keeping systems whose very existence is secret.

2. There must be a way for an individual to find out what information about him is in a record and how it is used.

3. There must be a way for an individual to prevent information about him that was obtained for one purpose from being used or made available for other purposes without his consent.

4. There must be a way for an individual to correct or amend a record of identifiable information about him.

5. Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuse of the data.

In the early 1980s, the Organization for Economic Cooperation and Development (OECD) took up those principles and issued “The OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data” [OECD, 1980], which expanded them into eight practical measures aimed at harmonizing the processing of personal data in its member countries. By setting out core principles, the organization hoped to “obviate unnecessary restrictions to transborder data flows, both on and off line.” The eight principles are as follows [OECD, 2013].4

1. Collection Limitation Principle. There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.

2. Data Quality Principle. Personal data should be relevant to the purposes for which they are to be used, and, to the extent necessary for those purposes, should be accurate, complete and kept up-to-date.

3. Purpose Specification Principle. The purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose.

4. Use Limitation Principle. Personal data should not be disclosed, made available or otherwise used for purposes other than those specified in accordance with the Purpose Specification principle except:

(a) with the consent of the data subject; or

(b) by the authority of law.

5. Security Safeguards Principle. Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorised access, destruction, use, modification or disclosure of data.

6. Openness Principle. There should be a general policy of openness about developments, practices and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the data controller.5

7. Individual Participation Principle. Individuals should have the right:

(a) to obtain from a data controller, or otherwise, confirmation of whether or not the data controller has data relating to them;

(b) to have communicated to them, data relating to them

i. within a reasonable time;

ii. at a charge, if any, that is not excessive;

iii. in a reasonable manner; and

iv. in a form that is readily intelligible to them;

(c) to be given reasons if a request made under subparagraphs (a) and (b) is denied, and to be able to challenge such denial; and

(d) to challenge data relating to them and, if the challenge is successful, to have the data erased, rectified, completed or amended.

8. Accountability Principle. A data controller should be accountable for complying with measures which give effect to the principles stated above.

Even though the OECD principles, just as the HEW guidelines before them, carried no legal obligation, they nevertheless constituted an important international consensus that substantially influenced national privacy legislation in many countries in the years to come [Solove and Rotenberg, 2003]. In what Michael Kirby, former Justice of the High Court of Australia, has called the “decade of privacy” [Clarke, 2006], many European countries (and the U.S.) followed the German state of Hesse in passing comprehensive data protection laws—the first national privacy law was passed in Sweden in 1973, followed by the U.S. (Privacy Act of 1974, regulating the processing of personal information by federal agencies), Germany (1977), and France (1978).

The FIPs, while an important landmark in privacy protection, are not without their flaws. Clarke [2000] calls them a “movement that has been used by corporations and governments since the late 1960s to avoid meaningful regulation.” Instead of taking a holistic view on privacy, Clarke finds the FIPs too narrowly focused on “data protection,” targeting only the “facilitation of the business of government and private enterprise” rather than the human rights needs that should be the real goal of privacy protection: “the principles are oriented toward the protection of data about people, rather than the protection of people themselves” [Clarke, 2006]. More concrete omissions of the FIPs are the complete lack of data deletion or anonymization requirements (i.e., once the data has served its purpose), and the absence of clear limits on what may be collected and in what quantities (the FIPs only require that the data collected be “necessary”). Similarly, Cate [2006] notes that, in their translation into national laws, the broad and aspirational fair information practice principles have often been reduced to narrow legalistic concepts, such as notice, choice, access, security, and enforcement. These narrow interpretations of the FIPs focus on procedural aspects of data protection rather than the larger goal of protecting privacy for the benefit of individuals and society.

2.1.2 PRIVACY LAW AND REGULATIONS

Many countries have regulated privacy protections through national laws—often with reference to or based on the fair information practice principles. We provide an overview of those laws with a specific emphasis on the U.S. and Europe, due to their prominent roles in developing and shaping privacy law and their differing approaches to regulating privacy.

Privacy Law and Regulations in the United States

The U.S. Constitution does not lay out an explicit constitutional right to privacy. However, in a landmark case, Griswold v. Connecticut (1965),6 the U.S. Supreme Court recognized a constitutional right to privacy, emanating from the First, Third, Fourth, Fifth, and Ninth Amendments of the U.S. Constitution.7 The First Amendment guarantees freedom of worship, speech, press, assembly, and petition. Privacy under First Amendment protection usually refers to being unencumbered by the government with respect to one’s views (e.g., being able to speak anonymously or keeping one’s associations private). The Third Amendment provides that troops may not be quartered (i.e., allowed to reside) in private homes without the owner’s consent (an obvious relationship to the privacy of the home). The Ninth Amendment declares that the listing of individual rights is not meant to be comprehensive, i.e., that the people have other rights not specifically mentioned in the Constitution [National Archives]. The right to privacy is primarily anchored in the Fourth and Fifth Amendments [Solove and Rotenberg, 2003].

Fourth Amendment: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Fifth Amendment: No person shall be […] compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.

In addition, the Fourteenth Amendment’s due process clause has been interpreted to provide a substantive due process right to privacy.8

Fourteenth Amendment: No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

While the U.S. Constitution recognizes an individual right to privacy, the constitution only describes the rights of citizens in relationship to their government, not to other citizens or companies9 [Cate, 1997]. So far, no comprehensive legal privacy framework exists in the United States that equally applies to both governmental and private data processors. Instead, federal privacy law and regulation follows a sectoral approach, addressing specific privacy issues that arise in certain public transactions or industry sectors [Solove and Schwartz, 2015].

Privacy with respect to the government is regulated by the Privacy Act of 1974, which only applies to data processing at the federal level [Gormley, 1992]. The Privacy Act roughly follows the Fair Information Principles set forth in the HEW report (mentioned earlier in this section), requiring government agencies to be transparent about their data collections and to support access rights. It also restricts what information different government agencies can share about an individual and allows citizens to sue the government for violating these provisions. Additional laws regulate data protection in other interactions with the government, such as the Driver’s Privacy Protection Act (DPPA) of 1994, which restricts states in disclosing or selling personal information from motor vehicle records, or the Electronic Communications Privacy Act (ECPA) of 1986, which extended wiretapping protections to electronic communication.

Privacy regulation in the private sector is largely based on self-regulation, i.e., industry associations voluntarily enact self-regulatory codes for their sector to respect the privacy of their customers. In addition, federal or state privacy laws are passed for specific industry sectors in which privacy problems emerge. For instance, the Family Educational Rights and Privacy Act (FERPA) of 1974 regulates student privacy in schools and universities; and the Children’s Online Privacy Protection Act (COPPA) of 1998 restricts information collection and use by websites and online services directed at children under age 13.

The Health Insurance Portability and Accountability Act (HIPAA) of 1996 gives the Department of Health and Human Services rule-making authority regarding the privacy of medical records. The HIPAA Privacy Rule requires privacy notices to patients and patient authorization for data processing and sharing, limits data processing to what is necessary for healthcare, gives patients data access rights, and prescribes physical and technical safeguards for health records. Commonly, federal privacy laws are amended over time to account for evolving privacy issues. For instance, the Genetic Information Nondiscrimination Act (GINA) of 2008 limits the use of genetic information in health insurance and employment decisions.

Privacy in the financial industry is regulated by multiple laws. The Fair Credit Reporting Act (FCRA) of 1970 governs how credit reporting agencies can use consumer information. It has been most recently amended by the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018, which, as a reaction to the 2017 Equifax data breach, gave consumers the right to free credit freezes to limit access to their credit reports and thus reduce the risk of identity theft. The Gramm-Leach-Bliley Act (GLBA) of 1999 requires that financial institutions store financial information in a secure manner and provide customers with an annual privacy notice, and gives consumers the right to opt out of or limit the sharing of personal information with third parties.

The Telephone Consumer Protection Act (TCPA) of 1991 provides remedies against repeated telephone calls by telemarketers and created the national Do Not Call registry.10 The Controlling the Assault of Non-Solicited Pornography And Marketing (CAN-SPAM) Act of 2003 created penalties for the transmission of unsolicited email and requires that email newsletters and marketing emails contain an unsubscribe link. The Video Privacy Protection Act (VPPA) of 1988 protects the privacy of video rental records.

Those federal privacy laws are further complemented by state laws. For instance, many states have passed RFID-specific legislation that prohibits unauthorized reading of RFID-enabled cards and other devices (e.g., the state of Washington’s Business Regulation Chapter 19.300 [Washington State Legislature, 2009]). The state of Delaware enacted four privacy laws in 2015, namely the Online and Personal Privacy Protection Act (DOPPA), the Student Data Privacy Protection Act (SDPPA), the Victim Online Privacy Act (VOPA), and the Employee/Applicant Protection for Social Media Act (ESMA).

One of the better-known state privacy laws is California’s Online Privacy Protection Act (CalOPPA) of 2004, which imposes transparency requirements, including the posting of a privacy policy, on any website or online service that collects and maintains personally identifiable information from a consumer residing in California. Because California is the most populous U.S. state with a large consumer market, and because of the difficulty of reliably determining an online user’s place of residence, CalOPPA, despite being a state law, affected almost all U.S. websites as well as international websites. In 2018, California became the first U.S. state to enact a comprehensive (i.e., non-sectoral) privacy law. The California Consumer Privacy Act of 2018, which will go into effect in 2020, requires improved privacy notices and a conspicuous opt-out button regarding the sale of consumer information, and grants consumers rights to data access, deletion, and portability.

Due to the fractured nature of privacy legislation, privacy enforcement authority is also divided among different entities, including the Department of Health and Human Services (for HIPAA), the Department of Education (for FERPA), State Attorneys General (for respective state laws), and the Federal Trade Commission (FTC). The FTC, as the U.S. consumer protection agency, has a prominent privacy enforcement role [Solove and Hartzog, 2014], including the investigation of deceptive and unfair trade practices with respect to privacy, as well as statutory enforcement (e.g., for COPPA). The FTC further has enforcement power with respect to Privacy Shield, the U.S.–European agreement for cross-border data transfers. Due to its consumer protection charge, the FTC can also bring privacy-related enforcement actions against companies in industries without a sectoral privacy law [Solove and Hartzog, 2014], such as mobile apps, online advertising, or smart TVs. In addition to monetary penalties, FTC consent decrees typically require companies to submit to independent audits for 20 years and to establish a comprehensive internal security or privacy program. The FTC’s enforcement creates pressure for industries to adhere to their self-regulatory privacy promises and practices.

In addition to federal and state laws, civil privacy lawsuits (i.e., between persons or corporations) are possible. Prosser [1960] documented four distinct privacy torts common in US law,11 i.e., ways for individuals who feel their privacy has been violated to sue the violator for damages:

intrusion upon seclusion or solitude, or into private affairs;

public disclosure of embarrassing private facts;

adverse publicity which places a person in a false light in the public eye; and

appropriation of name or likeness.

In summary, privacy is protected in the U.S. by a mix of sector-specific federal and state laws, complemented by self-regulatory approaches and FTC enforcement in otherwise unregulated sectors. An advantage of this sectoral approach is that the resulting privacy laws are often specific to the privacy issues, needs, and requirements of a given sector; a downside is that such laws are often outpaced by technological advances, requiring periodic amendments.

Privacy Law and Regulation in the European Union

On the other side of the Atlantic, a more civil-libertarian perspective on personal data protection prevails. Individual European states began harmonizing their national privacy laws as early as the mid-1970s. In 1973 and 1974, the Council of Europe12 passed resolutions (73)22 and (74)29, containing guidelines for national legislation concerning private and public databases, respectively [Council of Europe, 1973, 1974]. In 1985, the “Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data” (108/81) went into effect, providing a normative framework for the national privacy protection laws of its member states [Council of Europe, 1981]. Convention 108/81 is open to any country to sign (i.e., not only CoE members), and has since seen countries such as Uruguay, Mauritius, Mexico, and Senegal join.13 While the convention offered a first step toward an international privacy regime, its effect on national laws remained relatively limited [Mayer-Schönberger, 1998].

It was the 1995 Data Protection Directive 95/46/EC [European Parliament and Council, 1995] (in the following simply called “the Directive”) that achieved what Convention 108/81 had set out to do: a lasting harmonization of the various European data protection laws that provided an effective international tool for privacy protection even beyond European borders.

The Directive had two important aspects that advanced its international applicability. On the one hand, it required all EU member states14 to enact national law that provided at least the same level of protection as the Directive stipulated. This European harmonization allowed for a free flow of information among all its member states, as personal data enjoyed the same minimum level of protection set forth by the Directive in any EU country.

On the other hand, the Directive’s Article 25 explicitly prohibited the transfer of personal data into “unsafe third countries,” i.e., countries whose data protection laws did not offer an adequate level of protection as required by the Directive. After European officials made it clear that they intended to pursue legal action against the European branch offices of corporations that transferred personal data of EU residents to their corresponding headquarters in such unsafe third countries, a large number of non-European countries around the world began to adjust their privacy laws in order to become a “safe” country with regard to the Directive, and thus part of the European Internal Information Market. Eventually, a dozen jurisdictions were considered “safe” third countries with respect to personal data transfers: Andorra, Argentina, Canada, Switzerland, the Faroe Islands, the British Crown dependencies of Guernsey, Jersey, and the Isle of Man, Israel, New Zealand, the U.S.,15 and Uruguay.

However, despite its significant impact, the 1995 Directive was woefully ignorant of the rapid technological developments of the late 1990s and early 2000s. It was created before the Web took off, before smartphones appeared, before Facebook and Twitter and Google were founded. It is not surprising then that many criticized it for being unable to cope with those realities [De Hert and Papakonstantinou, 2012]. While the Directive was specifically written to be “technology neutral,” it also meant that it was unclear how it would apply to many concrete technical developments, such as location tracking, Web cookies, online profiling, or cloud computing. In order to bring the European privacy framework more in line with the realities of mobile and pervasive computing, as well as to create a single data protection law that applies in all EU member states, an updated framework was announced in 2012 and finally enacted in early 2016—the General Data Protection Regulation (GDPR). The GDPR then went into effect on May 25, 2018. Its main improvements over the 1995 Directive can be summarized as follows [De Hert and Papakonstantinou, 2012, 2016].

1. Expanded Coverage: As per its Article 3, the GDPR now also applies to companies outside of the EU that offer goods or services to customers in the EU (“marketplace rule”)—the 1995 Directive only applied to EU-based companies (though it attempted to limit data flows to non-EU-based companies).

2. Mandatory Data Protection Officers (DPO): Article 37 requires companies whose “core activities… require regular and systematic monitoring of data subjects on a large scale” to designate a DPO as part of their accountability program; the DPO serves as the main contact for overseeing legal compliance.

3. Privacy by Design: Article 25 requires that all data collection and processing must now follow a “data minimization” approach (i.e., collect only as much data as absolutely necessary), that privacy is provided by default, and that entities use detailed impact assessment procedures to evaluate the safety of their data processing.

4. Consent: Article 7 stipulates that those who collect personal data must demonstrate that it was collected with the consent of the data subject, and that this consent was “freely given.” For example, if a particular piece of data is not necessary for a service, but the service is withheld from customers who decline to provide it, the resulting consent would not qualify as “freely given.”

5. Data Breach Notifications: Article 33 requires those who store personal data to notify national data protection authorities if they are aware of a “break-in” that might have resulted in personal data being stolen. Article 34 extends this to also notify data subjects if the breach “is likely to result in a high risk to the rights and freedoms of natural persons.”

6. New Subject Rights: Articles 15–18 give those whose data is collected more explicit rights, such as the right to obtain a copy of the personal data undergoing processing, the right to have inaccurate data rectified, or the right to have personal data deleted (“the right to be forgotten”).

How these changes will affect privacy protection in Europe and beyond will become clearer over the coming years. When the GDPR finally came into effect in May 2018, its most visible effect was a deluge of email messages asking people to confirm that they still wanted to be on a mailing list (i.e., giving “unambiguous” consent, as per Article 4) [Hern, 2018, Jones, 2018], as well as a pronounced media backlash questioning both the benefits of the regulation [Lobo, 2018] and its (seemingly extraordinarily high) costs [Kottasová, 2018]. Many of the new principles in the GDPR sound simple, but can be challenging to implement in practice (e.g., privacy by design, the right to erasure). We will discuss some of these challenges in Chapter 6. Also, the above-mentioned Council of Europe “Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data” (108/81) [Council of Europe, 1981] has recently been updated [Council of Europe, 2018] and is now being touted as a first step for non-EU countries to receive the coveted status of a “safe third country” (adequacy assessment) [European Commission, 2017] with respect to the new GDPR [Greenleaf, 2018].

Privacy Law and Regulation in Other Countries

Beyond the U.S. and Europe, many countries have adopted data protection or privacy laws [Greenleaf, 2017, Swire and Ahmad, 2012]. An increasing number of countries have been adopting comprehensive data protection laws, which not only follow the European model but are often directly based on EU Directive 95/46/EC or the GDPR. For instance, the data protection laws of Switzerland, Russia, and Turkey are similar to the EU Directive. Mexico’s 2010 Federal Law on the Protection of Personal Data Held by Private Entities also follows a comprehensive approach similar to the EU Directive, in particular with respect to data subjects’ rights, obligations of data controllers and processors, and international data transfer requirements. The Mexican law further incorporates the Habeas Data concept common in Latin American legal regimes [Swire and Ahmad, 2012]. Habeas Data refers to the constitutional right that citizens “may have the data” that is stored about them, i.e., they have the right to pose habeas data requests to entities to learn whether and what information is stored about them and to request correction. The Mexican law requires data controllers to designate a contact for such requests and to process them in a timely manner. The GDPR’s data portability right (Art. 20, GDPR) provides a similar right for data subjects and obligations for data controllers. In 2018, Brazil adopted the General Data Privacy Law (LGPD), which goes into effect in 2020. The LGPD closely mirrors the GDPR in its key provisions.

Canada also employs a comprehensive data protection approach. PIPEDA, the Personal Information Protection and Electronic Documents Act, regulates data protection for the private sector in Canada. A key difference between the GDPR and PIPEDA is that under PIPEDA individual informed consent is the only basis for lawful data collection, processing, and sharing, with limited exceptions [Banks, 2017].

Australia employs a co-regulatory model. Australia’s Federal Privacy Act defines National Privacy Principles for government agencies and the private sector. Industries then define self-regulatory codes that reflect the National Privacy Principles, with oversight by the Australian National Privacy Commissioner.

The Privacy Framework of the Asia-Pacific Economic Cooperation (APEC) aims to promote interoperability of privacy regimes across the 21 APEC countries. In contrast to Europe’s GDPR, the APEC Privacy Framework [APEC, 2017] is not a law, but rather defines nine privacy principles, based on the OECD privacy guidelines, that APEC countries can choose to subscribe to. The Framework further defines Cross-Border Privacy Rules (CBPR) as a code of conduct to enable cross-border data transfers among countries committing to the CBPR. The CBPR requires a local accountability agent (i.e., a governmental institution) that certifies organizations’ CBPR compliance. As of 2018, six APEC countries are participating in the CBPR, namely the U.S., Japan, Mexico, Canada, South Korea, and Singapore. In addition to the CBPR, the APEC Cross-border Privacy Enforcement Agreement (CPEA) facilitates cooperation and information sharing among APEC countries’ privacy enforcement authorities.

2.2 MOTIVATING PRIVACY

When the UK government in 1994 tried to rally support for its plans to significantly expand CCTV surveillance in Britain, it coined the slogan “If you’ve got nothing to hide, you’ve got nothing to fear” [Rosen, 2001]—a slogan that has been a staple in counter-privacy arguments ever since. What is so bad about having less privacy in today’s day and age, unless you are a terrorist, criminal, or scoundrel? Surely, people in Britain, with its over 6 million surveillance cameras (one for every 11 people) [Barrett, 2013], seem to be no worse off than, say, their fellow European neighbors in France or Germany, which both have nowhere near that many cameras.16 Would those who maintain an active Facebook page say they are worse off than those who only use email, text messages, or, say, written letters to communicate with friends and family? Why not let Google monitor all Web searches and emails sent and received, so that it can provide better search results, a cleaner inbox, and more relevant targeted advertising, rather than the random spam that usually makes it into one’s inbox? Who would not want police and other national security institutions to have access to our call records and search history in order to prevent terrorists and child molesters from planning and conducting their heinous crimes?

One might assume that making the case for privacy should be easy. Privacy is one of the leading consumer concerns on the Internet, dominating survey responses for more than 20 years now (e.g., Westin’s privacy surveys between 1990 and 2003 [Kumaraguru and Cranor, 2005], the 1999 IBM Multi-National Consumer Privacy Survey [IBM Global Services, 1999], or recent consumer reports from KPMG [2016] or International Data Corporation (IDC) [2017]). Everybody seems to want privacy. However, when separating preferences from actual behavior [Berendt et al., 2005, Spiekermann et al., 2001], most people in their everyday life seem to care much less about privacy than surveys indicate—something often called the “privacy paradox” [Norberg et al., 2007]. Facebook, with its long history of privacy-related issues [Parakilas, 2017], is still growing significantly every year, boasting over 2.23 billion “active monthly users”17 at the end of June 2018 [Facebook, Inc., 2018]. Back in 2013, with only about half that many active users (1.2 billion) [Facebook, Inc., 2018], Facebook users already shared almost 3.3 million pieces of content (images, posts, links) per minute [Facebook, Inc., 2013]. Within the same 60 seconds, Google serves an estimated 3.6 million search queries [James, 2017], each feeding into the profile of one of its more than 1 billion unique users18 in order to better integrate targeted advertising into their search results, Gmail inboxes, and YouTube videos. Of course, more privacy-friendly alternatives exist, and they do see increasing use. For example, the anonymous search engine DuckDuckGo saw its traffic double within days19 after Edward Snowden revealed the extent to which many Internet companies, including Google, were sharing data with the U.S. government. However, DuckDuckGo’s share of overall searches remains minuscule. Even though its share has been on the rise ever since the Snowden leaks of June 2013, its current20 11 million queries a day (roughly seven times its pre-Snowden traffic) are barely more than 0.3%21 of Google’s query traffic.

Why, then, are more people not using a privacy-friendly search engine like DuckDuckGo? Does this mean people do not care about privacy? Several reasons come to mind. First, not many people may have heard about DuckDuckGo. Second, “traditional” search engines might simply provide superior value over their privacy-friendly competitors. Or maybe people simply think that they do. Given that the apparent cost of the services is the same (no direct charge to the consumer), the fact that one offers more relevant results than the other may be enough to make people not want to switch. Third, and maybe most important: indirect costs like a loss of privacy are notoriously hard to assess [Solove, 2013]. What could possibly happen if Yahoo, Microsoft, or Google knows what one is searching for? What is so bad about posting holiday pictures on Facebook or Instagram? Why would chatting through Signal22 be any better than through WhatsApp?23 Consider the following cases.

• In 2009, U.S. Army veteran turned stand-up comedian Joe Lipari had a bad customer experience in his local Apple store [Glass, 2010]. Maybe unwisely, Joe went home and took out his anger via a Facebook posting that quoted a line from the movie he had started watching—Fight Club (based on the 1996 book by Palahniuk [1996]): “And this button-down, Oxford-cloth psycho might just snap, and then stalk from office to office with an Armalite AR-10 carbine gas-powered semi-automatic weapon, pumping round after round into colleagues and co-workers.” Lipari posted the slightly edited variant: “Joe Lipari might walk into an Apple store on Fifth Avenue with an Armalite AR-10 carbine gas-powered semi-automatic weapon and pump round after round into one of those smug, fruity little concierges.” An hour later, a full SWAT team arrived, apparently alerted by one of Joe’s Facebook contacts who had seen the posting and contacted homeland security. After a thorough search of his place and a three-hour interrogation downtown, Joe assumed that his explanation of this being simply a bad movie quote had clarified the misunderstanding. Yet four months later, Joe Lipari was charged with two “Class D” felonies—“PL490.20: Making a terroristic threat” [The State of New York, 2018b] and “PL240.60: Falsely reporting an incident in the first degree” [The State of New York, 2018a]—each carrying a prison term of 5–10 years. Two years and more than a dozen court appearances later, the case was finally dismissed in February 2011.

• In 2012, Leigh Van Bryan and Emily Bunting, two UK residents who had just arrived in Los Angeles for a long-planned holiday, were detained at Customs and locked up for 12 hours in a cell for interrogation [Compton, 2012]. Van Bryan’s name had been placed on a “One Day Lookout” list maintained by Homeland Security for “intending to come to the US to commit a crime,” while Bunting was charged with traveling with him. The source of this was two tweets Van Bryan had posted several weeks before his departure. The first read “3 weeks today, we’re totally in LA pissing people off on Hollywood Blvd and diggin’ Marilyn Monroe up!”—according to Van Bryan a quote from his favorite TV show “Family Guy.” The second tweet read “@MelissaxWalton free this week, for quick gossip/prep before I go and destroy America?” Despite his explanation that “destroying” was British slang for “party,” both were denied entry and put on the next plane back to the UK. Both were also told that they had been removed from the customary Visa Waiver program that is in place for most European passport holders and instead had to apply for visas from the U.S. Embassy in London before ever flying to the U.S. again [Hartley-Parkinson, 2012].

In both cases, posts on social media that were not necessarily secret, yet implicitly assumed to be for friends only, ended up being picked up by law enforcement, who did not appreciate the “playful” nature intended by the poster. Did Joe Lipari or Leigh Van Bryan do “something wrong” and hence have “something to hide”? If not, why should they have had anything to fear?

“Knowledge is power” goes the old adage, and as these two stories illustrate, one aspect of privacy certainly concerns controlling the spread of information. Those who lose privacy will also lose control over some parts of their lives. In some cases, this is intended. For example, democracies usually require those in power to give up some of their privacy for the purpose of being held accountable, i.e., to control this power. Citizens routinely give up some of their privacy in exchange for law enforcement to keep crime at bay. In a relationship, we usually show our trust in one another by opening up and sharing intimate details, hence giving the other person power over us (as repeatedly witnessed when things turn sour and former friends or lovers start disclosing these details in order to embarrass and humiliate the other).

In an ideal world, we are in control of deciding who knows what about us. Obviously, this control will have limits: your parents ask you to call in regularly to say where you are; your boss might require you to “punch in/out” when you arrive at work and leave, respectively; the tax office may request a full disclosure on your bank accounts in order to compute your taxes; and police can search your house should they have a warrant24 from a judge.

In the following two sections we look at both sides of the coin: Why do we want privacy, and why might one not want it (in certain circumstances)? Some of the motivations for privacy can be distilled from the privacy laws we have seen in the previous section: what do these laws and regulations attempt to provide citizens with? What are the aims of these laws? By spelling out possible reasons for legal protection, we can try to better frame both the values and the limits of privacy. However, many critics argue that too much privacy will make the world a more dangerous place. Privacy should (and does) have limits, and we will thus also look at the arguments of those who think we should have less rather than more privacy.

2.2.1 PRIVACY BENEFITS

While the fact that so many countries around the world have privacy legislation in place (over 120 countries in 2017 [Greenleaf, 2017]) clearly marks privacy as an important “thing” to protect, it is far from clear to what extent society should support individuals in keeping their privacy. Statements by Scott McNealy, president and CEO of Sun Microsystems,25 pointing out that “you have no privacy anyway, get over it” [Sprenger, 1999], as well as Peter Cochrane’s editorial in Sovereign Magazine (when he was head of BT26 Research) claiming that “all this secrecy is making life harder, more expensive, dangerous and less serendipitous” [Cochrane, 2000], are representative of a large part of society that questions the point of “too much” secrecy (see our discussion in Section 2.2.2 below).

In his book Code and other Laws of Cyberspace [Lessig, 1999], Harvard law professor Lawrence Lessig tries to discern possible motivations for having privacy27 in today’s laws and social norms. He lists four major driving factors for privacy.

Privacy as empowerment: Seeing privacy mainly as informational privacy, its aim is to give people the power to control the dissemination and spread of information about themselves. A legal discussion surrounding this motivation revolves around the question of whether personal information should be seen as private property [Samuelson, 2000], which would entail the right to sell all or parts of it as the owner sees fit, or as a “moral right,” which would entitle the owner to assert a certain level of control over their data even after they sold it.

Privacy as utility: From the data subject’s point of view, privacy can be seen as a utility, providing more or less effective protection from nuisances such as unsolicited calls or emails, as well as from more serious harms, such as financial or even physical harm. This view probably best follows Warren and Brandeis’ “right to be let alone” definition of privacy, where the focus is on reducing the amount of disturbance for the individual, but it can also be found, e.g., in U.S. tort law (see Section 2.1.1) or anti-discrimination laws.

Privacy as dignity: Dignity can be described as “the presence of poise and self-respect in one’s deportment to a degree that inspires respect” [Pickett, 2002]. This not only entails being free from unsubstantiated suspicions (for example, when being the target of a wiretap, where the intrusion is usually not directly perceived as a disturbance), but also concerns the balance of information available between two people: analogous to having a conversation with a fully dressed person while being naked oneself, any relationship with a considerable information imbalance will make it much more difficult for the party with less information about the other to keep their poise.

Privacy as constraint of power: Privacy laws, and moral norms to the same effect, can also be seen as a tool for keeping checks and balances on a ruling elite’s powers. When the gathering of a certain type of information is limited, laws or moral norms pertaining to that type of information cannot be effectively enforced. As Stuntz [1995] puts it: “Just as a law banning the use of contraceptives would tend to encourage bedroom searches, so also would a ban on bedroom searches tend to discourage laws prohibiting contraceptives” (as cited in Lessig [1999]).

Depending upon the respective driving factor, an individual might be more or less willing to give up part of their privacy in exchange for a more secure life, a better job, or a cheaper product. The ability of privacy laws and regulations to influence this interplay between government and citizen, between employer and employee, and between manufacturer or service provider and customer, creates a social tension that requires a careful analysis of the underlying motivations in order to balance the protection of the individual and the public good. An example of how a particular motivation can drive public policy is anti-spam legislation enacted both in Europe [European Parliament and Council, 2002] and in the U.S. [Ulbrich, 2003], which provides privacy-as-a-utility by restricting the unsolicited sending of e-mail. In a similar manner, in March 2004 the Bundesverfassungsgericht (the German Federal Constitutional Court) ruled that a 1998 amendment to Germany’s Basic Law enlarging law enforcement’s access to wire-tapping (“Der Grosse Lauschangriff”) was unconstitutional, since it violated human dignity [Der Spiegel, 2004].

This realization that privacy is more than simply providing secrecy for criminals is fundamental to understanding its importance in society. Clarke [2006] lists five broad driving principles for privacy.

Philosophical: A humanistic tradition that values fundamental human rights also recognizes the need to protect an individual’s dignity and autonomy. Protecting a person’s privacy is inherent in a view that values an individual for their own sake.

Psychological: Westin [1967] points out the emotional release function of privacy—moments “off stage” where individuals can be themselves, finding relief from the various roles they play on any given day: “stern father, loving husband, car-pool comedian, skilled lathe operator, union steward, water-cooler flirt, and American Legion committee chairman.”

Sociological: Societies do not flourish when they are tightly controlled, as the example of East Germany has shown. People need room for “minor non-compliance with social norms” and to “give vent to their anger at ‘the system,’ ‘city hall,’ ‘the boss’:”

The firm expectation of having privacy for permissible deviations is a distinguishing characteristic of life in a free society [Westin, 1967].

Economic: Clarke notes that “all innovators are, by definition, ‘deviant’ from the norms of the time,” hence having private space to experiment is essential for a competitive economy. Similarly, an individual’s fear of surveillance—from both private companies and the state—will dampen their enthusiasm for participating in the online economy.

Political: The sociological need for privacy directly translates into political effects if people are not free to think and discuss outside current norms. Having people actively participate in political debate is a cornerstone of a democratic society—a lack of privacy would quickly produce a “chilling effect” that directly undermines this democratic process.

As Clarke [2006] points out, many of today’s data protection laws, in particular those drafted around the Fair Information Principles, are far from addressing all of those benefits, and instead focus on ensuring that the collected data is correct—not so much to protect the individual as to ensure maximum economic benefit. The idea that privacy is more of an individual right, a right that people should be able to exercise without unnecessary burden, rather than simply an economic necessity (e.g., to make sure collected data is correct), is a relatively recent development. Representative of this paradigm shift was the so-called “census verdict” of the German Federal Constitutional Court (Bundesverfassungsgericht) in 1983, which extended the existing right to privacy of the individual (Persönlichkeitsrecht) with the right of self-determination over personal data (informationelle Selbstbestimmung) [Mayer-Schönberger, 1998].28 The judgment reads as follows.29

Those who cannot with sufficient surety be aware of the personal information about themselves that is known in certain parts of their social environment … can be seriously inhibited in their freedom of self-determined planning and deciding. A society in which the individual citizen would not be able to find out who knows what about them, and when, would not be reconcilable with the right of self-determination over personal data. Those who are unsure whether differing attitudes and actions are ubiquitously noted and permanently stored, processed, or distributed will try not to stand out with their behavior. …This would not only limit the chances for individual development, but also affect public welfare, since self-determination is an essential requirement for a democratic society that is built on the participatory powers of its citizens [Reissenberger, 2004].

The then-president of the Federal Constitutional Court, Ernst Benda, summarized his private thoughts regarding the decision as follows.30
