GDPR and the Internet of Things: Guidelines to Protect Users’ Identity and Privacy

This paper presents a three-step transparency model based on known privacy risks of the IoT, the GDPR’s governing principles, and weaknesses in its relevant provisions. To help IoT developers and data controllers, eleven ethical guidelines are proposed on how information about the functionality of the IoT should be shared with users beyond the GDPR’s legally binding requirements. Two case studies demonstrate how the guidelines apply in practice: IoT in public spaces and connected cities, and connected cars.

Abstract

The Internet of Things (IoT) requires pervasive collection and linkage of user data to provide personalised experiences based on potentially invasive inferences. Consistent identification of users and devices is necessary for this functionality, which poses risks to user privacy. The forthcoming General Data Protection Regulation (GDPR) contains numerous provisions relevant to these risks, which may nonetheless be insufficient to ensure a fair balance between users’ and developers’ interests. A three-step transparency model is described based on known privacy risks of the IoT, the GDPR’s governing principles, and weaknesses in its relevant provisions. Eleven ethical guidelines are proposed for IoT developers and data controllers on how information about the functionality of the IoT should be shared with users above the GDPR’s legally binding requirements. Two use cases demonstrate how the guidelines apply in practice: IoT in public spaces and connected cities, and connected cars.


Pre-Formulated Declarations of Data Subject Consent – Citizen-Consumer Empowerment and the Alignment of Data, Consumer and Competition Law Protections

This article examines how the data protection and privacy, consumer protection, and competition law policy agendas align, viewed through the lens of pre-formulated declarations of consent whereby data subjects agree to the processing of their personal data by accepting standard terms. The authors delineate the role of each area with reference to the GDPR and ePrivacy Directive, the Unfair Terms Directive, the Consumer Rights Directive and the proposed Digital Content Directive, as well as market dominance. The article also takes up the complicated issue of the economic value of personal data and seeks to interpret the effects of the cross-reference between the GDPR and the Unfair Terms Directive.

Abstract

The purpose of this article is to examine the alignment of the respective data protection and privacy, consumer protection and competition law policy agendas through the lens of pre-formulated declarations of consent whereby data subjects agree to the processing of their personal data by accepting standard terms. The article aims to delineate the role of each area with specific reference to the GDPR and ePrivacy Directive, the Unfair Terms Directive, the Consumer Rights Directive and the proposed Digital Content Directive in addition to market dominance. Competition law analysis is explored vis-à-vis whether it could offer indicators of when ‘a clear imbalance’ in controller-data subject relations may occur in the context of the requirement for consent to be ‘freely given’ as per its definition in the GDPR. This complements the data protection and consumer protection analysis which focuses on the specific reference to the Unfair Terms Directive in Recital 42 GDPR stating that pre-formulated declarations of consent should not contain unfair terms. Attention is paid to various interpretative difficulties stemming from this alignment between the two instruments. In essence, this debate circles the thorny issue of the economic value of personal data and thus tries to navigate the interpretation minefield left behind by the cross-reference.


Data Portability and Data Control: Lessons for an Emerging Concept in EU Law

This article observes that while Article 20 of the GDPR introduces the right to data portability, it is agnostic about how the data can be used once transferred. The authors note that, unlike other initiatives, the right to data portability does not create ownership-like control over the ported data. They discuss how the regulation will be limited where it clashes with the intellectual property rights of current data holders (e.g. copyright, trade secrets and sui generis database rights). The authors argue that as other regimes try to replicate the right to data portability, they should consider the resulting control, its breadth and its impact on incentives to innovate.

Abstract

The right to data portability (‘RtDP’) introduced by Article 20 of the General Data Protection Regulation (‘GDPR’) is a first regulatory attempt to establish a general-purpose control mechanism of horizontal application which mainly aims to facilitate reuse of personal data held by private companies. Article 20 GDPR is agnostic about the type of use that follows from the ported data and its further diffusion. This contrasts with forms of portability facilitated under competition law, which can only occur for purpose-specific goals with the aim of addressing anticompetitive behaviour. Unlike some upcoming initiatives, the RtDP still cannot be said to create ownership-like control over ported data. Even more, this regulatory innovation will be limited in its aspirations where intellectual property rights of current data holders, such as copyright, trade secrets and sui generis database rights, cause the two regimes to clash. In such cases, a reconciliation of the interests might confine particularly the follow-on use of ported data again to a specific set of socially justifiable purposes, possibly with schemes of fair remuneration. We argue that to the extent that other regimes will try to replicate the RtDP, they should closely consider the nature of the resulting control, its breadth and its impact on incentives to innovate. In any case, the creation of data portability regimes should not become an end in itself. With an increasing number of instruments, orchestrating the consistency of legal regimes within the Digital Single Market and their mutual interplay should become an equally important concern.


Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation

This paper analyzes the GDPR’s “right to explanation.” The authors draw a clear distinction between different levels of information and of consumers’ awareness, and they propose a new concept, algorithmic “legibility,” which combines transparency and comprehensibility.

The authors argue that a systemic interpretation is needed in this field. They show how a systemic interpretation of Articles 13–15 and 22 GDPR is necessary and recommend a “legibility test” that data controllers should perform in order to comply with the duty to provide meaningful information about the logic involved in automated decision-making.

Abstract

The aim of this contribution is to analyse the real borderlines of the ‘right to explanation’ in the GDPR and to discretely distinguish between different levels of information and of consumers’ awareness in the ‘black box’ society. In order to combine transparency and comprehensibility we propose the new concept of algorithm ‘legibility’.

We argue that a systemic interpretation is needed in this field, since it can be beneficial not only for individuals but also for businesses. This may be an opportunity for auditing algorithms and correcting unknown machine biases, thus similarly enhancing the quality of decision-making outputs.

Accordingly, we show how a systemic interpretation of Articles 13–15 and 22 GDPR is necessary, considering in particular that: the threshold of minimum human intervention required so that the decision-making is ‘solely’ automated (Article 22(1)) can also include nominal human intervention; the envisaged ‘significant effects’ on individuals (Article 22(1)) can encompass as well marketing manipulation, price discrimination, etc; ‘meaningful information’ that should be provided to data subjects about the logic, significance and consequences of decision-making (Article 15(1)(h)) should be read as ‘legibility’ of ‘architecture’ and ‘implementation’ of algorithmic processing; trade secret protection might limit the right of access of data subjects, but there is a general legal favour for data protection rights that should reduce the impact of trade secrets protection.

In addition, we recommend a ‘legibility test’ that data controllers should perform in order to comply with the duty to provide meaningful information about the logic involved in automated decision-making.


Meaningful Information and the Right to Explanation

The authors contend that the discourse about the right to explanation has, thus far, gone in an unproductive direction: fierce disagreement over whether the GDPR’s provisions create a data subject’s ‘right to explanation’. This article attempts to reorient that debate by showing that the plain text of the GDPR supports such a right. The authors argue that the right to explanation should be interpreted functionally and flexibly and should, at a minimum, enable a data subject to exercise his or her rights under the GDPR and human rights law. To make their point, they offer a critique of the two most prominent papers in the debate.

Abstract

There is no single, neat statutory provision labelled the right to explanation in Europe’s new General Data Protection Regulation (GDPR). But nor is such a right illusory.

Responding to two prominent papers that, in turn, conjure and critique the right to explanation in the context of automated decision-making, we advocate a return to the text of the GDPR.

Articles 13–15 provide rights to meaningful information about the logic involved in automated decisions. This is a right to explanation, whether one uses the phrase or not.

The right to explanation should be interpreted functionally, flexibly, and should, at a minimum, enable a data subject to exercise his or her rights under the GDPR and human rights law.


The Importance of Privacy by Design and Data Protection Impact Assessments in Strengthening Protection of Children’s Personal Data Under the GDPR

Abstract

This paper explores to what extent the current illusion of autonomy and control by data subjects, including children and parents, based on consent can potentially be mitigated, or even reversed, by putting more emphasis on other tools of protection and empowerment in the GDPR and their opportunities for children. Suggestions are put forward as to how the adoption of such tools may enhance children’s rights and how they could be put into practice by DPAs and data controllers.


Artificial Intelligence Policy: A Primer and Roadmap

This paper provides a roadmap (not the road) to the major policy questions presented by AI today. The goal of the essay is to describe the challenge of AI in sufficient detail without prescribing the policy outcome. It discusses the contemporary policy environment around AI and the key challenges it presents, including: justice and equity; use of force; safety and certification; privacy and power; and taxation and displacement of labor. As it relates to privacy in particular, the author posits that the acceleration of artificial intelligence, which is intimately tied to the availability of data, will play a significant role in this evolving conversation in at least two ways: (1) the problem of pattern recognition and (2) the problem of data parity.

Abstract

Talk of artificial intelligence is everywhere. People marvel at the capacity of machines to translate any language and master any game. Others condemn the use of secret algorithms to sentence criminal defendants or recoil at the prospect of machines gunning for blue, pink, and white-collar jobs. Some worry aloud that artificial intelligence will be humankind’s “final invention.”

This essay, prepared in connection with UC Davis Law Review’s 50th anniversary symposium, explains why AI is suddenly on everyone’s mind and provides a roadmap to the major policy questions AI raises. The essay is designed to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration.

Topics covered include: Justice and equity; Use of force; Safety and certification; Privacy (including data parity); and Taxation and displacement of labor. In addition to these topics, the essay will touch briefly on a selection of broader systemic questions: Institutional configuration and expertise; Investment and procurement; Removing hurdles to accountability; and Correcting mental models of AI.


The Public Information Fallacy

The goal of this article is to highlight the many possible meanings of “public” and make the case for clarifying the concept in privacy law. The main thesis is that because there are so many different possible interpretations of “public information,” the concept cannot be used to justify data practices and surveillance without first articulating a more precise meaning that recognizes the values affected. The author believes the law of public information has failed to clarify whether the concept is a description, a designation, or just another way of saying something is “not private.” The article reviews the law and discourse of public information, surveys the law and literature to propose three different ways to conceptualize “public information,” and finally makes the case for clarity.

Abstract

The concept of privacy in “public” information or acts is a perennial topic for debate. It has given privacy law fits. People struggle to reconcile the notion of protecting information that has been made public with traditional accounts of privacy. As a result, successfully labeling information as public often results in a free pass for surveillance and personal data practices. It has also given birth to a significant and persistent misconception—that public information is an established and objective concept.

In this article, I argue that the “no privacy in public” justification is misguided because nobody even knows what “public” means. It has no set definition in law or policy. This means that appeals to the public nature of information and contexts in order to justify data and surveillance practices are often just guesswork. Is the criterion for determining publicness whether it was hypothetically accessible to anyone? Or is public information anything that’s controlled, designated, or released by state actors? Or maybe what’s public is simply everything that’s “not private?”

The main thesis of this article is that if the concept of “public” is going to shape people’s social and legal obligations, its meaning should not be assumed. Law and society must recognize that labeling something as public is both consequential and value-laden. To move forward, we should focus on the values we want to serve, the relationships and outcomes we want to foster, and the problems we want to avoid.


The Undue Influence of Surveillance Technology Companies on Policing

This essay identifies three recent examples in which surveillance technology companies have exercised undue influence over policing: stingray cellphone surveillance, body cameras, and big data programs. By “undue influence,” the author refers to the commercial self-interest of surveillance technology vendors overriding the principles of accountability and transparency that normally govern the police. The essay goes on to examine the harms that ensue when this influence goes unchecked, and suggests some means by which oversight can be imposed on these relationships.

Abstract

Conventional wisdom assumes that the police are in control of their investigative tools. But with surveillance technologies, this is not always the case. Increasingly, police departments are consumers of surveillance technologies that are created, sold, and controlled by private companies. These surveillance technology companies exercise an undue influence over the police today in ways that aren’t widely acknowledged, but that have enormous consequences for civil liberties and police oversight. Three seemingly unrelated examples — stingray cellphone surveillance, body cameras, and big data software — demonstrate varieties of this undue influence. The companies which provide these technologies act out of private self-interest, but their decisions have considerable public impact. The harms of this private influence include the distortion of Fourth Amendment law, the undermining of accountability by design, and the erosion of transparency norms. This Essay demonstrates the increasing degree to which surveillance technology vendors can guide, shape, and limit policing in ways that are not widely recognized. Any vision of increased police accountability today cannot be complete without consideration of the role surveillance technology companies play.


Transatlantic Data Privacy Law

In this paper, the authors state that because the EU sets strict limits on transfers of personal data to any non-EU country lacking sufficient privacy protections, bridging the transatlantic data divide is a matter of the greatest significance. On the horizon is a possible international policy solution built around “interoperable,” or shared, legal concepts, an approach promoted by President Barack Obama and the Federal Trade Commission (FTC). The extent of EU–U.S. data privacy interoperability, however, remains to be seen. In exploring this issue, the article analyzes the respective legal identities constructed around data privacy in the EU and the United States and identifies profound differences in the two systems’ images of the individual as bearer of legal interests.

Abstract

International flows of personal information are more significant than ever, but differences in transatlantic data privacy law imperil this data trade. The resulting policy debate has led the EU to set strict limits on transfers of personal data to any non-EU country—including the United States—that lacks sufficient privacy protections. Bridging the transatlantic data divide is therefore a matter of the greatest significance.

In exploring this issue, this Article analyzes the respective legal identities constructed around data privacy in the EU and the United States. It identifies profound differences in the two systems’ images of the individual as bearer of legal interests. The EU has created a privacy culture around “rights talk” that protects its “data subjects.” In the EU, moreover, rights talk forms a critical part of the postwar European project of creating the identity of a European citizen. In the United States, in contrast, the focus is on a “marketplace discourse” about personal information and the safeguarding of “privacy consumers.” In the United States, data privacy law focuses on protecting consumers in a data marketplace.

This Article uses its models of rights talk and marketplace discourse to analyze how the EU and United States protect their respective data subjects and privacy consumers. Although the differences are great, there is still a path forward. A new set of institutions and processes can play a central role in developing mutually acceptable standards of data privacy. The key documents in this regard are the General Data Protection Regulation, an EU-wide standard that becomes binding in 2018, and the Privacy Shield, an EU–U.S. treaty signed in 2016. These legal standards require regular interactions between the EU and United States and create numerous points for harmonization, coordination, and cooperation. The GDPR and Privacy Shield also establish new kinds of governmental networks to resolve conflicts. The future of international data privacy law rests on the development of new understandings of privacy within these innovative structures.