Designing Against Discrimination in Online Markets

This article provides a conceptual framework for understanding how platforms’ design and policy choices create opportunities for users’ biases to affect how they treat one another. Through an empirical review of design-oriented interventions used by a range of platforms, and the synthesis of this review into a taxonomy of thematic categories, the authors hope to prompt greater reflection on the stakes of decisions platforms already make, guide platforms’ future decisions, and provide a basis for empirical work measuring the impacts of design decisions on discriminatory outcomes. Part I describes the empirical review of platforms and presents the strategies used to develop the taxonomy. Part II details the ten thematic categories that emerged from this review and discusses how platforms’ design interventions might mediate or exacerbate users’ biased behaviors. Part III describes the ethical dimensions of platforms’ design choices.

Abstract

Platforms that connect users to one another have flourished online in domains as diverse as transportation, employment, dating, and housing. When users interact on these platforms, their behavior may be influenced by preexisting biases, including tendencies to discriminate along the lines of race, gender, and other protected characteristics. In aggregate, such user behavior may result in systematic inequities in the treatment of different groups. While there is uncertainty about whether platforms bear legal liability for the discriminatory conduct of their users, platforms necessarily exercise a great deal of control over how users’ encounters are structured—including who is matched with whom for various forms of exchange, what information users have about one another during their interactions, and how indicators of reliability and reputation are made salient, among many other features. Platforms cannot divest themselves of this power; even choices made without explicit regard for discrimination can affect how vulnerable users are to bias. This Article analyzes ten categories of design and policy choices through which platforms may make themselves more or less conducive to discrimination by users. In so doing, it offers a comprehensive account of the complex ways platforms’ design choices might perpetuate, exacerbate, or alleviate discrimination in the contemporary economy.


Health Information Equity

This paper posits that the ability to collect and aggregate data about patients — including physical conditions, genetic information, treatments, responses, and outcomes — is changing medical research today. The author states that the collection of such information raises serious ethical concerns because it imposes special burdens on specific patients whose records form the data pool for queries and analyses. This article argues that laws should distribute information burdens across society in a just manner. Part I lays out how new laws and policies are facilitating the disproportionate collection and public use of data. Part II details the kinds of burdens such practices can impose. Part III provides an ethical framework to assess these inequities. Part IV then shows what regulatory and statutory levers can be used to render secondary research more equitable. Finally, the author outlines a framework to reorganize privacy risk in ways that are ethical and just. Where bioethics has sought only to incorporate autonomy concerns in health data collection, this framework provides a guide for moving beyond autonomy to equity concerns.

Abstract

In the last few years, numerous Americans’ health information has been collected and used for follow-on, secondary research. This research studies correlations between medical conditions, genetic or behavioral profiles, and treatments, to customize medical care to specific individuals. Recent federal legislation and regulations make it easier to collect and use the data of the low-income, unwell, and elderly for this purpose. This would impose disproportionate security and autonomy burdens on these individuals. Those who are well-off and pay out of pocket could effectively exempt their data from the publicly available information pot. This presents a problem which modern research ethics is not well equipped to address. Where it considers equity at all, it emphasizes underinclusion and the disproportionate distribution of research benefits, rather than overinclusion and disproportionate distribution of burdens.

I rely on basic intuitions of reciprocity and fair play as well as broader accounts of social and political equity to show that equity in burden distribution is a key aspect of the ethics of secondary research. To satisfy its demands, we can use three sets of regulatory and policy levers. First, information collection for public research should expand beyond groups having the lowest welfare. Next, data analyses and queries should draw on data pools more equitably. Finally, we must create an entity to coordinate these solutions using existing statutory authority if possible. Considering health information collection at a systematic level—rather than that of individual clinical encounters—gives us insight into the broader role that health information plays in forming personhood, citizenship, and community.


Equality of Opportunity in Supervised Learning

The authors of this paper use a case study of FICO credit scores to illustrate their oblivious notion of discrimination, which depends only on the joint statistics of the predictor, the target, and the protected attribute, not on the interpretation of individual features. The study examines the inherent limits of defining and identifying biases on the basis of such oblivious measures and proposes a criterion for discrimination against a specified sensitive attribute. The authors argue that any learned predictor can be optimally adjusted to remove discrimination, according to their definition.

Abstract: We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
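As a concrete illustration of the paper’s post-processing idea, the sketch below equalizes true positive rates across groups by choosing a per-group score threshold. The function names are hypothetical and the quantile-based threshold search is a simplification: the paper derives an optimal (possibly randomized) adjustment of the predictor, whereas this sketch only aims each group’s true positive rate at a common target.

```python
import numpy as np

def equal_opportunity_thresholds(scores, labels, groups, target_tpr=0.8):
    """For each group, pick a score threshold so that the group's true
    positive rate (share of label==1 cases predicted positive) is close
    to target_tpr. A simplified post-processing sketch, not the paper's
    exact optimal derived predictor."""
    thresholds = {}
    for g in np.unique(groups):
        pos_mask = (groups == g) & (labels == 1)
        pos_scores = scores[pos_mask]
        # Thresholding at the (1 - target_tpr) quantile of the positives'
        # scores leaves roughly target_tpr of them above the threshold.
        thresholds[g] = np.quantile(pos_scores, 1 - target_tpr)
    return thresholds

def predict(scores, groups, thresholds):
    """Apply the group-specific thresholds to produce binary decisions."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
```

In this simplified form, the decision maker bears the cost of equalization by adjusting thresholds rather than leaving disadvantaged groups with a lower true positive rate, which mirrors the incentive argument in the abstract.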


Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots

This paper posits that it is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson already uses its artificial intelligence to suggest medical diagnoses and treatments in the healthcare industry and assists the finance industry in improving investment decisions. The author explores the consequences of predictive and content analytics and the future of cognitive computing, including the use of “robots” such as an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, with its capabilities and lack of feeling and biases, than of a human law enforcement officer? This article attempts to explore the ramifications of using such computers and robots in the future.

Abstract: Law enforcement currently uses cognitive computers to conduct predictive and content analytics and manage information contained in large police data files. These big data analytics and insight capabilities are more effective than using traditional investigative tools and save law enforcement time and a significant amount of financial and personnel resources. It is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. IBM and similar companies already offer predictive analytics and cognitive computing programs to law enforcement for real-time intelligence and investigative purposes. This article will explore the consequences of predictive and content analytics and the future of cognitive computing, such as utilizing “robots” such as an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and lack of feeling and biases, compared to a human law enforcement officer? Assuming someday in the future we might be able to solve the physical limitations of a robot, would a “robotic” officer be preferable to a human one? What sort of limitations would we place on such technology? This article attempts to explore the ramifications of using such computers/robots in the future. Autonomous robots with artificial intelligence and the widespread use of predictive analytics are the future tools of law enforcement in a digital age, and we must come up with solutions as to how to handle the appropriate use of these tools.


Machine Learning: The Power and Promise of Computers That Learn by Example

This report by The Royal Society provides an overview of machine learning, its potential, and its impact on society. Through this initiative, the Society sought to investigate the potential of machine learning over the next 5-10 years and the barriers to realizing that potential. In doing so, the project engaged with key audiences in policy, industry, academia, and the public to raise awareness of machine learning, understand the views held by the public, contribute to the public debate about machine learning, and identify the key social, ethical, scientific, and technical issues it presents. Chapters five and six discuss the societal impact of machine learning, looking more closely at the privacy challenges these technologies create, both ethically and technologically.


Law and Regulation of Artificial Intelligence and Robots: Conceptual Framework and Normative Implications

In light of the many challenges that affect attempts to devise law and regulation in a context of technological incipiency, this paper seeks to offer a methodology geared to the specific fields of AIs and robots. It addresses the following normative question: should a social planner adopt specific rules and institutions for AIs and robots, or should the resolution of issues be left to Hume’s three “fundamental laws of nature”, namely ordinary rules on property and liability, contract laws, and the court system? The paper’s four sections review the main regulatory approaches proposed in the existing AI and robotics literature; discuss identifiable regulatory trade-offs, that is, the threats and opportunities created by the introduction of regulation in relation to AIs and robotic applications; examine liability as a case study; and present a possible methodology for the law and regulation of AIs and robots.

Abstract: Law and regulation of Artificial Intelligence (“AI”) and robots is emerging, fuelled by the introduction of industrial and commercial applications in society. A common thread among many regulatory initiatives is that they occur without a clear or explicit methodological framework. In light of the many challenges that affect attempts to devise law and regulation in a context of technological incipiency, this paper seeks to offer a methodology geared to the specific fields of AIs and robots. At bottom, the paper addresses the following normative question: should a social planner adopt specific rules and institutions for AIs and robots or should the resolution of issues be left to Hume’s three “fundamental laws of nature”, namely ordinary rules on property and liability, contract laws and the court system? To explore that question, the analysis is conducted under a public interest framework.

Section 1 reviews the main regulatory approaches proposed in the existing AI and robotic literature, and stresses their advantages and disadvantages. Section 2 discusses identifiable regulatory trade-offs, that is the threats and opportunities created by the introduction of regulation in relation to AIs and robotic applications. Section 3 focuses on the specific area of liability as a case-study. Finally, Section 4 proposes a possible methodology for the law and regulation of AIs and robots. In conclusion, the paper proposes to index the regulatory response upon the nature of the externality – positive or negative – created by an AI application, and to distinguish between discrete, systemic and existential externalities.


Ethically Aligned Design

The IEEE Global Initiative provides the opportunity to bring together multiple voices in the Artificial Intelligence and Autonomous Systems communities to identify and find consensus on timely issues. This document’s purpose is to advance a public discussion of how these intelligent and autonomous technologies can be aligned to moral values and ethical principles that prioritize human well-being. It includes eight sections, each addressing a specific topic related to AI/AS that has been discussed at length by a specific committee of The IEEE Global Initiative. Issues and candidate recommendations pertaining to these topics are listed in each committee section. The eight sections are: General Principles; Embedding Values in Autonomous Intelligence Systems; Methodologies to Guide Ethical Research and Design; Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI); Personal Data and Individual Access Control; Reframing Autonomous Weapons Systems; Economics/Humanitarian Issues; and Law.


Averting Robot Eyes

The authors argue that home robots will inevitably cause privacy harms while acknowledging that robots can provide beneficial services — as long as consumers trust them. This paper evaluates potential technological solutions that could help home robots keep their promises, avert their “eyes”, and otherwise mitigate privacy harms. The goal of the study is to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. Five principles for home robots and privacy design are proposed: data minimization, purpose specification, use limitation, honest anthropomorphism, and dynamic feedback and participation. Current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie, is also discussed.

Abstract: Home robots will cause privacy harms. At the same time, they can provide beneficial services — as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms.

We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.


Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI

This paper points out that while computer science has long performed large-scale experimentation on users, novel autonomous systems for experimentation, driven by advances in artificial intelligence, are raising complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. The authors identify several questions about the social and ethical implications of autonomous experimentation systems, concerning the design of such systems, their effects on users, and their resistance to some common mitigations.

Abstract: In the field of computer science, large-scale experimentation on users is not new. However, driven by advances in artificial intelligence, novel autonomous systems for experimentation are emerging that raise complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. We see these normative questions as urgent because they pertain to critical infrastructure upon which large populations depend, such as transportation and healthcare. Although experimentation on widely used online platforms like Facebook has stoked controversy in recent years, the unique risks posed by autonomous experimentation have not received sufficient attention, even though such techniques are being trialled on a massive scale. In this paper, we identify several questions about the social and ethical implications of autonomous experimentation systems. These questions concern the design of such systems, their effects on users, and their resistance to some common mitigations.


Big Data, Artificial Intelligence, Machine Learning and Data Protection

This discussion paper looks at the implications of big data, artificial intelligence (AI) and machine learning for data protection, and explains the ICO’s views on these. It defines big data, AI and machine learning, and identifies the particular characteristics that differentiate them from more traditional forms of data processing. Acknowledging the benefits that can flow from big data analytics, the paper analyzes the main implications for data protection and examines some of the tools and approaches that can help organizations ensure that their big data processing complies with data protection requirements. It also addresses the argument that data protection, as enacted in current legislation, will not work for big data analytics, and the growing role of accountability alongside the more traditional principle of transparency. The main conclusions are that, while data protection can be challenging in a big data context, the benefits need not be achieved at the expense of data privacy rights, and that meeting data protection requirements will benefit both organizations and individuals. The paper closes with six key recommendations for organizations using big data analytics.