Issue 1: Algorithms: Privacy Risk and Accountability

May 2017

Notes from FPF

Across academic, policy, and industry circles, making progress on the cluster of issues related to algorithmic accountability has become a leading priority. The inaugural issue of the Future of Privacy Forum's Privacy Scholarship Reporter offers a clear and compelling look at some of the most worrisome problems and promising solutions.

Although not everyone is familiar with the specific concept of algorithmic accountability, it covers well-known topics, at least broadly speaking. We all know that debate exists over what individuals and organizations should do to be responsible for their actions and intentions. Some of the discussion focuses on moral accountability, and some of it deals with legal liability. Of course, some of the crucial conversations address both. After all, moral arguments can be levers for legal reform, especially when the rapid pace of technological development strains historically rooted legal reasoning that has fallen out of sync with disruptive times.

To think about algorithmic accountability is to consider situations where decision-making has been delegated to computers. Today, algorithms recommend all kinds of seemingly innocuous things, from what to watch to how to drive efficiently to a destination. But they also play potent roles in socially charged outcomes. For example, algorithms affect the prices different people pay when they shop. Algorithms influence when loans and insurance are withheld. Algorithms shape how people consume news and form political opinions. Algorithms play a role in determining who gets placed on the government's no-fly list, who is subject to heightened government surveillance, which prisoners are offered parole, how self-driving cars make life-or-death decisions, how facial recognition technology identifies criminals, and how online advertisers detect when consumers are feeling vulnerable. This is only a partial list of algorithmic powers, and the reach continues to grow as artificial intelligence matures and big data sets become increasingly inexpensive to procure, create, store, and subject to fast, fine-grained analysis.

When we closely examine the impact of algorithms in different contexts, it becomes clear that they are given varying levels of power and autonomy. Algorithms inform human judgment. Algorithms mediate human judgment. And algorithms displace human judgment. Algorithms can be advisors, information shapers, and executives. They can even be soothsayers that try to foretell what will happen in the future. Unsurprisingly, the breadth of algorithmic impact has engendered strong hopes and fears.

What you’ll find in this issue are smart takes on some of the fundamental questions of algorithmic fairness. How can transparency norms be properly applied to opaque algorithmic processes that can seem inscrutable due to computational complexity or intellectual property protections? How can transparency norms be prevented from creating new problems of privacy and unfairness? How can unfair algorithmic processes be identified and redressed? How can predictive analytics be used responsibly, given that they only make inferences about potentialities? How can implicit and explicit biases be kept from polluting computations that are represented as objective calculations? How can appropriate opt-in standards be created and enforced as algorithmic surveillance, sorting, and sentencing become ubiquitous? And how can algorithmic analysis of sensitive information enhance social welfare? Is important scholarship missing from our list? Send your comments or feedback to fpf@fpf.org. We look forward to hearing from you.

Evan Selinger, FPF Senior Fellow