Title: Fairness in Algorithmic Policing
Predictive policing, the practice of using algorithmic systems to forecast crime, is heralded by police departments as the new frontier of crime analysis. At the same time, it is opposed by civil rights groups, academics, and media outlets for being ‘biased’ and therefore discriminatory against communities of color. This paper argues that the prevailing focus on racial bias has overshadowed two normative factors that are essential to a full assessment of the moral permissibility of predictive policing: fairness in the social distribution of the benefits and burdens of policing, and the distinctive role of consent in determining fair distribution. When these normative factors are given their due attention, several requirements emerge for the fair implementation of predictive policing. Among these requirements are that police departments inform and solicit buy-in from affected communities about strategic decision-making, and that departments favor non-enforcement-oriented interventions.
Award ID(s):
1917712
NSF-PAR ID:
10320730
Journal Name:
Journal of the American Philosophical Association
ISSN:
2053-4477
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Racial bias in predictive policing algorithms has been the focus of a number of recent news articles, statements of concern by several national organizations (e.g., the ACLU and NAACP), and simulation-based research. There is reasonable concern that predictive algorithms encourage directed police patrols to target minority communities, with discriminatory consequences for minority individuals. However, to date there have been no empirical studies on the bias of predictive algorithms used for police patrol. Here, we test for such biases using arrest data from the Los Angeles predictive policing experiments. We find that there were no significant differences in the proportion of arrests by racial-ethnic group between control and treatment conditions. We find that the total numbers of arrests at the division level declined or remained unchanged during predictive policing deployments. Arrests were numerically higher at the algorithmically predicted locations; when adjusted for the higher overall crime rate at those locations, however, arrests were lower or unchanged.
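To give a sense of the comparison described in the abstract above, here is a minimal sketch of a proportions test of that kind in Python. The counts and group labels are invented for illustration; the study's actual data and analysis differ.

```python
# Hypothetical arrest counts by racial-ethnic group under control vs.
# treatment (algorithm-directed) patrol conditions. All numbers invented.
from scipy.stats import chi2_contingency

arrest_counts = [
    [120, 340, 95],   # control days: group 1, group 2, group 3
    [115, 352, 90],   # treatment days: group 1, group 2, group 3
]

chi2, p, dof, expected = chi2_contingency(arrest_counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# A large p-value would be consistent with the reported finding of no
# significant difference in arrest proportions between conditions.
```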
  2. Predictive policing systems are increasingly used to determine how to allocate police across a city in order to best prevent crime. Discovered crime data (e.g., arrest counts) are used to help update the model, and the process is repeated. Such systems have been empirically shown to be susceptible to runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the true crime rate. In response, we develop a mathematical model of predictive policing that proves why this feedback loop occurs, show empirically that this model exhibits such problems, and demonstrate how to change the inputs to a predictive policing system (in a black-box manner) so the runaway feedback loop does not occur, allowing the true crime rate to be learned. Our results are quantitative: we can establish a link (in our model) between the degree to which runaway feedback causes problems and the disparity in crime rates between areas. Moreover, we can also demonstrate the way in which reported incidents of crime (those reported by residents) and discovered incidents of crime (i.e., those directly observed by police officers dispatched as a result of the predictive policing algorithm) interact: in brief, while reported incidents can attenuate the degree of runaway feedback, they cannot entirely remove it without the interventions we suggest.
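To make the feedback loop described above concrete, here is a toy urn-style simulation. The two-region setup, the proportional patrol rule, and the acceptance-probability correction are illustrative assumptions in the spirit of the abstract's black-box intervention, not the paper's exact model or remedy.

```python
import random

random.seed(1)
TRUE_RATES = {"A": 0.3, "B": 0.2}  # hypothetical true daily incident rates

def simulate(days, corrected=False):
    counts = {"A": 1.0, "B": 1.0}  # "urn" of discovered incidents per region
    for _ in range(days):
        total = sum(counts.values())
        probs = {r: c / total for r, c in counts.items()}
        # Patrol a region chosen in proportion to past discovered crime.
        region = random.choices(list(probs), weights=list(probs.values()))[0]
        if random.random() < TRUE_RATES[region]:  # an incident is discovered
            if corrected:
                # Hypothetical importance-style correction: keep a discovery
                # with probability inversely proportional to how often the
                # region is already patrolled, so heavily patrolled regions
                # do not dominate the data simply by being patrolled.
                if random.random() > min(probs.values()) / probs[region]:
                    continue
            counts[region] += 1
    total = sum(counts.values())
    return {r: round(c / total, 2) for r, c in counts.items()}

print("naive:    ", simulate(20000))                  # tends to lock onto one region
print("corrected:", simulate(20000, corrected=True))  # roughly the true 0.6/0.4 ratio
```

In the naive run, whichever region gets an early lead in discovered incidents attracts more patrols, which produce more discoveries there, regardless of the underlying rates; the correction makes the expected growth of each region's count proportional to its true rate.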
  3. Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended). Often in such problems fairness is also a concern. One natural notion of fairness, based on general principles of equality of opportunity, asks that conditional on an individual being a candidate for the resource in question, the probability of actually receiving it is approximately independent of the individual’s group. For example, in lending this would mean that equally creditworthy individuals in different racial groups have roughly equal chances of receiving a loan. In policing it would mean that two individuals committing the same crime in different districts would have roughly equal chances of being arrested. In this paper, we formalize this general notion of fairness for allocation problems and investigate its algorithmic consequences. Our main technical results include an efficient learning algorithm that converges to an optimal fair allocation even when the allocator does not know the frequency of candidates (i.e. creditworthy individuals or criminals) in each group. This algorithm operates in a censored feedback model in which only the number of candidates who received the resource in a given allocation can be observed, rather than the true number of candidates in each group. This models the fact that we do not learn the creditworthiness of individuals we do not give loans to and do not learn about crimes committed if the police presence in a district is low.
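The censored-feedback model is the crux of the abstract above: the allocator observes only how many candidates received the resource, never the true number of candidates in each group. Below is a minimal sketch of that censoring, paired with a deliberately simplified rebalancing heuristic; the district names, rates, and update rule are assumptions for illustration, not the paper's learning algorithm.

```python
import random

random.seed(0)
TRUE_MEAN_CANDIDATES = {"district_1": 8, "district_2": 3}  # unknown to allocator
BUDGET = 10

def observe(district, units):
    # Draw this round's candidates (e.g., offenses), then censor: only
    # candidates who actually received a unit of the resource are observed.
    candidates = sum(random.random() < 0.5
                     for _ in range(2 * TRUE_MEAN_CANDIDATES[district]))
    return min(candidates, units)

allocation = {"district_1": BUDGET // 2, "district_2": BUDGET - BUDGET // 2}
for _ in range(200):
    served = {d: observe(d, u) for d, u in allocation.items()}
    # A district whose observed count hits its cap may have unseen demand;
    # move one unit there from a district with visible slack.
    saturated = [d for d in allocation if served[d] == allocation[d]]
    slack = [d for d in allocation if served[d] < allocation[d]]
    if saturated and slack:
        allocation[random.choice(slack)] -= 1
        allocation[random.choice(saturated)] += 1

print(allocation)  # units drift toward the district with more candidates
```

The point of the sketch is the information constraint: because observed counts are capped at the allocation, a low allocation can look identical to low demand, which is exactly why learning the optimal fair allocation under censored feedback is nontrivial.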
  4. This paper synthesizes scholarship from several academic disciplines to identify and analyze five major ethical challenges facing data-driven policing. Because the term “data-driven policing” encompasses a broad swath of technologies, we first outline several data-driven policing initiatives currently in use in the United States. We then lay out the five ethical challenges. Certain of these challenges have received considerable attention already, while others have been largely overlooked. In many cases, the challenges have been articulated in the context of related discussions, but their distinctively ethical dimensions have not been explored in much detail. Our goal here is to articulate and clarify these ethical challenges, while also highlighting areas where these issues intersect and overlap. Ultimately, responsible data-driven policing requires collaboration between communities, academics, technology developers, police departments, and policy makers to confront and address these challenges. And as we will see, it may also require critically reexamining the role and value of police in society.