

Search for: All records

Award ID contains: 1917712


  1. Abstract Recent studies have shown that recidivism scoring algorithms like COMPAS have significant racial bias: Black defendants are roughly twice as likely as white defendants to be mistakenly classified as medium- or high-risk. This has led some to call for abolishing COMPAS. But many others have argued that algorithms should instead be given access to a defendant's race, which, perhaps counterintuitively, is likely to improve outcomes. This approach can involve either establishing race-sensitive risk thresholds, or distinct racial ‘tracks’. Is there a moral difference between these two approaches? We first consider Deborah Hellman's view that the use of distinct racial tracks (but not distinct thresholds) does not constitute disparate treatment, since its effects on individuals are indirect and it does not rely on a racial generalization. We argue that this is mistaken: the use of different racial tracks seems both to have direct effects on individuals and to rely on a racial generalization. We then offer an alternative understanding of the distinction between these two approaches—namely, that the use of different cut points is to the counterfactual comparative disadvantage, ex ante, of all white defendants, while the use of different racial tracks can in principle be to the advantage of all groups, though some defendants in both groups will fare worse. Does this mean that the use of cut points is impermissible? Ultimately, we argue, while there are reasons to be skeptical of the use of distinct cut points, it is an open question whether these reasons suffice to make a difference to their moral permissibility.
  2. While the social and ethical risks of place-based algorithmic patrol management (PAPM) have been widely discussed, little guidance has been provided to police departments, community advocates, or developers of PAPM systems about how to mitigate those risks. The framework outlined in this report aims to fill that gap. This document proposes best practices for the development and deployment of PAPM systems that are ethically informed and empirically grounded. Given that place-based policing is here to stay, it is imperative to provide useful guidance to police departments, community advocates, and developers so that they can address the social risks associated with PAPM. We strive to develop recommendations that are concrete, practical, and forward-looking. Our goal is to translate critiques of PAPM into practical recommendations that guide the ethically sensitive design and use of data-driven policing technologies.
  3. Abstract A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense “opaque”—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of public trust in grounding the legitimacy of criminal justice institutions. We argue that algorithmic opacity threatens the trustworthiness of criminal justice institutions, which in turn threatens their legitimacy. We first offer an account of institutional trustworthiness before showing how opacity threatens to undermine an institution's trustworthiness. We then explore how threats to trustworthiness affect institutional legitimacy. Finally, we offer some policy recommendations to mitigate the threat to trustworthiness posed by the opacity problem. 
  4. Predictive policing, the practice of using algorithmic systems to forecast crime, is heralded by police departments as the new frontier of crime analysis. At the same time, it is opposed by civil rights groups, academics, and media outlets for being ‘biased’ and therefore discriminatory against communities of color. This paper argues that the prevailing focus on racial bias has overshadowed two normative factors that are essential to a full assessment of the moral permissibility of predictive policing: fairness in the social distribution of the benefits and burdens of policing, as well as the distinctive role of consent in determining fair distribution. When these normative factors are given their due attention, several requirements emerge for the fair implementation of predictive policing. Among these requirements are that police departments inform and solicit buy-in from affected communities about strategic decision-making, and that departments favor non-enforcement-oriented interventions.
  5. This paper synthesizes scholarship from several academic disciplines to identify and analyze five major ethical challenges facing data-driven policing. Because the term “data-driven policing” encompasses a broad swath of technologies, we first outline several data-driven policing initiatives currently in use in the United States. We then lay out the five ethical challenges. Certain of these challenges have received considerable attention already, while others have been largely overlooked. In many cases, the challenges have been articulated in the context of related discussions, but their distinctively ethical dimensions have not been explored in much detail. Our goal here is to articulate and clarify these ethical challenges, while also highlighting areas where these issues intersect and overlap. Ultimately, responsible data-driven policing requires collaboration between communities, academics, technology developers, police departments, and policy makers to confront and address these challenges. And as we will see, it may also require critically reexamining the role and value of police in society. 
  6. Machine learning has become a popular tool in a variety of criminal justice applications, including sentencing and policing. Media coverage has brought attention to the possibility of predictive policing systems causing disparate impacts and exacerbating social injustices. However, there is little academic research on the importance of fairness in machine learning applications in policing. Although prior research has shown that machine learning models can handle some tasks efficiently, they are susceptible to replicating the systemic bias of previous human decision-makers. While there is much research on fair machine learning in general, there is a need to investigate fair machine learning techniques as they pertain to predictive policing. We therefore evaluate the existing publications in the field of fairness in machine learning and predictive policing to arrive at a set of standards for fair predictive policing. We also review evaluations of ML applications in the area of criminal justice and potential techniques to improve these technologies going forward. We urge that the growing literature on fairness in ML be brought into conversation with the legal and social science concerns being raised about predictive policing. Lastly, in any area, including predictive policing, the pros and cons of the technology need to be evaluated holistically to determine whether and how the technology should be used in policing.