Predictive policing, the practice of using algorithmic systems to forecast crime, is heralded by police departments as the new frontier of crime analysis. At the same time, it is opposed by civil rights groups, academics, and media outlets for being ‘biased’ and therefore discriminatory against communities of color. This paper argues that the prevailing focus on racial bias has overshadowed two normative factors that are essential to a full assessment of the moral permissibility of predictive policing: fairness in the social distribution of the benefits and burdens of policing, and the distinctive role of consent in determining fair distribution. When these normative factors are given their due attention, several requirements emerge for the fair implementation of predictive policing. Among these requirements are that police departments inform and solicit buy-in from affected communities about strategic decision-making and that departments favor non-enforcement-oriented interventions.
Does Predictive Policing Lead to Biased Arrests? Results From a Randomized Controlled Trial
Racial bias in predictive policing algorithms has been the focus of a number of recent news articles, statements of concern by several national organizations (e.g., the ACLU and NAACP), and simulation-based research. There is reasonable concern that predictive algorithms encourage directed police patrols to target minority communities with discriminatory consequences for minority individuals. However, to date there have been no empirical studies on the bias of predictive algorithms used for police patrol. Here, we test for such biases using arrest data from the Los Angeles predictive policing experiments. We find that there were no significant differences in the proportion of arrests by racial-ethnic group between control and treatment conditions. We find that the total numbers of arrests at the division level declined or remained unchanged during predictive policing deployments. Arrests were numerically higher at the algorithmically predicted locations. When adjusted for the higher overall crime rate at algorithmically predicted locations, however, arrests were lower or unchanged.
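The control-versus-treatment comparison described above can be illustrated with a standard two-proportion z-test. The counts below are hypothetical, chosen only for illustration, and the pooled-variance test is one common choice; the study itself may use different methods and data.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided pooled z-test for equality of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: arrests of one racial-ethnic group (x) out of all
# arrests (n) under control vs. treatment (algorithmic patrol) conditions
z, p = two_proportion_ztest(x1=120, n1=400, x2=135, n2=430)
```

A non-significant result here (|z| below 1.96, p above 0.05) corresponds to the abstract's finding of no significant difference in arrest proportions between conditions.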
- Award ID(s): 1737770
- PAR ID: 10058706
- Journal Name: Statistics and Public Policy
- ISSN: 2330-443X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Large-scale policing data is vital for detecting inequity in police behavior and policing algorithms. However, one important type of policing data remains largely unavailable within the United States: aggregated police deployment data capturing which neighborhoods have the heaviest police presences. Here we show that disparities in police deployment levels can be quantified by detecting police vehicles in dashcam images of public street scenes. Using a dataset of 24,803,854 dashcam images from rideshare drivers in New York City, we find that police vehicles can be detected with high accuracy (average precision 0.82, AUC 0.99) and identify 233,596 images which contain police vehicles. There is substantial inequality across neighborhoods in police vehicle deployment levels. The neighborhood with the highest deployment levels has almost 20 times higher levels than the neighborhood with the lowest. Two strikingly different types of areas experience high police vehicle deployments: 1) dense, higher-income, commercial areas and 2) lower-income neighborhoods with higher proportions of Black and Hispanic residents. We discuss the implications of these disparities for policing equity and for algorithms trained on policing data.
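The disparity measure described above can be sketched as detections per sampled image, aggregated by neighborhood. The neighborhood names and counts below are synthetic, chosen only to illustrate a roughly 20-fold gap of the kind the abstract reports; they are not the study's data.

```python
# Synthetic per-neighborhood counts: (dashcam images sampled,
# images in which a police vehicle was detected)
neighborhoods = {
    "Neighborhood A": (500_000, 9_000),
    "Neighborhood B": (480_000, 1_200),
    "Neighborhood C": (520_000, 450),
}

# Deployment proxy: detections per sampled image, per neighborhood
rates = {name: hits / total for name, (total, hits) in neighborhoods.items()}

# Disparity between the most- and least-patrolled neighborhoods
disparity = max(rates.values()) / min(rates.values())
```

Normalizing by images sampled, rather than comparing raw detection counts, matters because rideshare coverage itself varies by neighborhood.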
Jackson, Jonathan (Ed.)
Explanations for police misconduct often center on a narrow notion of “problem officers,” the proverbial “bad apples.” Such an individualistic approach not only ignores the larger systemic problems of policing but also takes for granted the group-based nature of police work. Nearly all of police work is group-based, and officers’ formal and informal networks can impact behavior, including misconduct. In extreme cases, groups of officers (what we refer to as “crews”) have even been observed to coordinate their abusive and even criminal behaviors. This study adopts a social network and machine learning approach to empirically investigate the presence and impact of officer crews engaging in alleged misconduct in a major U.S. city: Chicago, IL. Using data on Chicago police officers between 1971 and 2018, we identify potential crews and analyze their impact on alleged misconduct and violence. Results detected approximately 160 possible crews, comprised of less than 4% of all Chicago police officers. Officers in these crews were involved in an outsized amount of alleged and actual misconduct, accounting for approximately 25% of all use of force complaints, city payouts for civil and criminal litigations, and police-involved shootings. The detected crews also contributed to racial disparities in arrests and civilian complaints, generating nearly 18% of all complaints filed by Black Chicagoans and 14% of complaints filed by Hispanic Chicagoans.
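One simple way to picture the network side of such an approach: treat officers who are repeatedly co-named in complaints as linked, and candidate “crews” as connected clusters of those links. The records, threshold, and clustering rule below are hypothetical illustrations of the general idea, not the study's actual method, which combines network analysis with machine learning.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical complaint records: each lists the officers named together
complaints = [
    {"o1", "o2"}, {"o1", "o2", "o3"}, {"o2", "o3"},
    {"o4", "o5"}, {"o4", "o5"}, {"o1", "o2"},
    {"o6", "o7"},  # named together only once: below threshold
]

# Count how often each officer pair is co-named across complaints
pair_counts = defaultdict(int)
for officers in complaints:
    for a, b in combinations(sorted(officers), 2):
        pair_counts[(a, b)] += 1

# Keep only repeated co-occurrences, then take connected components
# of the thresholded graph as candidate "crews"
MIN_CO_COMPLAINTS = 2
adj = defaultdict(set)
for (a, b), count in pair_counts.items():
    if count >= MIN_CO_COMPLAINTS:
        adj[a].add(b)
        adj[b].add(a)

def components(adj):
    """Depth-first traversal returning the connected clusters."""
    seen, crews = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        crews.append(comp)
    return crews

crews = components(adj)
```

The threshold guards against treating a single shared incident as evidence of a stable group; how to set it, and how to validate candidate clusters, is exactly where the study's machine learning component would come in.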
Integrating artificial intelligence (AI) technologies into law enforcement has become a concern of contemporary politics and public discourse. In this paper, we qualitatively examine the perspectives of AI technologies based on 20 semi-structured interviews of law enforcement professionals in North Carolina. We investigate how integrating AI technologies, such as predictive policing and autonomous vehicle (AV) technology, impacts the relationships between communities and police jurisdictions. The evidence suggests that police officers maintain that AI plays a limited role in policing but believe the technologies will continue to expand, improving public safety and increasing policing capability. Conversely, police officers believe that AI will not necessarily increase trust between police and the community, citing ethical concerns and the potential to infringe on civil rights. It is thus argued that the trends toward integrating AI technologies into law enforcement are not without risk. Policymaking guided by public consensus and collaborative discussion with law enforcement professionals must aim to promote accountability through the application of responsible design of AI in policing with an end state of providing societal benefits and mitigating harm to the populace. Society has a moral obligation to mitigate the detrimental consequences of fully integrating AI technologies into law enforcement.
Predictive policing systems are increasingly used to determine how to allocate police across a city in order to best prevent crime. Discovered crime data (e.g., arrest counts) are used to help update the model, and the process is repeated. Such systems have been shown susceptible to runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the true crime rate. In response, we develop a mathematical model of predictive policing that proves why this feedback loop occurs, show empirically that this model exhibits such problems, and demonstrate how to change the inputs to a predictive policing system (in a black-box manner) so the runaway feedback loop does not occur, allowing the true crime rate to be learned. Our results are quantitative: we can establish a link (in our model) between the degree to which runaway feedback causes problems and the disparity in crime rates between areas. Moreover, we can also demonstrate the way in which reported incidents of crime (those reported by residents) and discovered incidents of crime (i.e., those directly observed by police officers dispatched as a result of the predictive policing algorithm) interact: in brief, while reported incidents can attenuate the degree of runaway feedback, they cannot entirely remove it without the interventions we suggest.
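A minimal sketch of how such a runaway loop can arise when discovered crime alone drives allocation. The two-area setup, rates, and greedy allocation rule are hypothetical illustrations of the feedback mechanism, not the paper's actual model.

```python
import random

random.seed(0)

# Two hypothetical areas with different true rates at which a patrolling
# officer discovers a crime on a given day
true_rates = [0.4, 0.3]
discovered = [1, 1]  # prior discovered-crime counts fed to the allocator

def patrol_one_day(discovered):
    # Greedy allocation: send the single patrol wherever more crime has
    # been discovered so far; ties go to area 0
    area = 0 if discovered[0] >= discovered[1] else 1
    if random.random() < true_rates[area]:
        discovered[area] += 1  # today's discovery feeds tomorrow's decision

for _ in range(1000):
    patrol_one_day(discovered)

share_area0 = discovered[0] / sum(discovered)
```

Because area 0 wins the initial tie, only its discovered count can ever grow, so the patrol never returns to area 1 no matter what the true rates are; the allocator's data ends up reflecting its own deployment decisions rather than crime. Adjusting the inputs, as the abstract describes, is what breaks this lock-in.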