To manage user-generated harmful video content, YouTube relies on AI algorithms (e.g., machine learning) in content moderation and follows a retributive justice logic to punish YouTubers deemed to have violated its policies through demonetization, a penalty that limits or deprives them of advertisements (ads), reducing their future ad income. Moderation research is burgeoning in CSCW, but relatively little attention has been paid to the socioeconomic implications of YouTube's algorithmic moderation. Drawing on the lens of algorithmic labor, we describe how algorithmic moderation shapes YouTubers' labor conditions through algorithmic opacity and precarity. YouTubers coped with these challenges by sharing and applying practical knowledge they learned about moderation algorithms. By analyzing video content creation as algorithmic labor, we unpack the socioeconomic implications of algorithmic moderation and point to necessary post-punishment support as a form of restorative justice. Lastly, we put forward design considerations for algorithmic moderation systems.
                            Public Trust, Institutional Legitimacy, and the Use of Algorithms in Criminal Justice
Abstract: A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense “opaque”—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of public trust in grounding the legitimacy of criminal justice institutions. We argue that algorithmic opacity threatens the trustworthiness of criminal justice institutions, which in turn threatens their legitimacy. We first offer an account of institutional trustworthiness before showing how opacity threatens to undermine an institution's trustworthiness. We then explore how threats to trustworthiness affect institutional legitimacy. Finally, we offer some policy recommendations to mitigate the threat to trustworthiness posed by the opacity problem.
- Award ID(s): 1917712
- PAR ID: 10341529
- Date Published:
- Journal Name: Public Affairs Quarterly
- Volume: 36
- Issue: 2
- ISSN: 0887-0373
- Page Range / eLocation ID: 136 to 162
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- To date, most criminal justice research on COVID-19 has examined the rapid spread within prisons. We shift the focus to reentry via in-depth interviews with formerly incarcerated individuals in central Ohio, specifically focusing on how criminal justice contact affected the pandemic experience. In doing so, we use the experience of the pandemic to build upon criminological theories regarding surveillance, including both classic theories on surveillance during incarceration as well as more recent scholarship on community surveillance, carceral citizenship, and institutional avoidance. Three findings emerged. First, participants felt that the total institution of prison “prepared” them for similar experiences such as pandemic-related isolation. Second, shifts in community supervision formatting, such as those forced by the pandemic, lessened the coercive nature of community supervision, expressed by participants as an increase in autonomy. Third, establishment of institutional connections while incarcerated alleviated institutional avoidance resulting from hyper-surveillance, specifically in the domain of healthcare, which is critical when a public health crisis strikes. While the COVID-19 pandemic affected all, this article highlights how theories of surveillance inform unique aspects of the pandemic for formerly incarcerated individuals, while providing pathways forward for reducing the impact of surveillance.
- Objectives: Traditional police procedural justice theory argues that citizen perceptions of fair treatment by police officers increase police legitimacy, which leads to an increased likelihood of legal compliance. Recently, Nagin and Telep (2017) criticized these causal assumptions, arguing that prior literature has not definitively ruled out reverse causality—that is, legitimacy influences perceptions of fairness and/or compliance influences perceptions of both fairness and legitimacy. The goal of the present paper was to explore this critique using experimental and correlational methodologies within a longitudinal framework. Methods: Adolescents completed a vignette-based experiment that manipulated two aspects of officer behavior linked to perceptions of fairness: voice and impartiality. After reading the vignette, participants rated the fairness and legitimacy of the officer within the situation. At three time points prior to the experiment (1, 17, and 31 months), participants completed surveys measuring their global perceptions of police legitimacy and self-reported delinquency. Data were analyzed to assess the extent to which global legitimacy and delinquency predicted responses to the vignette net of experimental manipulations and controls. Results: Both experimental manipulations led to higher perceptions of situational procedural justice and officer legitimacy. Prior perceptions of police legitimacy did not predict judgments of situational procedural justice; however, in some cases, prior engagement in delinquency was negatively related to situational procedural justice. Prior perceptions of legitimacy were positively associated with situational perceptions of legitimacy regardless of experimental manipulations. Conclusions: This study showed mixed support for the case of reverse causality among police procedural justice, legitimacy, and compliance.
- COVID-19 is challenging many societal institutions, including our criminal justice systems. Some have proposed or enacted (e.g., the State of New Jersey) reductions in the jail and/or prison populations. We present a mathematical model to explore the epidemiologic impact of such interventions in jails and contrast them with the consequences of maintaining unaltered practices. We consider infection risk and likely in-custody deaths, and estimate how within-jail dynamics lead to spill-over risks, not only affecting incarcerated people but increasing exposure, infection, and death rates for both corrections officers and the broader community beyond the justice system. We show that, given a typical jail-community dynamic, operating in a business-as-usual way results in substantial, rapid, and ongoing loss of life. Our results are consistent with the hypothesis that large-scale reductions in arrests and accelerated releases are likely to save the lives of incarcerated people, jail staff, and the wider community. (An illustrative sketch of this kind of jail-community spillover dynamic appears after this list.)
- Researchers and journalists have repeatedly shown that algorithms commonly used in domains such as credit, employment, healthcare, or criminal justice can have discriminatory effects. Some organizations have tried to mitigate these effects by simply removing sensitive features from an algorithm's inputs. In this paper, we explore the limits of this approach using a unique opportunity. In 2019, Facebook agreed to settle a lawsuit by removing certain sensitive features from the inputs of an algorithm that identifies users similar to those provided by an advertiser for ad targeting, making both the modified and unmodified versions of the algorithm available to advertisers. We develop methodologies to measure biases along the lines of gender, age, and race in the audiences created by this modified algorithm, relative to the unmodified one. Our results provide experimental proof that merely removing demographic features from a real-world algorithmic system's inputs can fail to prevent biased outputs. As a result, organizations using algorithms to help mediate access to important life opportunities should consider other approaches to mitigating discriminatory effects. (A sketch of this kind of audience-composition comparison appears after this list.)
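As a rough illustration of the jail-community spillover dynamic described in the jail-epidemiology abstract above, the sketch below couples two SIR-style populations through daily arrests and releases. It is not the authors' model: the compartment structure is simplified and every value (transmission, recovery, arrest, and release rates; population sizes) is an assumption chosen only to make the qualitative effect of reducing arrests visible.

```python
# Illustrative sketch only (not the model from the abstract above): two
# SIR-style populations, a community and a jail, coupled by daily arrests
# and releases so infection in either population spills over into the other.
# Every number below is an assumed, made-up parameter.

def simulate(arrest_rate, days=180, dt=0.1):
    """Euler-integrate the coupled system; return cumulative infections."""
    S, I, R = 1_000_000.0, 10.0, 0.0      # community compartments
    Sj, Ij, Rj = 2_000.0, 0.0, 0.0        # jail compartments
    beta_c, beta_j = 0.25, 0.75           # crowding makes jail transmission higher
    gamma = 0.10                          # recovery rate (~10-day infectious period)
    release_rate = 0.05                   # fraction of jail population released per day
    cum_c = cum_j = 0.0                   # cumulative infections in each population

    for _ in range(int(days / dt)):
        N, Nj = S + I + R, Sj + Ij + Rj
        new_c = beta_c * S * I / N                      # new community infections/day
        new_j = beta_j * Sj * Ij / Nj if Nj > 0 else 0  # new jail infections/day
        cum_c += new_c * dt
        cum_j += new_j * dt
        # Arrests move people from community to jail; releases move them back.
        dS  = (-new_c - arrest_rate * S + release_rate * Sj) * dt
        dI  = (new_c - gamma * I - arrest_rate * I + release_rate * Ij) * dt
        dR  = (gamma * I - arrest_rate * R + release_rate * Rj) * dt
        dSj = (-new_j + arrest_rate * S - release_rate * Sj) * dt
        dIj = (new_j - gamma * Ij + arrest_rate * I - release_rate * Ij) * dt
        dRj = (gamma * Ij + arrest_rate * R - release_rate * Rj) * dt
        S, I, R = S + dS, I + dI, R + dR
        Sj, Ij, Rj = Sj + dSj, Ij + dIj, Rj + dRj
    return cum_c, cum_j

if __name__ == "__main__":
    # Compare business-as-usual arrests against a large reduction in arrests.
    for label, rate in [("business as usual", 1e-4), ("arrests cut by 75%", 2.5e-5)]:
        community, jail = simulate(rate)
        print(f"{label}: ~{community:,.0f} community and ~{jail:,.0f} jail infections")
```

In this toy setup, cutting the arrest rate shrinks the jail population and the number of people cycled through the high-transmission jail environment, which is the qualitative mechanism behind the abstract's claim; the actual paper's model and parameterization differ.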
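Similarly, as a rough sketch of the audience-composition comparison the last abstract describes (not the paper's actual measurement pipeline), the snippet below compares the share of each demographic group in audiences produced by a modified and an unmodified version of an audience-selection algorithm. The helper functions, record fields, and toy audiences are all hypothetical; a real audit would need much larger samples and uncertainty estimates.

```python
# Hypothetical sketch: compare the demographic make-up of two audiences,
# one built by an algorithm with sensitive inputs removed ("modified") and
# one built by the original algorithm ("unmodified"). Field names and data
# are invented for illustration.

from collections import Counter

def group_shares(audience, attribute):
    """Fraction of the audience falling into each value of a demographic attribute."""
    counts = Counter(person[attribute] for person in audience)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def skew(modified, unmodified, attribute):
    """Per-group difference in audience share: modified minus unmodified."""
    mod, unmod = group_shares(modified, attribute), group_shares(unmodified, attribute)
    groups = set(mod) | set(unmod)
    return {g: mod.get(g, 0.0) - unmod.get(g, 0.0) for g in groups}

if __name__ == "__main__":
    # Toy audiences: removing the sensitive input need not remove the skew,
    # because correlated features can still drive who gets selected.
    unmodified_audience = [{"gender": "woman"}] * 30 + [{"gender": "man"}] * 70
    modified_audience = [{"gender": "woman"}] * 34 + [{"gender": "man"}] * 66
    print(skew(modified_audience, unmodified_audience, "gender"))
```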