As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations of various understandings of bias, ranging from neutral deviations from a standard to morally problematic instances of injustice due to prejudice, discrimination, and disparate treatment. This terminological confusion impedes efforts to address clear cases of discrimination. In this paper, we examine the promises and challenges of two approaches: disambiguating bias and designing for justice. While both approaches aid in understanding and addressing clear algorithmic harms, we argue that they also risk being leveraged in ways that ultimately deflect accountability from those building and deploying these systems. Applying this analysis to recent examples of generative AI, our argument highlights unseen dangers in current methods of evaluating algorithmic bias and points to ways of redirecting approaches to addressing bias in generative AI, at its early stages, so that they more robustly meet the demands of justice.
An Epistemic Lens on Algorithmic Fairness
In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein makes two key contributions to help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic fairness in order to address a wider range of harms not recognized by existing technical or legal definitions. Second, we argue that the epistemic lens helps to identify the epistemic goals of inquiries into algorithmic fairness. There are two distinct contexts within which we examine algorithmic harm: at times, we seek to understand and describe the world as it is, and, at other times, we seek to build a more just future. The epistemic lens can serve to direct our attention to the epistemic frameworks that shape our interpretations of the world as it is and the ways we envision possible futures. Clarity with respect to which epistemic context is relevant in a given inquiry can further help inform choices among the different ways of measuring and addressing algorithmic harms. We introduce this framework with the goal of initiating new research directions bridging philosophical, legal, and technical approaches to understanding and mitigating algorithmic harms.
- Award ID(s): 2217680
- PAR ID: 10467043
- Publisher / Repository: Third ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23)
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Johnson, Kristin N.; Reyes, Carla L. (Eds.): Privacy regulation has traditionally been the remit of consumer protection, and privacy harm is cast as a contractual harm arising from the interpersonal exchanges between data subjects and data collectors. This frames surveillance of people by companies as primarily a consumer harm. In this article, we argue that the modern economy of personal data is better understood as an extension of the financial system. The data economy intersects with capital markets in ways that may increase systemic and systematic financial risks. We contribute a new regulatory approach to privacy harms: as a source of risk correlated across households, firms and the economy as a whole. We consider adapting tools from macroprudential regulations designed to mitigate financial crises to the market for personal data. We identify both promises and pitfalls to viewing individual privacy through the lens of the financial system.
- Most social media platforms implement content moderation to address interpersonal harms such as harassment. Content moderation relies on offender-centered, punitive approaches, e.g., bans and content removal. We consider an alternative justice framework, restorative justice, which aids victims in healing, supports offenders in repairing the harm, and engages community members in addressing the harm collectively. To assess the utility of restorative justice in addressing online harm, we interviewed 23 users from Overwatch gaming communities, including moderators, victims, and offenders; such communities are particularly susceptible to harm, with nearly three quarters of all online game players suffering from some form of online abuse. We study how the communities currently handle harm cases through the lens of restorative justice and examine their attitudes toward implementing restorative justice processes. Our analysis reveals that cultural, technical, and resource-related obstacles hinder implementation of restorative justice within the existing punitive framework despite online community needs and existing structures to support it. We discuss how current content moderation systems can embed restorative justice goals and practices and overcome these challenges.
- In the past few years, there has been much work on incorporating fairness requirements into the design of algorithmic rankers, with contributions from the data management, algorithms, information retrieval, and recommender systems communities. In this tutorial, we give a systematic overview of this work, offering a broad perspective that connects formalizations and algorithmic approaches across subfields. During the first part of the tutorial, we present a classification framework for fairness-enhancing interventions, along which we will then relate the technical methods. This framework allows us to unify the presentation of mitigation objectives and of algorithmic techniques to help meet those objectives or identify trade-offs. Next, we discuss fairness in score-based ranking and in supervised learning-to-rank. We conclude with recommendations for practitioners, to help them select a fair ranking method based on the requirements of their specific application domain. (A minimal sketch of one score-based re-ranking intervention appears after this list.)
- We propose definitions of fairness in machine learning and artificial intelligence systems that are informed by the framework of intersectionality, a critical lens from the legal, social science, and humanities literature which analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including gender, race, sexual orientation, class, and disability. We show that our criteria behave sensibly for any subset of the set of protected attributes, and we prove economic, privacy, and generalization guarantees. Our theoretical results show that our criteria meaningfully operationalize AI fairness in terms of real-world harms, making the measurements interpretable in a manner analogous to differential privacy. We provide a simple learning algorithm using deterministic gradient methods, which respects our intersectional fairness criteria. The measurement of fairness becomes statistically challenging in the minibatch setting due to data sparsity, which increases rapidly in the number of protected attributes and in the values per protected attribute. To address this, we further develop a practical learning algorithm using stochastic gradient methods which incorporates stochastic estimation of the intersectional fairness criteria on minibatches to scale up to big data. Case studies on census data, the COMPAS criminal recidivism dataset, the HHP hospitalization data, and a loan application dataset from HMDA demonstrate the utility of our methods. (A second sketch after this list illustrates measuring a fairness criterion across intersectional subgroups.)
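The fair-ranking tutorial item above describes score-based fairness interventions only at a high level. The sketch below is one hypothetical instance of that family: greedy re-ranking under a per-prefix representation floor. The single binary protected attribute, the `Item` and `fair_rerank` names, the `min_share` threshold, and the greedy policy are illustrative assumptions, not a method taken from the tutorial itself.

```python
"""Hypothetical sketch: greedy score-based re-ranking under a per-prefix
representation floor. Everything here (Item, fair_rerank, min_share) is
an illustrative assumption, not the tutorial's own method."""
import math
from dataclasses import dataclass
from typing import List


@dataclass
class Item:
    id: str
    score: float
    protected: bool  # True if the item belongs to the protected group


def fair_rerank(items: List[Item], min_share: float = 0.4) -> List[Item]:
    """Build the ranking position by position: take the highest-scoring
    remaining item unless the top-k prefix would then contain fewer than
    floor(min_share * k) protected items; in that case take the best
    remaining protected item instead."""
    prot = sorted((i for i in items if i.protected), key=lambda x: -x.score)
    rest = sorted((i for i in items if not i.protected), key=lambda x: -x.score)
    ranking: List[Item] = []
    n_prot = 0
    while prot or rest:
        k = len(ranking) + 1
        required = math.floor(min_share * k)
        if (prot and n_prot < required) or not rest:
            item = prot.pop(0)   # representation floor binds, or only protected items remain
            n_prot += 1
        elif not prot or rest[0].score >= prot[0].score:
            item = rest.pop(0)   # plain score order is allowed here
        else:
            item = prot.pop(0)
            n_prot += 1
        ranking.append(item)
    return ranking


if __name__ == "__main__":
    pool = [Item("a", 0.95, False), Item("b", 0.90, False), Item("c", 0.85, False),
            Item("d", 0.60, True), Item("e", 0.55, True)]
    # With min_share=0.4 the protected item "d" is promoted above "c"
    # once the top-3 prefix would otherwise fall below the floor.
    for rank, it in enumerate(fair_rerank(pool, min_share=0.4), start=1):
        print(rank, it.id, f"{it.score:.2f}", "protected" if it.protected else "")
```

In this toy pool the re-ranker returns a, b, d, c, e: the representation floor promotes one protected item into the top three while otherwise following score order.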
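The intersectional-fairness item above mentions measurements "interpretable in a manner analogous to differential privacy" but does not reproduce the definitions. As a rough, assumed illustration of the general shape of such a measurement (not the paper's exact criterion), the sketch below compares smoothed positive-prediction rates across all intersectional subgroups and reports the worst-case pairwise log-ratio as an epsilon; the function name, smoothing prior, and data layout are hypothetical.

```python
"""Hypothetical sketch of measuring a fairness criterion across intersectional
subgroups: compare smoothed positive-prediction rates of every subgroup and
report the worst-case pairwise log-ratio as an epsilon (in the spirit of the
differential-privacy analogy). The exact criterion in the paper may differ."""
import itertools
import math
from collections import defaultdict
from typing import Iterable, Tuple


def intersectional_epsilon(y_pred: Iterable[int],
                           groups: Iterable[Tuple[str, ...]],
                           prior: float = 1.0) -> float:
    """y_pred: 0/1 predictions; groups: one tuple of protected-attribute
    values per example, e.g. (gender, race). `prior` is Laplace-style
    smoothing so tiny or empty subgroups do not blow up the log-ratio."""
    counts = defaultdict(lambda: [0, 0])  # subgroup -> [positives, total]
    for yhat, g in zip(y_pred, groups):
        counts[g][0] += int(yhat)
        counts[g][1] += 1
    rates = {g: (pos + prior) / (tot + 2.0 * prior)
             for g, (pos, tot) in counts.items()}
    eps = 0.0
    for (g1, r1), (g2, r2) in itertools.combinations(rates.items(), 2):
        eps = max(eps, abs(math.log(r1) - math.log(r2)))
    return eps  # smaller means more uniform rates across subgroups


if __name__ == "__main__":
    y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = [("F", "B"), ("F", "B"), ("M", "W"), ("M", "W"),
              ("F", "W"), ("M", "B"), ("F", "W"), ("M", "B")]
    print(f"epsilon = {intersectional_epsilon(y_pred, groups):.3f}")
```

The smoothing prior is what makes the statistic usable on minibatches, where many intersectional subgroups are sparse or empty; the paper's own algorithm addresses this with stochastic estimation of its criteria during training.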