

Title: Towards Substantive Conceptions of Algorithmic Fairness: Normative Guidance from Equal Opportunity Doctrines
In this work we use Equal Opportunity (EO) doctrines from political philosophy to make explicit the normative judgements embedded in different conceptions of algorithmic fairness. We contrast formal EO approaches that narrowly focus on fair contests at discrete decision points with substantive EO doctrines that look at people's fair life chances more holistically over the course of a lifetime. We use this taxonomy to provide a moral interpretation of the impossibility results as the incompatibility between different conceptions of a fair contest (forward-facing versus backward-facing) when people do not have fair life chances. We use this result to motivate substantive conceptions of algorithmic fairness and outline two plausible fair decision procedures based on the luck egalitarian doctrine of EO and on Rawls's principle of fair equality of opportunity.
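The impossibility results referred to above can be illustrated with a small numeric example. The sketch below is the standard demonstration from the fair-ML literature, not an analysis taken from the paper, and all of the counts are invented purely for illustration: two groups with different base rates cannot simultaneously receive equal positive predictive value (a forward-facing notion) and equal false positive rates (a backward-facing notion).

```python
def rates(tp, fp, fn, tn):
    """Return (PPV, FPR, base rate) from one group's confusion matrix."""
    ppv = tp / (tp + fp)                    # P(truly positive | predicted positive)
    fpr = fp / (fp + tn)                    # P(predicted positive | truly negative)
    base = (tp + fn) / (tp + fp + fn + tn)  # prevalence in the group
    return ppv, fpr, base

# Hypothetical groups with different base rates but identical PPV (and TPR).
group_a = dict(tp=40, fp=10, fn=10, tn=40)   # base rate 0.50
group_b = dict(tp=16, fp=4,  fn=4,  tn=76)   # base rate 0.20

for name, cm in [("A", group_a), ("B", group_b)]:
    ppv, fpr, base = rates(**cm)
    print(f"group {name}: base rate {base:.2f}, PPV {ppv:.2f}, FPR {fpr:.2f}")

# PPV is 0.80 in both groups, but FPR is 0.20 for A and only 0.05 for B.
# Equalizing the error rates instead would force the PPVs apart -- the
# incompatibility the abstract interprets in terms of forward-facing versus
# backward-facing conceptions of a fair contest.
```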
Authors:
Award ID(s): 1916505, 1934464, 1922658
Publication Date:
NSF-PAR ID: 10398897
Journal Name: EAAMO '22: Equity and Access in Algorithms, Mechanisms, and Optimization
Page Range or eLocation-ID: 1 to 10
Sponsoring Org: National Science Foundation
More Like this
  1. Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans that are repaid, or criminals that are apprehended). Often in such problems fairness is also a concern. One natural notion of fairness, based on general principles of equality of opportunity, asks that conditional on an individual being a candidate for the resource in question, the probability of actually receiving it is approximately independent of the individual's group. For example, in lending this would mean that equally creditworthy individuals in different racial groups have roughly equal chances of receiving a loan. In policing it would mean that two individuals committing the same crime in different districts would have roughly equal chances of being arrested. In this paper, we formalize this general notion of fairness for allocation problems and investigate its algorithmic consequences. Our main technical results include an efficient learning algorithm that converges to an optimal fair allocation even when the allocator does not know the frequency of candidates (i.e. creditworthy individuals or criminals) in each group. This algorithm operates in a censored feedback model in which only the number of candidates who received the resource in a given allocation can be observed, rather than the true number of candidates in each group. This models the fact that we do not learn the creditworthiness of individuals we do not give loans to, and do not learn about crimes committed if the police presence in a district is low. (A minimal sketch of this fairness notion appears after this list.)
  2. As algorithmic decision-making is increasingly deployed in every walk of life, many researchers have raised concerns about fairness-related bias from such algorithms. But there is little research on harnessing psychometric methods to uncover potential discriminatory bias inside decision-making algorithms. The main goal of this article is to propose a new framework for algorithmic fairness based on differential item functioning (DIF), which has been commonly used to measure item fairness in psychometrics. Our fairness notion, which we call differential algorithmic functioning (DAF), is defined based on three pieces of information: a decision variable, a “fair” variable, and a protected variable such as race or gender. Under the DAF framework, an algorithm can exhibit uniform DAF, nonuniform DAF, or neither (i.e., non-DAF). For detecting DAF, we provide modifications of well-established DIF methods: the Mantel–Haenszel test, logistic regression, and residual-based DIF. We demonstrate our framework through a real dataset concerning decision-making algorithms for grade retention in K–12 education in the United States. (A sketch of the logistic-regression variant appears after this list.)

  3. Recent interest in codifying fairness in Automated Decision Systems (ADS) has resulted in a wide range of formulations of what it means for an algorithm to be “fair.” Most of these propositions are inspired by, but inadequately grounded in, scholarship from political philosophy. This comic aims to correct that deficit. We begin by setting up a working definition of an 'Automated Decision System' (ADS) and explaining 'bias' in outputs of an ADS. We then critically evaluate different definitions of fairness as Equality of Opportunity (EOP) by contrasting their conception in political philosophy (such as Rawls's fair EOP and formal EOP) with the proposed codification in Fair-ML (such as statistical parity, equality of odds and accuracy) to provide a clearer lens with which to view existing results and to identify future research directions. We use this framing to reinterpret the impossibility results as the incompatibility between different EOP doctrines and demonstrate how political philosophy can provide normative guidance as to which notion of fairness is applicable in which context. We conclude by highlighting justice considerations that the fair-ML literature currently overlooks or underemphasizes, such as Rawls's broader theory of justice, which supplements his EOP principle with a principle guaranteeing equal rights and liberties to all citizens in a free and democratic society.
  4. The use of automated data-driven tools for decision-making has gained popularity in recent years. At the same time, reported cases of algorithmic bias and discrimination have increased as well, which in turn has led to extensive study of algorithmic fairness. Numerous notions of fairness have been proposed, designed to capture different scenarios. These measures typically refer to a "protected group" in the data, defined using values of some sensitive attributes. Confirming whether a fairness definition holds for a given group is a simple task, but detecting groups that are treated unfairly by the algorithm may be computationally prohibitive, as the number of possible groups is combinatorial. We present a method for detecting such groups efficiently for various fairness definitions. Our solution is implemented in DENOUNCER, an interactive system that allows users to explore different fairness measures of a (trained) classifier on given test data. We propose to demonstrate the usefulness of DENOUNCER using real-life data and illustrate the effectiveness of our method. (A brute-force sketch of this detection problem appears after this list.)
  5. Algorithmic fairness research has traditionally been linked to the disciplines of philosophy, ethics, and economics, where notions of fairness are prescriptive and seek objectivity. Increasingly, however, scholars are turning to the study of what different people perceive to be fair, and how these perceptions can or should help to shape the design of machine learning, particularly in the policy realm. The present work experimentally explores five novel research questions at the intersection of the "Who," "What," and "How" of fairness perceptions. Specifically, we present the results of a multi-factor conjoint analysis study that quantifies the effects of the specific context in which a question is asked, the framing of the given question, and who is answering it. Our results broadly suggest that the "Who" and "What," at least, matter in ways that 1) are not easily explained by any one theoretical perspective, and 2) have critical implications for how perceptions of fairness should be measured and/or integrated into algorithmic decision-making systems.
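For item 1, a minimal sketch of the fairness notion it describes is shown below: the probability of receiving the resource, conditional on being a candidate, should be roughly independent of group. This is not the paper's learning algorithm (which operates under censored feedback), and the group names, counts, and the assumption that allocated units go to candidates first are all invented for illustration.

```python
def receipt_probability(candidates, allocation):
    """P(receives the resource | is a candidate), per group.

    Assumes allocated units within a group go to candidates first, so a
    candidate's chance of receipt is min(allocation, candidates) / candidates.
    """
    return {g: min(allocation[g], candidates[g]) / candidates[g] for g in candidates}

def approximately_fair(probs, tolerance=0.05):
    """True if candidate receipt probabilities agree across groups to within `tolerance`."""
    return max(probs.values()) - min(probs.values()) <= tolerance

# Hypothetical numbers: creditworthy applicants and loans granted per district.
candidates = {"district_1": 120, "district_2": 60}
allocation = {"district_1": 90,  "district_2": 45}

probs = receipt_probability(candidates, allocation)
print(probs)                       # {'district_1': 0.75, 'district_2': 0.75}
print(approximately_fair(probs))   # True
```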
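For item 2, the logistic-regression DIF procedure that the DAF framework adapts can be sketched roughly as follows. This is a generic illustration on simulated data, not the authors' implementation: the column names, the simulated effect sizes, and the use of likelihood-ratio tests here are assumptions for the sketch.

```python
# Logistic-regression DIF-style check: compare nested models with and without
# the protected variable and its interaction with the "fair" matching variable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000
fair = rng.normal(size=n)                      # the "fair" matching variable
group = rng.integers(0, 2, size=n)             # protected variable (0/1)
# Simulate a decision that depends on the fair variable plus a uniform
# group shift, i.e. built-in uniform DAF.
logit = 1.2 * fair - 0.8 * group
decision = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"decision": decision, "fair": fair, "group": group})

base    = smf.logit("decision ~ fair", df).fit(disp=0)
uniform = smf.logit("decision ~ fair + group", df).fit(disp=0)
nonunif = smf.logit("decision ~ fair + group + fair:group", df).fit(disp=0)

def lr_test(reduced, full, dof):
    """Likelihood-ratio test statistic and p-value for nested logit models."""
    stat = 2 * (full.llf - reduced.llf)
    return stat, chi2.sf(stat, dof)

print("uniform DAF test:   ", lr_test(base, uniform, 1))     # expect significant
print("nonuniform DAF test:", lr_test(uniform, nonunif, 1))  # expect not significant
```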
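For item 4, the underlying detection problem can be written as a brute-force scan like the one below. DENOUNCER's actual method is designed to avoid exactly this combinatorial cost; the attribute names, the demographic-parity-style gap measure, and the thresholds are hypothetical.

```python
# Naive subgroup audit: enumerate groups defined by conjunctions of sensitive
# attribute values and flag those whose positive-prediction rate falls far
# below the overall rate.
from itertools import combinations, product
import pandas as pd

def flag_unfair_groups(df, sensitive_cols, pred_col, gap=0.1, min_size=30):
    overall = df[pred_col].mean()
    flagged = []
    # Every non-empty subset of sensitive attributes ...
    for r in range(1, len(sensitive_cols) + 1):
        for cols in combinations(sensitive_cols, r):
            # ... and every combination of their observed values.
            for values in product(*(df[c].unique() for c in cols)):
                mask = pd.Series(True, index=df.index)
                for c, v in zip(cols, values):
                    mask &= df[c] == v
                sub = df[mask]
                if len(sub) >= min_size and overall - sub[pred_col].mean() > gap:
                    flagged.append((dict(zip(cols, values)), sub[pred_col].mean()))
    return flagged

# Usage (hypothetical columns):
#   flag_unfair_groups(test_df, ["race", "gender", "age_band"], "prediction")
# The number of candidate groups grows combinatorially with the attributes,
# which is why a naive scan like this becomes prohibitive on real data.
```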