Title: Uncertainty and the Social Planner’s Problem: Why Sample Complexity Matters
Welfare measures overall utility across a population, whereas malfare measures overall disutility; the social planner's problem can be cast either as maximizing the former or minimizing the latter. We show novel bounds on the expectations and tail probabilities of estimators of welfare, malfare, and regret of per-group (dis)utility values, where estimates are made from a finite sample drawn from each group. In particular, we consider estimating these quantities for individual functions (e.g., allocations or classifiers) with standard probabilistic bounds, and optimizing and bounding generalization error over hypothesis classes (i.e., quantifying overfitting) using Rademacher averages. We then study algorithmic fairness through the lens of sample complexity and find that, because marginalized and minority groups are often understudied and fewer data are therefore available, the social planner is more likely to overfit to these groups; even models that appear fair in training can thus be systematically biased against them. We argue that this effect can be mitigated by ensuring a sufficient sample size for each group, and our sample complexity analysis characterizes these sizes. Motivated by these conclusions, we present progressive sampling algorithms that efficiently optimize various fairness objectives.
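To make the sample-complexity concern concrete, here is a minimal sketch (all names and numbers are illustrative; the power-mean form of welfare/malfare and the Hoeffding-style interval are assumptions for the sketch, not the paper's exact construction). It estimates per-group utilities from finite samples and compares the welfare of the point estimates with a pessimistic welfare computed from lower confidence bounds, so the group with less data contributes the most uncertainty:

```python
import math

def power_mean(values, p, weights=None):
    """Weighted power mean M_p(u) = (sum_i w_i * u_i^p)^(1/p).

    For p <= 1 this behaves as a welfare function (emphasizing
    worse-off groups as p decreases); for p >= 1 it behaves as a
    malfare function. p = 0 is the geometric mean, taken as a limit.
    """
    n = len(values)
    w = weights if weights is not None else [1.0 / n] * n
    if p == 0:
        return math.exp(sum(wi * math.log(v) for wi, v in zip(w, values)))
    return sum(wi * v ** p for wi, v in zip(w, values)) ** (1.0 / p)

def hoeffding_radius(n, delta, value_range=1.0):
    """Two-sided Hoeffding confidence radius for the mean of n i.i.d.
    samples bounded in an interval of width value_range."""
    return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Per-group empirical mean utilities (assumed in [0, 1]) and sample
# sizes: the smaller sample gets the wider interval, so a group with
# little data can look fine in training while its true utility is
# poorly pinned down.
means, sizes = [0.81, 0.74], [5000, 200]
lows = [m - hoeffding_radius(n, delta=0.05) for m, n in zip(means, sizes)]
print(power_mean(means, p=-2), power_mean(lows, p=-2))  # point vs. pessimistic welfare
```

The gap between the two welfare values shrinks as the smaller group's sample grows, which is the intuition behind sizing samples per group and behind progressive sampling: collect more data until the bounds are tight enough to certify the objective.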
Award ID(s):
1813444
PAR ID:
10345134
Author(s) / Creator(s):
Date Published:
Journal Name:
FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency
Page Range / eLocation ID:
2004 to 2015
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Statistical parity metrics have been widely studied and endorsed in the AI community as a means of achieving fairness, but they suffer from at least two weaknesses. They disregard the actual welfare consequences of decisions and may therefore fail to achieve the kind of fairness that is desired for disadvantaged groups. In addition, they are often incompatible with each other, and there is no convincing justification for selecting one rather than another. This paper explores whether a broader conception of social justice, based on optimizing a social welfare function (SWF), can be useful for assessing various definitions of parity. We focus on the well-known alpha fairness SWF, which has been defended by axiomatic and bargaining arguments over a period of 70 years. We analyze the optimal solution and show that it can justify demographic parity or equalized odds under certain conditions, but frequently requires a departure from these types of parity. In addition, we find that predictive rate parity is of limited usefulness. These results suggest that optimization theory can shed light on the intensely discussed question of how to achieve group fairness in AI. 
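The alpha-fairness SWF discussed above has a standard closed form, sketched below (the utility profiles are made up for illustration):

```python
import math

def alpha_fairness(utilities, alpha):
    """Alpha-fairness social welfare function.

    alpha = 0 recovers utilitarian welfare (sum of utilities),
    alpha = 1 recovers proportional fairness (sum of logs, the Nash
    bargaining objective), and alpha -> infinity approaches max-min
    (Rawlsian) fairness. Utilities must be positive.
    """
    if alpha == 1:
        return sum(math.log(u) for u in utilities)
    return sum(u ** (1 - alpha) / (1 - alpha) for u in utilities)

# Larger alpha puts more weight on the worse-off group, so the
# ranking of two utility profiles can flip as alpha increases.
even, skewed = [0.5, 0.5], [0.9, 0.2]
for a in (0.0, 1.0, 2.0):
    print(a, alpha_fairness(even, a), alpha_fairness(skewed, a))
```

At alpha = 0 the skewed profile wins (higher total), but for alpha >= 1 the even profile wins, which is the mechanism by which the optimal solution can coincide with a parity notion in some regimes and depart from it in others.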
  2. Recent works extend classification group fairness measures to sequential decision processes such as reinforcement learning (RL) by measuring fairness as the difference in decision-maker utility (e.g., accuracy) across groups. This approach suffers when decision-maker utility is not perfectly aligned with group utility, such as in repeat loan applications, where a false positive (loan default) impacts the groups (applicants) and the decision-maker (lender) by different magnitudes. Some works remedy this by measuring fairness in terms of group utility, typically referred to as "qualification", but few works offer solutions that yield equal group qualification. Those that do are prone to violating the "no-harm" principle, whereby one or more groups' qualifications are lowered in order to achieve equality. In this work, we characterize this problem space as having three implicit objectives: maximizing decision-maker utility, maximizing group qualification, and minimizing the difference in qualification between groups. We provide an RL policy-learning technique that optimizes for these objectives directly by constructing a multi-objective reward function that encodes them as distinct reward signals. Under suitable parameterizations, our approach is guaranteed to respect the "no-harm" principle.
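One plausible encoding of the three objectives as distinct reward signals is sketched below; the signal definitions, weights, and numbers are invented for illustration and are not the paper's exact construction:

```python
def multi_objective_reward(dm_utility, group_quals):
    """Return the three objectives as distinct reward signals:
    decision-maker utility, mean group qualification, and the negated
    qualification gap (so shrinking the gap raises the signal)."""
    mean_qual = sum(group_quals) / len(group_quals)
    gap = max(group_quals) - min(group_quals)
    return (dm_utility, mean_qual, -gap)

def scalarize(reward_vec, weights=(1.0, 1.0, 1.0)):
    """One common way to consume a vector reward in standard RL
    training: a fixed weighted sum over the signals."""
    return sum(w * r for w, r in zip(weights, reward_vec))

# Lending example: lender utility 0.3 at this step; post-decision
# group qualifications 0.6 and 0.4.
print(scalarize(multi_objective_reward(0.3, [0.6, 0.4])))  # 0.3 + 0.5 - 0.2 = 0.6
```

A "no-harm"-respecting parameterization would further constrain how the scalarization trades qualification losses against decision-maker utility; the sketch only shows the vector-reward plumbing.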
  3. We consider the allocation of scarce societal resources, where a central authority decides which individuals receive which resources under capacity or budget constraints. Several algorithmic fairness criteria have been proposed to guide these procedures, each quantifying a notion of local justice to ensure the allocation is aligned with the principles of the local institution making it. For example, the efficient allocation maximizes overall social welfare, whereas the leximin assignment seeks to help the "neediest first." Although the "price of fairness" (PoF) of leximin has been studied in prior work, we expand on these results by exploiting the structure inherent in real-world scenarios to provide tighter bounds. We further propose a novel criterion, which we term LoINC (leximin over individually normalized costs), that maximizes a different but commonly used notion of local justice: prioritizing those who benefit most from receiving the resources. We derive analogous PoF bounds for LoINC, showing that the price of LoINC is typically much lower than that of leximin. We provide extensive experimental results on both synthetic data and a real-world setting assessing the efficacy of different homelessness interventions. These results show that the empirical PoF tends to be substantially lower than the worst-case bounds would imply, and they allow us to characterize situations where the price of LoINC fairness can be high.
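To illustrate how a LoINC-style normalization can change who is prioritized relative to plain leximin, here is a toy sketch; the normalization used (dividing outcomes by each individual's treated outcome) and all numbers are illustrative stand-ins, not the paper's definition:

```python
from itertools import combinations

def leximin_select(treated, baseline, k, normalize=False):
    """Pick k individuals to serve so the sorted outcome vector is
    leximin-maximal (best worst outcome, ties broken by the next
    worst, and so on). With normalize=True, each outcome is divided
    by that individual's own treated outcome, a stand-in for the
    "individually normalized" idea behind LoINC: it prioritizes those
    whose relative gain from service is largest. Brute force, so only
    sensible for tiny instances."""
    n = len(baseline)

    def outcome_vector(served):
        vec = []
        for i in range(n):
            v = treated[i] if i in served else baseline[i]
            if normalize:
                v /= treated[i]
            vec.append(v)
        return sorted(vec)  # leximin = lexicographic order on sorted vectors

    return max(combinations(range(n), k), key=outcome_vector)

# Person 0 is neediest in absolute terms but gains little from
# service; person 1 gains the most relative to their own potential.
baseline = [0.2, 0.5]
treated = [0.25, 0.95]
print(leximin_select(treated, baseline, k=1))                  # (0,): raw leximin serves the neediest
print(leximin_select(treated, baseline, k=1, normalize=True))  # (1,): normalized variant serves the larger relative gainer
```

The two criteria diverge exactly when absolute need and relative benefit point at different individuals, which is the trade-off the PoF analysis quantifies.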
  4. Price discrimination strategies, which offer different prices to customers based on differences in their valuations, have become common practice. Although such strategies allow sellers to increase their profits, they also raise fairness concerns (e.g., charging higher prices to, or denying access to, protected minorities when their valuations differ from those of the general population). This topic has received extensive attention from media, industry, and regulatory agencies. In this paper, we consider the problem of setting prices for different groups under fairness constraints. We first propose four definitions: fairness in price, demand, consumer surplus, and no-purchase valuation. We prove that satisfying more than one of these fairness constraints is impossible even in simple settings. We then analyze the pricing strategy of a profit-maximizing seller and the impact of imposing fairness on the seller's profit, consumer surplus, and social welfare. Under a linear demand model, we find that imposing a small amount of price fairness increases social welfare, whereas too much price fairness may result in lower welfare than imposing no fairness at all. By contrast, imposing fairness in demand or consumer surplus always decreases social welfare, while no-purchase valuation fairness always increases it. We numerically observe similar patterns under several extensions and for other common demand models. Our results and insights provide a first step toward understanding the impact of imposing fairness in the context of discriminatory pricing. This paper was accepted by Jayashankar Swaminathan, operations management. Funding: A. N. Elmachtoub was supported by the Division of Civil, Mechanical and Manufacturing Innovation [Grants 1763000 and 1944428]. Supplemental Material: The data files and online appendix are available at https://doi.org/10.1287/mnsc.2022.4317 .
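A small numerical sketch of the constrained-pricing setup: two groups with linear demand (made-up parameters), a zero-cost seller, and a price-fairness constraint |p1 - p2| <= delta, solved by grid search. How profit and social welfare move as delta tightens depends on the chosen demand parameters, so treat this as a sandbox rather than a reproduction of the paper's results:

```python
import numpy as np

# Two-group linear demand d_i(p) = a_i - b_i * p with zero marginal
# cost; all parameters are invented for the sketch.
a = np.array([1.0, 0.8])
b = np.array([1.0, 1.0])

def profit(p):
    return float(np.sum(p * np.maximum(a - b * p, 0.0)))

def consumer_surplus(p):
    # Area under each linear demand curve above the posted price.
    q = np.maximum(a - b * p, 0.0)
    return float(np.sum(q ** 2 / (2 * b)))

def best_prices(delta, grid=np.linspace(0.0, 1.0, 201)):
    """Grid-search the seller's profit-maximizing price pair subject
    to the price-fairness constraint |p1 - p2| <= delta."""
    best, best_p = -1.0, None
    for p1 in grid:
        for p2 in grid:
            if abs(p1 - p2) <= delta + 1e-9:  # tolerance for float grid
                pr = profit(np.array([p1, p2]))
                if pr > best:
                    best, best_p = pr, (float(p1), float(p2))
    return best_p, best

for delta in (0.0, 0.05, 1.0):  # exact price parity ... unconstrained
    p, pr = best_prices(delta)
    sw = pr + consumer_surplus(np.array(p))
    print(delta, p, round(pr, 4), round(sw, 4))
```

With these parameters the unconstrained optimum is p = (0.5, 0.4); tightening delta pulls the prices together, trading seller profit against consumer surplus.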
  5. We initiate the study of matching roommates and rooms wherein agents' preferences over other agents and rooms are complementary and represented by Leontief utilities. In this setting, 2n agents must be paired up and assigned to n rooms. Each agent has cardinal valuations over the rooms as well as compatibility values over all other agents. Under Leontief preferences, an agent's utility for a matching is the minimum of these two values. We focus on the tradeoff between maximizing utilitarian social welfare and strategyproofness. Our main result shows that, in stark contrast to the additive case, under binary Leontief utilities there exist strategyproof mechanisms that maximize social welfare. We further devise a strategyproof mechanism that implements such a welfare-maximizing algorithm and is parameterized by the number of agents. Along the way, we highlight several possibility and impossibility results, and give upper and lower bounds on welfare with and without strategyproofness.
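A brute-force sketch of the Leontief matching model on an invented binary instance; the search is exponential, so this only illustrates the utility definition and the utilitarian objective, not the paper's strategyproof mechanisms:

```python
from itertools import permutations

def max_welfare_matching(room_value, compat):
    """Brute-force utilitarian-welfare-maximizing matching of 2n agents
    into n rooms under Leontief preferences, where agent i's utility is
    min(room_value[i][room], compat[i][roommate])."""
    agents = list(range(len(room_value)))
    n = len(agents) // 2

    def pairings(rest):
        # Enumerate perfect matchings (pairings) of the remaining agents.
        if not rest:
            yield []
            return
        first, others = rest[0], rest[1:]
        for k in range(len(others)):
            for tail in pairings(others[:k] + others[k + 1:]):
                yield [(first, others[k])] + tail

    def welfare(pairs, rooms):
        return sum(
            min(room_value[i][r], compat[i][j]) + min(room_value[j][r], compat[j][i])
            for (i, j), r in zip(pairs, rooms)
        )

    return max(
        ((pairs, rooms) for pairs in pairings(agents)
         for rooms in permutations(range(n))),
        key=lambda s: welfare(*s),
    )

# Four agents, two rooms; binary values echo the binary-utilities
# setting of the main result.
room_value = [[1, 0], [1, 1], [0, 1], [1, 1]]
compat = [[0, 1, 0, 1], [1, 0, 1, 1], [0, 1, 0, 1], [1, 1, 1, 0]]
pairs, rooms = max_welfare_matching(room_value, compat)
print(pairs, rooms)  # ([(0, 1), (2, 3)], (0, 1)) attains total welfare 4
```

The min(.) structure is what makes the problem non-additive: a high room valuation is worthless to an agent paired with an incompatible roommate, and vice versa.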