- Defining fairness in algorithmic contexts is challenging, particularly when adapting to new domains. Our research introduces a novel method for learning and applying group fairness preferences across different classification domains, without the need for manual fine-tuning. Utilizing concepts from inverse reinforcement learning (IRL), our approach enables the extraction and application of fairness preferences from human experts or established algorithms. We propose the first technique for using IRL to recover and adapt group fairness preferences to new domains, offering a low-touch solution for implementing fair classifiers in settings where expert-established fairness tradeoffs are not yet defined.
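A minimal sketch of the preference-recovery idea described above, under strong simplifying assumptions: the expert is modeled as trading off decision-maker utility against a demographic-parity gap with a single unknown weight, and that weight is recovered by grid search over which value best explains the expert's observed decisions. The function names, the linear tradeoff, and the threshold-policy family are illustrative assumptions, not the paper's algorithm.

```python
import itertools
import numpy as np

def dp_gap(decisions, group):
    # Demographic-parity gap: difference in approval rates between the two groups.
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def induced_policy(lam, scores, group, thresholds):
    # For a fixed fairness weight lam, pick per-group thresholds that maximize
    # the assumed reward: mean score of approved individuals minus lam * parity gap.
    best, best_reward = np.zeros(len(scores), dtype=int), -np.inf
    for t0, t1 in itertools.product(thresholds, thresholds):
        d = np.where(group == 0, scores >= t0, scores >= t1).astype(int)
        if d.sum() == 0:
            continue
        reward = scores[d == 1].mean() - lam * dp_gap(d, group)
        if reward > best_reward:
            best, best_reward = d, reward
    return best

def recover_weight(scores, group, expert_decisions,
                   lambdas=np.linspace(0.0, 2.0, 21),
                   thresholds=np.linspace(0.1, 0.9, 9)):
    # Return the fairness weight whose induced policy agrees most with the expert's decisions.
    agreement = lambda lam: (induced_policy(lam, scores, group, thresholds)
                             == expert_decisions).mean()
    return max(lambdas, key=agreement)
```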
- Recent works extend classification group fairness measures to sequential decision processes such as reinforcement learning (RL) by measuring fairness as the difference in decision-maker utility (e.g., accuracy) of each group. This approach suffers when decision-maker utility is not perfectly aligned with group utility, such as in repeat loan applications where a false positive (loan default) impacts the groups (applicants) and the decision-maker (lender) by different magnitudes. Some works remedy this by measuring fairness in terms of group utility, typically referred to as their "qualification", but few works offer solutions that yield group qualification equality. Those that do are prone to violating the "no-harm" principle, where one or more groups' qualifications are lowered in order to achieve equality. In this work, we characterize this problem space as having three implicit objectives: maximizing decision-maker utility, maximizing group qualification, and minimizing the difference in qualification between groups. We provide an RL policy-learning technique that optimizes for these objectives directly by constructing a multi-objective reward function that encodes them as distinct reward signals. Under suitable parameterizations, our approach is guaranteed to respect the "no-harm" principle.
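A minimal sketch of how the three objectives named above could be encoded as distinct reward signals and scalarized for an RL learner. The linear weighting and the weight values are illustrative assumptions; the paper's no-harm guarantee depends on its own parameterization, which is not reproduced here.

```python
import numpy as np

def multi_objective_reward(dm_utility, qualifications, group, weights=(1.0, 1.0, 1.0)):
    """Combine three reward signals into one scalar for RL training.

    dm_utility:     decision-maker utility at this step (e.g., lender profit).
    qualifications: per-individual group-utility ("qualification") values.
    group:          protected-group label (0 or 1) for each individual.
    weights:        illustrative tradeoff weights (w_dm, w_qual, w_gap).
    """
    w_dm, w_qual, w_gap = weights
    mean_qual = qualifications.mean()                       # maximize group qualification
    gap = abs(qualifications[group == 0].mean()
              - qualifications[group == 1].mean())          # minimize the qualification gap
    return w_dm * dm_utility + w_qual * mean_qual - w_gap * gap
```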
- Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification problems. Prior work has translated these definitions to other machine learning environments, such as unsupervised learning and reinforcement learning, by implementing their closest mathematical equivalent. As a result, there are numerous bespoke interpretations of these definitions. This work aims to unify the shared aspects of these bespoke definitions, and to this end we provide a group fairness framework that generalizes beyond classification problems. We leverage two fairness principles that enable this generalization. First, our framework measures outcomes in terms of utilities, rather than predictions, and does so for both the decision-maker and the individual. Second, our framework can consider counterfactual outcomes, rather than just observed outcomes, thus preventing loopholes where fairness criteria are satisfied through self-fulfilling prophecies. We provide concrete examples of how our utility fairness framework avoids these assumptions and thus naturally integrates with classification, clustering, and reinforcement learning fairness problems. We also show that many of the bespoke interpretations of Demographic Parity and Equal Opportunity fit nicely as special cases of our framework.
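To make the utility-based view concrete: measuring the gap in expected individual utility between groups recovers Demographic Parity when an individual's utility is defined as receiving the positive prediction, and generalizes when utility is instead defined by, e.g., cluster quality or discounted return. The helper below is a hypothetical sketch of that measurement, not the paper's framework.

```python
import numpy as np

def utility_parity_gap(individual_utility, group):
    """Gap in expected individual utility between two protected groups.

    If individual_utility is 1 for a positive prediction and 0 otherwise,
    this is exactly the Demographic Parity gap; other utility definitions
    (cluster quality, discounted return, ...) give a generalized measure.
    """
    return abs(individual_utility[group == 0].mean()
               - individual_utility[group == 1].mean())
```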
- The fairness of machine learning-based decisions has become an increasingly important focus in the design of supervised machine learning methods. Most fairness approaches optimize a specified trade-off between performance measure(s) (e.g., accuracy, log loss, or AUC) and fairness measure(s) (e.g., demographic parity, equalized odds). This raises the question: are the right performance-fairness trade-offs being specified? We instead recast fair machine learning as an imitation learning task by introducing superhuman fairness, which seeks to simultaneously outperform human decisions on multiple predictive performance and fairness measures. We demonstrate the benefits of this approach when the reference human decisions are suboptimal.
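One simple way to operationalize "outperform human decisions on multiple measures" is to count the measures on which the model dominates the reference decisions. The sketch below assumes every measure has been oriented so that larger is better (e.g., fairness gaps negated); it is an illustration, not the paper's benchmarking procedure, and the numbers are made up.

```python
def dominance_count(model_scores, reference_scores):
    """Count on how many measures the model outperforms the reference decisions.

    Both arguments map measure name -> value, with every measure oriented so
    that larger is better (e.g., accuracy, or a negated demographic-parity gap).
    """
    assert model_scores.keys() == reference_scores.keys()
    return sum(model_scores[m] > reference_scores[m] for m in model_scores)

# Illustrative example: the model beats the human reference on 2 of 3 measures.
model = {"accuracy": 0.87, "neg_dp_gap": -0.03, "neg_eo_gap": -0.09}
human = {"accuracy": 0.85, "neg_dp_gap": -0.06, "neg_eo_gap": -0.05}
print(dominance_count(model, human))  # -> 2
```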
- Auditing for fairness often requires relying on a secondary source, e.g., Census data, to inform about protected attributes. To avoid making assumptions about an overarching model that ties such information to the primary data source, a recent line of work has suggested finding the entire range of possible fairness valuations consistent with both sources. Though attractive, the current form of this methodology relies on rigid analytical expressions and lacks the ability to handle continuous decisions, e.g., metrics of urban services. We show that, in such settings, directly adapting these expressions can lead to loose and even vacuous results, particularly regarding just how fair the audited decisions may be. If used, such an audit would be perceived more optimistically than it ought to be. We propose a linear programming formulation to handle continuous decisions by finding the empirical fairness range when statistical parity is measured through the Kolmogorov-Smirnov distance. The resulting problem is linear in the number of data points and efficiently solvable. We analyze this approach and give finite-sample guarantees for the resulting fairness valuation. We then apply it to synthetic data and to 311 Chicago City Services data, and demonstrate its ability to reveal small but detectable bounds on fairness.
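The parity metric itself is straightforward to compute on fully observed data: the Kolmogorov-Smirnov distance between the empirical distributions of continuous decisions received by the two groups. The sketch below computes only that statistic (via scipy); the paper's linear program, which bounds the range of this quantity when protected attributes come from a secondary source, is not reproduced here.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_parity(decisions, group):
    """KS distance between the continuous decisions received by the two groups.

    A value of 0 means the two groups' decision distributions coincide
    (statistical parity); larger values indicate greater disparity.
    """
    return ks_2samp(decisions[group == 0], decisions[group == 1]).statistic
```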
- While conventional ranking systems focus solely on maximizing the utility of the ranked items to users, fairness-aware ranking systems additionally try to balance exposure across different protected attributes such as gender or race. To achieve this type of group fairness for ranking, we derive a new ranking system from first principles of distributional robustness. We formulate a minimax game between a player choosing a distribution over rankings to maximize utility while satisfying fairness constraints, and an adversary seeking to minimize utility while matching statistics of the training data. Rather than maximizing utility and fairness for the specific training data, this approach efficiently produces robust utility and fairness for a much broader family of ranking distributions that includes the training data. We show that our approach provides better utility for highly fair rankings than existing baseline methods.
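To ground the notion of exposure being balanced across groups, the sketch below measures an exposure gap for a single ranking using a common position-bias model in which exposure decays as 1/log2(rank + 2). Both the discount model and the gap definition are standard fair-ranking assumptions used for illustration, not necessarily the paper's formulation, and the minimax training game itself is not shown.

```python
import numpy as np

def exposure_gap(ranking_group):
    """Difference in average exposure between two groups for a single ranking.

    ranking_group[i] is the protected-group label (0 or 1) of the item placed
    at rank i (0-indexed). Exposure uses a 1 / log2(rank + 2) position bias.
    """
    ranking_group = np.asarray(ranking_group)
    exposure = 1.0 / np.log2(np.arange(len(ranking_group)) + 2)
    mean_exp = [exposure[ranking_group == g].mean() for g in (0, 1)]
    return abs(mean_exp[0] - mean_exp[1])

# Example: exposure gap for a ranking that alternates between the two groups.
print(exposure_gap([0, 1, 0, 1, 0, 1]))
```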