

This content will become publicly available on May 1, 2026

Title: Fairness Traps & Checks
Fairness traps represent opportunities to fortify fairness by identifying guiding principles, raising awareness about assumptions being made, and inserting fairness checks into the process.
Award ID(s):
2121930
PAR ID:
10594174
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
Open Science Framework
Date Published:
Page Range / eLocation ID:
DOI 10.17605/OSF.IO/AR8WG
Subject(s) / Keyword(s):
Compensation higher education faculty
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Fairness in recommender systems is a complex concept, involving multiple definitions, different parties for whom fairness is sought, and various scopes over which fairness might be measured. Researchers seeking fairness-aware systems have derived a variety of solutions, usually highly tailored to specific choices along each of these dimensions and typically aimed at tackling a single fairness concern, i.e., a single definition for a specific stakeholder group and measurement scope. However, in practical contexts there is a multiplicity of fairness concerns within a given recommendation application, and solutions limited to a single dimension are therefore less useful. We explore a general solution to recommender system fairness using social choice methods to integrate multiple heterogeneous definitions. In this paper, we extend group-fairness results from prior research to provider-side individual fairness, demonstrating on multiple datasets that both individual and group fairness objectives can be integrated and optimized jointly. We identify both synergies and tensions among the different objectives, with individual fairness correlated with group fairness for some groups and anti-correlated for others.
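As a rough illustration of how group-level and individual-level provider fairness could be scored together, the Python sketch below combines an exposure gap between two provider groups with a Gini coefficient over per-provider exposure into a single weighted objective. The exposure-based metrics, the log-rank discount, and the weighting are illustrative assumptions, not the formulation used in the paper above.

import math
from collections import defaultdict

# Illustrative assumption: position-discounted exposure per provider,
# summed over all users' recommendation lists.
def exposure(recommendation_lists, num_positions=10):
    exp = defaultdict(float)
    for recs in recommendation_lists:
        for rank, (_, provider) in enumerate(recs[:num_positions], start=1):
            exp[provider] += 1.0 / math.log2(rank + 1)
    return exp

# Group fairness proxy: absolute gap in mean exposure between provider groups.
def group_fairness_gap(exp, provider_groups):
    groups = defaultdict(list)
    for provider, e in exp.items():
        groups[provider_groups[provider]].append(e)
    means = [sum(v) / len(v) for v in groups.values()]
    return max(means) - min(means)

# Individual fairness proxy: Gini coefficient of per-provider exposure
# (0 means perfectly even exposure across providers).
def individual_unfairness(exp):
    values = sorted(exp.values())
    n, total = len(values), sum(values)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * cum) / (n * total) - (n + 1) / n

# Joint objective: weighted sum of the two unfairness terms (lower is better).
def joint_objective(recommendation_lists, provider_groups, w_group=0.5, w_ind=0.5):
    exp = exposure(recommendation_lists)
    return (w_group * group_fairness_gap(exp, provider_groups)
            + w_ind * individual_unfairness(exp))

# Toy usage: two users' lists of (item, provider) pairs and a group map.
lists = [[("i1", "p1"), ("i2", "p2"), ("i3", "p3")],
         [("i4", "p1"), ("i5", "p1"), ("i6", "p3")]]
groups = {"p1": "A", "p2": "B", "p3": "B"}
print(joint_objective(lists, groups))

Weighting the two terms makes their interaction explicit: improving the group gap can worsen the Gini term and vice versa, mirroring the synergies and tensions noted in the abstract.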
  2. Algorithmic fairness in the context of personalized recommendation presents significantly different challenges from those commonly encountered in classification tasks. Researchers studying classification have generally considered fairness to be a matter of achieving equality of outcomes (or of some other metric) between a protected and an unprotected group, and have built algorithmic interventions on this basis. We argue that fairness in real-world application settings in general, and especially in the context of personalized recommendation, is much more complex and multi-faceted, requiring a more general approach. To address the fundamental problem of fairness in the presence of multiple stakeholders with different definitions of fairness, we propose the Social Choice for Recommendation Under Fairness – Dynamic (SCRUF-D) architecture, which formalizes multistakeholder fairness in recommender systems as a two-stage social choice problem. In particular, we express recommendation fairness as a combination of an allocation problem and an aggregation problem, which together integrate fairness concerns and personalized recommendation provisions, and we derive new recommendation techniques based on this formulation. We demonstrate the ability of our framework to dynamically incorporate multiple fairness concerns using both real-world and synthetic datasets.
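A minimal sketch of the two-stage structure that SCRUF-D formalizes follows, assuming a lottery-style allocation of fairness agents and a simple weighted-sum aggregation rule; both mechanisms, and the FairnessAgent fields, are hypothetical simplifications rather than the paper's exact design.

import random

# Hypothetical fairness agent: a named fairness concern with a compatibility
# score for the current context and its own preference over candidate items.
class FairnessAgent:
    def __init__(self, name, compatibility, item_scores):
        self.name = name
        self.compatibility = compatibility
        self.item_scores = item_scores

# Stage 1 (allocation): choose which fairness agents are active for this
# recommendation, sampled proportionally to their compatibility.
def allocate(agents, k=1):
    weights = [agent.compatibility for agent in agents]
    return random.choices(agents, weights=weights, k=k)

# Stage 2 (aggregation): blend the recommender's personalized scores with the
# active agents' preferences via a weighted sum, one simple social choice rule.
def aggregate(personalized_scores, active_agents, fairness_weight=0.3):
    combined = {}
    for item, score in personalized_scores.items():
        agent_score = sum(a.item_scores.get(item, 0.0) for a in active_agents)
        if active_agents:
            agent_score /= len(active_agents)
        combined[item] = (1 - fairness_weight) * score + fairness_weight * agent_score
    return sorted(combined, key=combined.get, reverse=True)

# Toy usage: a provider-parity agent and a long-tail agent, one user query.
agents = [
    FairnessAgent("provider_parity", compatibility=0.7,
                  item_scores={"i1": 0.0, "i2": 1.0, "i3": 0.5}),
    FairnessAgent("long_tail", compatibility=0.3,
                  item_scores={"i1": 0.2, "i2": 0.1, "i3": 1.0}),
]
user_scores = {"i1": 0.9, "i2": 0.4, "i3": 0.3}
active = allocate(agents, k=1)
print([a.name for a in active], aggregate(user_scores, active))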
  3. Increasing concerns have been raised about deep learning fairness in recent years. Existing fairness-aware machine learning methods mainly focus on the fairness of in-distribution data. However, in real-world applications, it is common to have a distribution shift between the training and test data. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To solve this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which can achieve robust fairness generalizable to unseen domains with unknown distributional shifts. Specifically, CUMA enforces similar generalization ability across the majority and minority groups by matching the loss curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts without sacrificing either the overall accuracy or the in-distribution fairness.
  4. Niu, Gang (Ed.)
    Increasing concerns have been raised about deep learning fairness in recent years. Existing fairness-aware machine learning methods mainly focus on the fairness of in-distribution data. However, in real-world applications, it is common to have a distribution shift between the training and test data. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To solve this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which can achieve robust fairness generalizable to unseen domains with unknown distributional shifts. Specifically, CUMA enforces similar generalization ability across the majority and minority groups by matching the loss curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts without sacrificing either the overall accuracy or the in-distribution fairness.
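To make the curvature-matching idea in items 3 and 4 concrete, here is a minimal sketch on a tiny logistic-regression model: loss curvature for each group is approximated by a finite-difference second derivative along random weight directions, and the objective penalizes the gap between the two groups' average curvature. CUMA itself matches full curvature distributions on deep networks, so this is only an illustrative proxy, not the method as published.

import numpy as np

rng = np.random.default_rng(0)

# Mean logistic loss for labels in {-1, +1}.
def loss(w, X, y):
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

# Curvature proxy: average finite-difference second derivative of the loss
# along random unit directions in weight space.
def curvature_proxy(w, X, y, eps=1e-3, n_dirs=8):
    curvatures = []
    for _ in range(n_dirs):
        d = rng.normal(size=w.shape)
        d /= np.linalg.norm(d)
        c = (loss(w + eps * d, X, y) - 2 * loss(w, X, y)
             + loss(w - eps * d, X, y)) / eps ** 2
        curvatures.append(c)
    return float(np.mean(curvatures))

# Task loss plus a penalty on the curvature gap between the two groups,
# a simplified stand-in for matching the groups' curvature distributions.
def cuma_style_objective(w, X, y, group, lam=1.0):
    majority, minority = group == 0, group == 1
    gap = abs(curvature_proxy(w, X[majority], y[majority])
              - curvature_proxy(w, X[minority], y[minority]))
    return loss(w, X, y) + lam * gap

# Toy usage: random features, labels, and a binary group attribute.
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
group = (rng.random(200) < 0.3).astype(int)
w = rng.normal(size=5)
print(cuma_style_objective(w, X, y, group))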
  5. We present a fairness-aware model for predicting demand for new mobility systems. Our approach, called FairST, consists of 1D, 2D, and 3D convolutions that learn the spatial-temporal dynamics of a mobility system, together with fairness regularizers that guide the model to make equitable predictions. We propose two fairness metrics, the region-based fairness gap (RFG) and the individual-based fairness gap (IFG), which measure equity gaps between social groups for new mobility systems. Experimental results on two real-world datasets demonstrate the effectiveness of the proposed model: FairST not only reduces the fairness gap by more than 80% but also achieves better accuracy than state-of-the-art, fairness-oblivious methods, including LSTMs, ConvLSTMs, and 3D CNNs.
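As an illustration of what a region-based fairness gap might look like for demand prediction, the sketch below compares the mean predicted-to-observed demand ratio between regions whose population is mostly one social group and the remaining regions. The threshold split and the ratio statistic are assumptions made for this example; FairST's actual RFG and IFG definitions may differ.

import numpy as np

# Illustrative region-based fairness gap: difference in the mean
# predicted-to-observed demand ratio between regions dominated by group A
# and the remaining regions.
def region_fairness_gap(pred, actual, group_a_share, threshold=0.5):
    pred, actual, group_a_share = map(np.asarray, (pred, actual, group_a_share))
    ratio = pred / np.maximum(actual, 1e-8)   # per-region over/under-prediction
    a_regions = group_a_share >= threshold
    return abs(ratio[a_regions].mean() - ratio[~a_regions].mean())

# Toy usage: six regions with predicted trips, observed trips, and the
# share of group A residents in each region.
pred = [120, 90, 40, 60, 80, 30]
actual = [100, 100, 50, 50, 70, 40]
share_a = [0.8, 0.7, 0.2, 0.3, 0.9, 0.1]
print(region_fairness_gap(pred, actual, share_a))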