Title: Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness
We turn the definition of individual fairness on its head: rather than ascertaining the fairness of a model under a predetermined metric, we find a metric under which a given model satisfies individual fairness. This can facilitate discussion of a model's fairness, addressing the difficulty of specifying a suitable metric a priori. Our contributions are twofold. First, we introduce the definition of a minimal metric and characterize the behavior of models in terms of minimal metrics. Second, for more complicated models, we apply the mechanism of randomized smoothing from adversarial robustness to make them individually fair under a given weighted Lp metric. Our experiments show that adapting the minimal metrics of linear models to more complicated neural networks can lead to meaningful and interpretable fairness guarantees at little cost to utility.
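To make the randomized-smoothing step concrete, here is a minimal Python sketch of smoothing a classifier's output under a weighted L2 metric, assuming a scikit-learn-style model exposing predict_proba and a 1-D numpy input; the names smooth_predict, weights, and sigma are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def smooth_predict(model, x, weights, sigma=1.0, n_samples=1000, rng=None):
    """Randomized-smoothing sketch: average a model's class probabilities
    over Gaussian noise whose per-feature scale follows a weighted L2
    metric. Features with large weights (small allowed perturbation per
    unit of metric distance) receive proportionally less noise.
    """
    rng = np.random.default_rng(rng)
    # Noise is scaled inversely to the metric weights: a feature the metric
    # treats as "far" per unit change is perturbed less.
    noise = rng.normal(0.0, sigma / np.asarray(weights),
                       size=(n_samples, x.shape[0]))
    probs = model.predict_proba(x + noise)  # shape (n_samples, n_classes)
    return probs.mean(axis=0)               # smoothed class scores
```

Averaging over many noisy copies of the input makes the smoothed score change slowly as the input moves under the weighted metric, which is the source of the fairness guarantee.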
Award ID(s): 1704845
PAR ID: 10238795
Author(s) / Creator(s):
Date Published:
Journal Name: Twenty-Ninth International Joint Conference on Artificial Intelligence
Page Range / eLocation ID: 437 to 443
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Roth, A (Ed.)
    It is well understood that classification algorithms, for example, for deciding on loan applications, cannot be evaluated for fairness without taking context into account. We examine what can be learned from a fairness oracle equipped with an underlying understanding of “true” fairness. The oracle takes as input a (context, classifier) pair satisfying an arbitrary fairness definition, and accepts or rejects the pair according to whether the classifier satisfies the underlying fairness truth. Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle. Since every “truly fair” classifier induces a coarse metric, in which those receiving the same decision are at distance zero from one another and those receiving different decisions are at distance one, this extraction process provides the basis for ensuring a rough form of metric fairness, also known as individual fairness. Our principal technical result is a higher-fidelity extractor under a mild technical constraint on the weak oracle’s conception of fairness. Our framework permits the scenario in which many classifiers, with differing outcomes, may all be considered fair. Our results have implications for interpretability, a highly desired but poorly defined property of classification systems that endeavors to permit a human arbiter to reject classifiers deemed to be “unfair” or illegitimately derived.
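    The coarse metric described above is simple enough to state directly in code; this small Python sketch (the names are illustrative) shows the distance induced by any decision function:

    ```python
    def coarse_metric(classifier, x, y):
        """Coarse metric induced by a classifier: distance 0 between
        individuals receiving the same decision, 1 otherwise. Every
        "truly fair" classifier is trivially fair with respect to its
        own coarse metric, which is what lets accepted classifiers
        carry usable metric information.
        """
        return 0.0 if classifier(x) == classifier(y) else 1.0
    ```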
  2. Continual Federated Learning (CFL) is a distributed machine learning technique that enables multiple clients to collaboratively train a shared model without sharing their data, while also adapting to new classes without forgetting previously learned ones. This dynamic, adaptive learning process parallels the concept of foundation models in FL, where large, pre-trained models are fine-tuned in a decentralized, federated setting. While foundation models in FL leverage pre-trained knowledge as a starting point, CFL continuously updates the shared model as new tasks and data distributions emerge, requiring ongoing adaptation. Currently, there are few evaluation models and metrics for measuring fairness in CFL, and ensuring fairness over time can be challenging as the system evolves. To address this challenge, this article explores temporal fairness in CFL, examining how the fairness of the model can be influenced by the selection and participation of clients over time. Building on individual fairness, we introduce a novel fairness metric that captures temporal aspects of client behavior, and we evaluate different client selection strategies for their impact on promoting fairness. A stand-in sketch of such a temporal score follows.
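    The paper's own metric is not given here, so as a stand-in, the following Python sketch scores the temporal fairness of client selection using Jain's fairness index over cumulative participation counts; the function name and the choice of index are assumptions for illustration only.

    ```python
    import numpy as np

    def selection_fairness_over_time(selections, n_clients):
        """Illustrative temporal-fairness score: Jain's fairness index
        over cumulative client-participation counts, evaluated after
        every round. `selections` is a list of per-round lists of the
        client ids chosen in that round; 1.0 means perfectly even
        participation so far.
        """
        counts = np.zeros(n_clients)
        scores = []
        for chosen in selections:
            for c in chosen:
                counts[c] += 1
            sq = (counts ** 2).sum()
            scores.append(counts.sum() ** 2 / (n_clients * sq) if sq else 1.0)
        return scores  # one fairness score per round
    ```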
  3. Urban population growth has significantly complicated the management of mobility systems, demanding innovative tools for planning. Generative Crowd-Flow (GCF) models, which leverage machine learning to simulate urban movement patterns, offer a promising solution but lack sufficient evaluation of their fairness, a critical factor for equitable urban planning. We present an approach to measure and benchmark the fairness of GCF models by developing a first-of-its-kind set of fairness metrics specifically tailored for this purpose. Using observed flow data, we employ a stochastic biased-sampling approach to generate multiple permutations of Origin-Destination (OD) datasets, each demonstrating intentional bias. Our proposed framework allows for the comparison of multiple GCF models to evaluate how each introduces bias into its outputs. Preliminary results indicate a tradeoff between model accuracy and fairness, underscoring the need for careful consideration in the deployment of these technologies. To this end, this study bridges the gap between the human mobility literature and fairness in machine learning, with the potential to help urban planners and policymakers leverage GCF models for more equitable urban infrastructure development.
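    As an illustration of the stochastic biased-sampling step, this Python sketch draws a deliberately skewed subsample of an OD dataset by over-weighting rows flagged by a bias column; the column name, bias_strength, and sampling scheme are assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def biased_od_sample(od_df, bias_col, bias_strength=2.0, frac=0.8, rng=None):
        """Illustrative stochastic biased sampler: subsample OD rows so
        that rows flagged in `bias_col` (e.g., flows originating in a
        chosen group of zones) are over-represented by `bias_strength`,
        yielding one intentionally biased permutation of the data.
        """
        rng = np.random.default_rng(rng)
        w = np.where(od_df[bias_col].to_numpy(), bias_strength, 1.0)
        p = w / w.sum()
        idx = rng.choice(len(od_df), size=int(frac * len(od_df)),
                         replace=False, p=p)
        return od_df.iloc[idx]
    ```

    Repeating this with different flags or strengths yields the multiple biased permutations against which each GCF model can then be benchmarked.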
  4. There has been increasing concern, within the machine learning community and beyond, that Artificial Intelligence (AI) faces a bias and discrimination crisis, making AI fairness an urgent need. As many have begun to work on this problem, most existing work depends on the availability of class labels for the given fairness definition and algorithm, which may not align with real-world usage. In this work, we study an AI fairness problem that stems from the gap between the design of a fair model in the lab and its deployment in the real world. Specifically, we consider defining and mitigating individual unfairness amidst censorship, where the availability of class labels is not always guaranteed, a setting broadly applicable to a variety of socially sensitive real-world applications. We show that our method is able to quantify and mitigate individual unfairness in the presence of censorship across three benchmark tasks, providing the first known results on individual fairness guarantees in the analysis of censored data.
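    One way to see how individual unfairness can be quantified even when labels are censored is the following Python sketch, which measures the worst output gap over metric-similar pairs and needs no class labels at all; it is an illustrative O(n^2) baseline under assumed names, not the paper's method.

    ```python
    def individual_unfairness(predict, X, dist, eps=0.1):
        """Illustrative label-free individual-unfairness score: the
        largest gap in model outputs over pairs of individuals that the
        metric `dist` deems similar (within eps). Because it uses only
        model outputs, it remains computable when outcomes are censored.
        """
        worst = 0.0
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                if dist(X[i], X[j]) <= eps:
                    worst = max(worst, abs(predict(X[i]) - predict(X[j])))
        return worst
    ```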
  5. As recommender systems have become more widespread and moved into areas with greater social impact, such as employment and housing, researchers have begun to seek ways to ensure fairness in the results that such systems produce. This work has primarily focused on developing recommendation approaches in which fairness metrics are jointly optimized along with recommendation accuracy. However, previous work has largely ignored how individual preferences may limit the ability of an algorithm to produce fair recommendations. Furthermore, with few exceptions, researchers have only considered scenarios in which fairness is measured relative to a single sensitive feature or attribute (such as race or gender). In this paper, we present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions and uses them to enhance provider fairness in recommendation results. Specifically, we show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches, and does so across multiple fairness dimensions.
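    A greedy form of fairness-aware re-ranking along the lines described above might look like the Python sketch below; the additive bonus, the lam weight, and the protected-group mapping are illustrative assumptions rather than the authors' algorithm.

    ```python
    def rerank(candidates, scores, provider, protected, lam=0.3, k=10):
        """Illustrative greedy fairness-aware re-ranking: at each slot,
        pick the item maximizing relevance plus a bonus for providers
        whose sensitive groups are not yet represented in the list.
        `provider` maps item -> provider id; `protected` maps provider
        -> iterable of sensitive-group memberships (possibly several,
        covering multiple fairness dimensions).
        """
        chosen, seen_groups = [], set()
        pool = list(candidates)
        while pool and len(chosen) < k:
            def gain(item):
                new_groups = set(protected.get(provider[item], [])) - seen_groups
                return scores[item] + lam * len(new_groups)
            best = max(pool, key=gain)
            pool.remove(best)
            chosen.append(best)
            seen_groups |= set(protected.get(provider[best], []))
        return chosen
    ```

    Because the bonus counts newly covered groups rather than any single attribute, the same loop accommodates multiple fairness dimensions at once.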