

This content will become publicly available on September 7, 2026

Title: Integrating Individual and Group Fairness for Recommender Systems through Social Choice
Fairness in recommender systems is a complex concept, involving multiple definitions, different parties for whom fairness is sought, and various scopes over which fairness might be measured. Researchers seeking fairness-aware systems have derived a variety of solutions, usually highly tailored to specific choices along each of these dimensions, and typically aimed at tackling a single fairness concern, i.e., a single definition for a specific stakeholder group and measurement scope. However, in practical contexts, there is a multiplicity of fairness concerns within a given recommendation application, and solutions limited to a single dimension are therefore less useful. We explore a general solution to recommender system fairness using social choice methods to integrate multiple heterogeneous definitions. In this paper, we extend group-fairness results from prior research to provider-side individual fairness, demonstrating in multiple datasets that both individual and group fairness objectives can be integrated and optimized jointly. We identify both synergies and tensions among different objectives, with individual fairness correlated with group fairness for some groups and anti-correlated with it for others.
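The abstract's distinction between provider-side group fairness and individual fairness can be made concrete with a small sketch. The two metrics below (a group-exposure gap and a Gini coefficient over provider exposure) are illustrative choices for this sketch, not the measures used in the paper:

```python
def exposure(rec_lists, n_providers):
    """Share of all recommendation slots received by each provider."""
    counts = [0.0] * n_providers
    for rec_list in rec_lists:
        for provider in rec_list:
            counts[provider] += 1
    total = sum(counts)
    return [c / total for c in counts]

def group_fairness_gap(expo, protected):
    """Group metric: protected providers' exposure share minus their
    population share (0 = proportional, negative = under-exposed)."""
    return sum(expo[p] for p in protected) - len(protected) / len(expo)

def exposure_gini(expo):
    """Individual metric: Gini coefficient of provider exposure
    (0 = perfectly equal exposure across all providers)."""
    sorted_e = sorted(expo)
    n = len(sorted_e)
    weighted = sum((i + 1) * e for i, e in enumerate(sorted_e))
    return 2 * weighted / (n * sum(sorted_e)) - (n + 1) / n

# Toy example: 6 providers; providers 0-2 form a protected group that the
# recommendation lists under-expose.
expo = exposure([[0, 3, 4], [1, 3, 5], [2, 4, 5], [3, 4, 5]], n_providers=6)
gap = group_fairness_gap(expo, protected={0, 1, 2})
gini = exposure_gini(expo)
print(gap, gini)  # group gap is negative and exposure is unequal
```

A joint optimizer in the spirit of the paper would push the gap toward zero for each group while also shrinking the Gini coefficient; the correlations and tensions the abstract mentions arise because an action that equalizes exposure across individuals can raise or lower a particular group's share.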
Award ID(s):
2107505
PAR ID:
10649924
Publisher / Repository:
ACM Conference on Recommender Systems (RecSys 2025)
Page Range / eLocation ID:
177 to 186
Location:
Prague
Sponsoring Org:
National Science Foundation
More Like this
  1. Algorithmic fairness in recommender systems requires close attention to the needs of a diverse set of stakeholders that may have competing interests. Previous work in this area has often been limited by fixed, single-objective definitions of fairness, built into algorithms or optimization criteria that are applied to a single fairness dimension or, at most, applied identically across dimensions. These narrow conceptualizations limit the ability to adapt fairness-aware solutions to the wide range of stakeholder needs and fairness definitions that arise in practice. Our work approaches recommendation fairness from the standpoint of computational social choice, using a multi-agent framework. In this paper, we explore the properties of different social choice mechanisms and demonstrate the successful integration of multiple, heterogeneous fairness definitions across multiple data sets. 
  2. Algorithmic fairness in the context of personalized recommendation presents significantly different challenges to those commonly encountered in classification tasks. Researchers studying classification have generally considered fairness to be a matter of achieving equality of outcomes (or some other metric) between a protected and unprotected group, and built algorithmic interventions on this basis. We argue that fairness in real-world application settings in general, and especially in the context of personalized recommendation, is much more complex and multi-faceted, requiring a more general approach. To address the fundamental problem of fairness in the presence of multiple stakeholders, with different definitions of fairness, we propose the Social Choice for Recommendation Under Fairness – Dynamic (SCRUF-D) architecture, which formalizes multistakeholder fairness in recommender systems as a two-stage social choice problem. In particular, we express recommendation fairness as a combination of an allocation and an aggregation problem, which integrate both fairness concerns and personalized recommendation provisions, and derive new recommendation techniques based on this formulation. We demonstrate the ability of our framework to dynamically incorporate multiple fairness concerns using both real-world and synthetic datasets. 
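The two-stage allocation/aggregation formulation described above can be sketched in a few lines. Everything here — the `FairnessAgent` class, the deficit-based allocation rule, and the fixed aggregation weight — is a hypothetical simplification for illustration, not the actual SCRUF-D implementation:

```python
# Hypothetical fairness "agents": each tracks how far its concern is from a
# target share of delivered recommendations, and scores items by whether
# they serve that concern.
class FairnessAgent:
    def __init__(self, name, supported_items, target_share):
        self.name = name
        self.supported_items = set(supported_items)
        self.target_share = target_share
        self.delivered = 0.0
        self.total = 0.0

    def deficit(self):
        """Allocation signal: how far below its target this concern sits."""
        share = self.delivered / self.total if self.total else 0.0
        return max(0.0, self.target_share - share)

    def score(self, item):
        return 1.0 if item in self.supported_items else 0.0

def scruf_style_rank(personal_scores, agents, top_k=3, agent_weight=0.5):
    """Two-stage sketch: (1) allocate this recommendation opportunity to the
    agent with the largest deficit; (2) aggregate that agent's scores with
    the personalized scores by a weighted sum."""
    active = max(agents, key=lambda a: a.deficit())             # allocation
    combined = {item: (1 - agent_weight) * s + agent_weight * active.score(item)
                for item, s in personal_scores.items()}         # aggregation
    ranked = sorted(combined, key=combined.get, reverse=True)[:top_k]
    for a in agents:                                            # bookkeeping
        a.total += top_k
        a.delivered += sum(a.score(i) for i in ranked)
    return ranked

agents = [FairnessAgent("provider-coverage", {"x1", "x2"}, target_share=0.4),
          FairnessAgent("geo-diversity", {"x3"}, target_share=0.2)]
scores = {"x1": 0.2, "x2": 0.3, "x3": 0.9, "x4": 0.8, "x5": 0.7}
result = scruf_style_rank(scores, agents)
print(result)  # the under-served provider-coverage agent lifts x1 and x2
```

The allocation stage here is a simple argmax over deficits; the paper's framework also admits lottery-style allocations and richer aggregation rules drawn from social choice.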
  3. Recommender systems have a variety of stakeholders. Applying concepts of fairness in such systems requires attention to stakeholders’ complex and often-conflicting needs. Since fairness is socially constructed, there are numerous definitions, both in the social science and machine learning literatures. Still, it is rare for machine learning researchers to develop their metrics in close consideration of their social context. More often, standard definitions are adopted and assumed to be applicable across contexts and stakeholders. Our research starts with a recommendation context and then seeks to understand the breadth of the fairness considerations of associated stakeholders. In this paper, we report on the results of a semi-structured interview study with 23 employees who work for the Kiva microlending platform. We characterize the many different ways in which they enact and strive toward fairness for microlending recommendations in their own work, uncover the ways in which these different enactments of fairness are in tension with each other, and identify how stakeholders are differentially prioritized. Finally, we reflect on the implications of this study for future research and for the design of multistakeholder recommender systems. 
  4.
    Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Although previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, that work has not focused on recommendation. Here, we consider user perspectives of fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features – informed by the needs of our participants – that could improve user understanding of and trust in fairness-aware recommender systems. 
  5. Blum, A. (Ed.)
    Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law. Much of the literature considers the case of a single classifier (or scoring function) used once, in isolation. In this work, we initiate the study of the fairness properties of systems composed of algorithms that are fair in isolation; that is, we study fairness under composition. We identify pitfalls of naïve composition and give general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems. We focus primarily on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], but also extend our results to a large class of group fairness definitions popular in the recent literature, exhibiting several cases in which group fairness definitions give misleading signals under composition. 
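The composition pitfall described in the abstract above can be reproduced with a toy example: two screening stages that each satisfy demographic parity in isolation, whose conjunction does not. The candidate pools and acceptance sets below are fabricated purely for illustration:

```python
# Two groups of four candidates each; a two-stage screen keeps only candidates
# accepted by BOTH stages (the composition setting, in miniature).
group_a = ["a1", "a2", "a3", "a4"]
group_b = ["b1", "b2", "b3", "b4"]

accept_1 = {"a1", "a2", "b1", "b2"}   # stage 1 accepts 2/4 from each group
accept_2 = {"a1", "a2", "b3", "b4"}   # stage 2 also accepts 2/4 from each group

def rate(accepted, group):
    """Acceptance rate of a group under a given acceptance set."""
    return sum(1 for x in group if x in accepted) / len(group)

# Each stage satisfies demographic parity in isolation...
assert rate(accept_1, group_a) == rate(accept_1, group_b) == 0.5
assert rate(accept_2, group_a) == rate(accept_2, group_b) == 0.5

# ...but their composition does not: group A survives both stages at rate 0.5,
# group B at rate 0.0, because stage 2 accepts different B-candidates.
composed = accept_1 & accept_2
print(rate(composed, group_a), rate(composed, group_b))  # 0.5 0.0
```

This is the "misleading signal" the abstract warns about: auditing each component in isolation certifies parity, while the deployed pipeline is maximally disparate.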