Fairness metrics have become a useful tool to measure how fair or unfair a machine learning system may be for its stakeholders. In the context of recommender systems, previous research has explored how various stakeholders experience algorithmic fairness or unfairness, but it is also important to capture these experiences in the design of fairness metrics. Therefore, we conducted four focus groups with providers (those whose items, content, or profiles are being recommended) from two different domains: content creators and dating app users. We explored how our participants experience unfairness on their associated platforms and worked with them to co-design fairness goals, definitions, and metrics that might capture these experiences. This work represents an important step towards designing fairness metrics with the stakeholders who will be impacted by their operationalizations. We analyze the efficacy and challenges of enacting these metrics in practice and explore how future work might benefit from this methodology.
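To make concrete what a co-designed, provider-side fairness metric might look like, the following is a hypothetical sketch of an exposure-parity measure. The function, group labels, and toy data are illustrative assumptions and are not drawn from the study itself.

```python
# Hypothetical provider-side fairness metric: exposure parity.
# Not taken from the paper; a minimal sketch of the kind of metric
# that provider-focused co-design might produce.
from collections import Counter

def exposure_parity_gap(recommendations, provider_group, catalog):
    """Largest deviation between a provider group's share of
    recommendation slots and its share of the catalog.

    recommendations: list of item ids shown across all users
    provider_group: dict mapping item id -> group label
    catalog: list of all item ids on the platform
    """
    rec_counts = Counter(provider_group[i] for i in recommendations)
    cat_counts = Counter(provider_group[i] for i in catalog)
    gap = 0.0
    for group, cat_n in cat_counts.items():
        rec_share = rec_counts.get(group, 0) / len(recommendations)
        cat_share = cat_n / len(catalog)
        gap = max(gap, abs(rec_share - cat_share))
    return gap  # 0.0 means exposure matches catalog composition

# Toy usage with hypothetical groups: "new" providers hold 2/3 of the
# catalog but receive only 1/3 of the exposure, so the gap is ~0.33.
groups = {"x": "new", "y": "new", "z": "established"}
print(exposure_parity_gap(["z", "z", "y"], groups, ["x", "y", "z"]))
```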
The Many Faces of Fairness: Exploring the Institutional Logics of Multistakeholder Microlending Recommendation
Recommender systems have a variety of stakeholders. Applying concepts of fairness in such systems requires attention to stakeholders’ complex and often-conflicting needs. Since fairness is socially constructed, there are numerous definitions, both in the social science and machine learning literatures. Still, it is rare for machine learning researchers to develop their metrics in close consideration of their social context. More often, standard definitions are adopted and assumed to be applicable across contexts and stakeholders. Our research starts with a recommendation context and then seeks to understand the breadth of the fairness considerations of associated stakeholders. In this paper, we report on the results of a semi-structured interview study with 23 employees who work for the Kiva microlending platform. We characterize the many different ways in which they enact and strive toward fairness for microlending recommendations in their own work, uncover the ways in which these different enactments of fairness are in tension with each other, and identify how stakeholders are differentially prioritized. Finally, we reflect on the implications of this study for future research and for the design of multistakeholder recommender systems.
- PAR ID: 10434420
- Date Published:
- Journal Name: FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
- Page Range / eLocation ID: 1652 to 1663
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Algorithmic fairness in recommender systems requires close attention to the needs of a diverse set of stakeholders that may have competing interests. Previous work in this area has often been limited by fixed, single-objective definitions of fairness, built into algorithms or optimization criteria that are applied to a single fairness dimension or, at most, applied identically across dimensions. These narrow conceptualizations limit the ability to adapt fairness-aware solutions to the wide range of stakeholder needs and fairness definitions that arise in practice. Our work approaches recommendation fairness from the standpoint of computational social choice, using a multi-agent framework. In this paper, we explore the properties of different social choice mechanisms and demonstrate the successful integration of multiple, heterogeneous fairness definitions across multiple data sets. (A toy social-choice aggregation sketch appears after this list.)
-
Recent work in fair machine learning has proposed dozens of technical definitions of algorithmic fairness and methods for enforcing these definitions. However, we still lack an understanding of how to develop machine learning systems with fairness criteria that reflect relevant stakeholders’ nuanced viewpoints in real-world contexts. To address this gap, we propose a framework for eliciting stakeholders’ subjective fairness notions. Combining a user interface that allows stakeholders to examine the data and the algorithm’s predictions with an interview protocol to probe stakeholders’ thoughts while they are interacting with the interface, we can identify stakeholders’ fairness beliefs and principles. We conduct a user study to evaluate our framework in the setting of a child maltreatment predictive system. Our evaluations show that the framework allows stakeholders to comprehensively convey their fairness viewpoints. We also discuss how our results can inform the design of predictive systems.
-
Recommender systems are poised at the interface between stakeholders: for example, job applicants and employers in the case of recommendations of employment listings, or artists and listeners in the case of music recommendation. In such multisided platforms, recommender systems play a key role in enabling discovery of products and information at large scales. However, as they have become more and more pervasive in society, the equitable distribution of their benefits and harms has increasingly come under scrutiny, as is the case with machine learning generally. While recommender systems can exhibit many of the biases encountered in other machine learning settings, the intersection of personalization and multisidedness means that fairness questions manifest quite differently in recommender systems. In this article, we discuss recent work in the area of multisided fairness in recommendation, starting with a brief introduction to core ideas in algorithmic fairness and multistakeholder recommendation. We describe techniques for measuring fairness and algorithmic approaches for enhancing fairness in recommendation outputs. We also discuss feedback and popularity effects that can lead to unfair recommendation outcomes. Finally, we introduce several promising directions for future research in this area. (A small exposure-concentration example appears after this list.)
-
Algorithmic fairness in the context of personalized recommendation presents significantly different challenges from those commonly encountered in classification tasks. Researchers studying classification have generally considered fairness to be a matter of achieving equality of outcomes (or equality under some other metric) between a protected and an unprotected group, and have built algorithmic interventions on this basis. We argue that fairness in real-world application settings in general, and especially in the context of personalized recommendation, is much more complex and multi-faceted, requiring a more general approach. To address the fundamental problem of fairness in the presence of multiple stakeholders with different definitions of fairness, we propose the Social Choice for Recommendation Under Fairness – Dynamic (SCRUF-D) architecture, which formalizes multistakeholder fairness in recommender systems as a two-stage social choice problem. In particular, we express recommendation fairness as a combination of an allocation problem and an aggregation problem, which together integrate both fairness concerns and personalized recommendation provisions, and we derive new recommendation techniques based on this formulation. We demonstrate the ability of our framework to dynamically incorporate multiple fairness concerns using both real-world and synthetic datasets. (A minimal allocation/aggregation sketch appears after this list.)
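As a companion to the social-choice abstract above, here is a toy illustration of how several fairness "agents" could be combined with a positional voting rule. The agents, rankings, and the choice of Borda count are assumptions made for illustration, not mechanisms taken from that paper.

```python
# Illustrative sketch: aggregating the rankings of several fairness
# agents with a Borda count. The agents and items are hypothetical.

def borda_aggregate(rankings):
    """Combine several agents' rankings of the same items via Borda count.

    rankings: list of lists, each an ordering of the same item ids
    returns: items sorted by total Borda score, best first
    """
    n = len(rankings[0])
    scores = {item: 0 for item in rankings[0]}
    for ranking in rankings:
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position  # top rank earns n-1 points
    return sorted(scores, key=scores.get, reverse=True)

# Example: a relevance agent and two fairness agents rank four items.
relevance = ["a", "b", "c", "d"]
group_fairness = ["c", "d", "a", "b"]
popularity_fairness = ["d", "c", "b", "a"]
print(borda_aggregate([relevance, group_fairness, popularity_fairness]))
# -> ['c', 'd', 'a', 'b']: the fairness agents outvote pure relevance.
```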
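The survey abstract above mentions popularity effects that can skew recommendation outcomes. One common, if simplified, way to quantify such concentration is the Gini coefficient over per-item exposure; this sketch is illustrative rather than a metric the article prescribes.

```python
# Illustrative exposure-concentration measure: Gini coefficient over
# per-item exposure counts. Assumed for illustration, not from the survey.

def exposure_gini(exposure_counts):
    """Gini coefficient over per-item exposure counts.

    0.0 = exposure spread evenly across items;
    values near 1.0 = exposure concentrated on a few popular items.
    """
    xs = sorted(exposure_counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard Gini formula on ascending-sorted data.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(exposure_gini([5, 5, 5, 5]))  # 0.0: perfectly even exposure
print(exposure_gini([0, 0, 0, 20]))  # 0.75: one item gets everything
```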
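Finally, a minimal sketch of the two-stage allocation-and-aggregation idea summarized in the SCRUF-D abstract. The agent structure, thresholds, and the simple weighted-sum blend are illustrative assumptions, not the paper's exact formalization.

```python
# Minimal sketch of a two-stage allocation/aggregation pipeline in the
# spirit of SCRUF-D. All agents, thresholds, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class FairnessAgent:
    name: str
    scores: dict          # item id -> fairness score in [0, 1]
    unfairness: float     # how far this concern currently is from its target
    threshold: float = 0.1

def allocate(agents):
    """Allocation stage: activate agents whose concern is worst-off."""
    return [a for a in agents if a.unfairness > a.threshold]

def aggregate(base_scores, active, weight=0.3):
    """Aggregation stage: weighted-sum blend of base and fairness scores."""
    def blended(item):
        fair = sum(a.scores.get(item, 0.0) for a in active) / max(len(active), 1)
        return (1 - weight) * base_scores[item] + weight * fair
    return sorted(base_scores, key=blended, reverse=True)

# Toy usage: the provider agent is far from its target, so it is
# allocated and lifts item "c" above "b" in the final ranking.
base = {"a": 0.9, "b": 0.7, "c": 0.4}
agents = [FairnessAgent("provider", {"c": 1.0}, unfairness=0.5),
          FairnessAgent("popularity", {"b": 1.0}, unfairness=0.02)]
print(aggregate(base, allocate(agents)))  # -> ['a', 'c', 'b']
```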