Title: Experimentation with fairness-aware recommendation using librec-auto: hands-on tutorial
The field of machine learning fairness has developed metrics, methodologies, and data sets for experimenting with classification algorithms. However, equivalent research is lacking in the area of personalized recommender systems. This 180-minute hands-on tutorial will introduce participants to concepts in fairness-aware recommendation and to metrics and methodologies for evaluating recommendation fairness. Participants will also gain hands-on experience conducting fairness-aware recommendation experiments with the LibRec recommendation system using the librec-auto scripting platform, and will learn the steps required to configure their own experiments, incorporate their own data sets, and design their own algorithms and metrics.
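As a hedged illustration of the kind of metric a tutorial participant might prototype before wiring it into an experiment, the sketch below computes the share of recommendation slots given to a protected item group. The list format and group labels are assumptions; this is not librec-auto's actual metric API.

from collections import Counter

def exposure_ratio(recommendations, protected_items):
    """Share of recommended slots given to protected-group items.

    recommendations: dict mapping user id -> ranked list of item ids
    protected_items: set of item ids belonging to the protected group
    """
    counts = Counter()
    for items in recommendations.values():
        for item in items:
            counts["protected" if item in protected_items else "other"] += 1
    total = sum(counts.values())
    return counts["protected"] / total if total else 0.0

# Example: two users; items 3 and 4 belong to the protected group.
recs = {1: [3, 7, 9], 2: [4, 7, 3]}
print(exposure_ratio(recs, protected_items={3, 4}))  # 0.5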
Comparative experimentation is important for studying reproducibility in recommender systems. This is particularly true in areas without well-established methodologies, such as fairness-aware recommendation. In this paper, we describe fairness-aware enhancements to our recommender systems experimentation tool librec-auto. These enhancements include metrics for various classes of fairness definitions, extension of the experimental model to support result re-ranking and a library of associated re-ranking algorithms, and additional support for experiment automation and reporting. The associated demo will help attendees move quickly to configuring and running their own experiments with librec-auto.
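To make the re-ranking idea concrete, here is a minimal sketch of a generic greedy score-versus-fairness trade-off; it is not one of the re-ranking algorithms shipped with librec-auto, and the target protected share and the lambda weight are illustrative assumptions.

def fair_rerank(candidates, scores, protected, k=10, lam=0.3):
    """Greedily pick k items, mixing predicted score with a bonus for
    protected-group items while that group is under-represented so far."""
    selected, n_protected = [], 0
    remaining = list(candidates)
    while remaining and len(selected) < k:
        target = 0.5                      # desired protected share (assumed)
        current = n_protected / len(selected) if selected else 0.0
        deficit = max(0.0, target - current)
        def utility(item):
            bonus = deficit if item in protected else 0.0
            return (1 - lam) * scores[item] + lam * bonus
        best = max(remaining, key=utility)
        remaining.remove(best)
        selected.append(best)
        n_protected += best in protected
    return selected

scores = {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.6}
print(fair_rerank(["a", "b", "c", "d"], scores, protected={"c", "d"}, k=3))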
There is currently a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research, which aims to mitigate discriminatory bias in AI algorithms, e.g., along lines of gender, age, and race. While most research in this domain focuses on developing fair AI algorithms, in this work we examine the challenges that arise when humans and fair AI interact. Our results show that, due to an apparent conflict between human preferences and fairness, a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world. Using college major recommendation as a case study, we build a fair AI recommender by employing gender-debiasing machine learning techniques. Our offline evaluation showed that the debiased recommender makes fairer career recommendations without sacrificing prediction accuracy. Nevertheless, an online user study of more than 200 college students revealed that participants on average preferred the original biased system over the debiased one. Specifically, we found that perceived gender disparity is a determining factor in the acceptance of a recommendation; in other words, we cannot fully address gender bias in AI recommendations without addressing gender bias in humans. We conducted a follow-up survey to gain additional insight into the effectiveness of various design options that can help participants overcome their own biases. Our results suggest that making fair AI explainable is crucial for increasing its adoption in the real world.
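A hedged sketch of the kind of disparity check that could be used to compare an original and a debiased recommender is shown below; the data layout, field names, and example values are assumptions, not the study's actual pipeline.

from collections import Counter

def major_distribution(recommendations, users_by_gender):
    """Per-gender distribution over recommended majors."""
    dist = {}
    for gender, users in users_by_gender.items():
        counts = Counter(m for u in users for m in recommendations.get(u, []))
        total = sum(counts.values())
        dist[gender] = {m: c / total for m, c in counts.items()} if total else {}
    return dist

recs = {"u1": ["CS", "Math"], "u2": ["Nursing"], "u3": ["CS"]}
groups = {"women": ["u1", "u2"], "men": ["u3"]}
print(major_distribution(recs, groups))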
Sonboli, Nasim; Eskandanian, Farzad; Burke, Robin; Liu, Weiwen; Mobasher, Bamshad. ACM Conference on User Modeling, Adaptation and Personalization (UMAP).
As recommender systems have become more widespread and moved into areas with greater social impact, such as employment and housing, researchers have begun to seek ways to ensure fairness in the results that such systems produce. This work has primarily focused on developing recommendation approaches in which fairness metrics are jointly optimized along with recommendation accuracy. However, previous work has largely ignored how individual preferences may limit the ability of an algorithm to produce fair recommendations. Furthermore, with few exceptions, researchers have only considered scenarios in which fairness is measured relative to a single sensitive feature or attribute (such as race or gender). In this paper, we present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions and uses them to enhance provider fairness in recommendation results. Specifically, we show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches, and does so across multiple fairness dimensions.
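The sketch below illustrates, under assumed data structures, the general idea of deriving per-user, per-dimension fairness weights from interaction profiles; it is only an illustration of the concept, not the paper's algorithm.

def user_tolerance(profile, protected_by_dim):
    """Fraction of a user's past interactions that fall in each protected
    group; users who already engage with a group get a larger weight for it."""
    weights = {}
    for dim, protected in protected_by_dim.items():
        hits = sum(1 for item in profile if item in protected)
        weights[dim] = hits / len(profile) if profile else 0.0
    return weights

profile = ["i1", "i2", "i3", "i4"]
dims = {"provider_size": {"i2", "i3"}, "provider_region": {"i4"}}
print(user_tolerance(profile, dims))  # {'provider_size': 0.5, 'provider_region': 0.25}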
Ma, Jing; Guo, Ruocheng; Zhang, Aidong; Li, Jundong. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
Fairness-aware machine learning has attracted a surge of attention in many domains, such as online advertising, personalized recommendation, and social media analysis in web applications. Fairness-aware machine learning aims to eliminate the biases of learning models against certain subgroups described by protected (sensitive) attributes such as race, gender, and age. Among the many existing fairness notions, counterfactual fairness is a popular notion defined from a causal perspective. It measures the fairness of a predictor by comparing the prediction for each individual in the original world with the prediction in counterfactual worlds in which the value of the sensitive attribute is modified. A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model for the data. In real-world scenarios, however, the underlying causal model is often unknown, and acquiring such knowledge can be very difficult. In these scenarios, it is risky to directly trust causal models obtained from information sources of unknown reliability, or even from causal discovery methods, as incorrect causal models can introduce biases into the predictor and lead to unfair predictions. In this work, we address the problem of counterfactually fair prediction from observational data without a given causal model by proposing a novel framework, CLAIRE. Specifically, under certain general assumptions, CLAIRE effectively mitigates biases from the sensitive attribute with a representation learning framework based on counterfactual data augmentation and an invariant penalty. Experiments conducted on both synthetic and real-world datasets validate the superiority of CLAIRE in both counterfactual fairness and prediction performance.
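As a rough illustration of the representation-invariance idea (not the CLAIRE implementation), the sketch below adds a penalty that forces the encoding of an example and of its counterfactually augmented copy to agree; the encoder, the attribute-flipping augmentation, and the loss weight are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
predictor = nn.Linear(4, 1)

def loss_fn(x, x_cf, y, lam=1.0):
    """Prediction loss plus a penalty forcing the representations of x and
    of its counterfactual copy x_cf to agree."""
    z, z_cf = encoder(x), encoder(x_cf)
    pred_loss = F.mse_loss(predictor(z).squeeze(-1), y)
    invariance = F.mse_loss(z, z_cf)
    return pred_loss + lam * invariance

x = torch.randn(32, 8)               # original features
x_cf = x.clone(); x_cf[:, 0] *= -1   # toy "flip the sensitive attribute" augmentation
y = torch.randn(32)
print(loss_fn(x, x_cf, y))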
Real-time event detection and targeted decision making for emerging mission-critical applications require systems that extract and process relevant data from IoT sources in smart spaces. Oftentimes, this data is heterogeneous in size, relevance, and urgency, which creates a challenge when considering that different groups of stakeholders (e.g., first responders, medical staff, government officials, etc.) require such data to be delivered in a reliable and timely manner. Furthermore, in mission-critical settings, networks can become constrained due to lossy channels and failed components, which ultimately add to the complexity of the problem. In this article, we propose PrioDeX, a cross-layer middleware system that enables timely and reliable delivery of mission-critical data from IoT sources to relevant consumers through the prioritization of messages. It integrates parameters at the application, network, and middleware layers into a data exchange service that accurately estimates end-to-end performance metrics through a queueing analytical model. PrioDeX proposes novel algorithms that utilize the results of this analysis to tune data exchange configurations (event priorities and dropping policies), which is necessary for satisfying situational awareness requirements and resource constraints. PrioDeX leverages Software-Defined Networking (SDN) methodologies to enforce these configurations in the IoT network infrastructure. We evaluate our approach using both simulated and prototype-based experiments in a smart building fire response scenario. Our application-aware prioritization algorithm improves the value of exchanged information by 36% when compared with no prioritization; the addition of our network-aware drop rate policies improves this performance by 42% over priorities only and by 94% over no prioritization.
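The sketch below gives a hedged, simplified illustration of priority-based dropping in a bounded message queue; the priorities, capacity, and eviction rule are illustrative assumptions, not PrioDeX's actual configuration or API.

import heapq

class PriorityExchange:
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []            # min-heap keyed by priority (lower = less urgent)

    def publish(self, priority, message):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (priority, message))
            return True
        lowest = self.heap[0]
        if priority > lowest[0]:              # new message is more urgent
            heapq.heapreplace(self.heap, (priority, message))
        return False                          # something was dropped either way

    def drain(self):
        """Deliver queued messages most-urgent first."""
        return [m for _, m in sorted(self.heap, reverse=True)]

ex = PriorityExchange(capacity=2)
ex.publish(1, "temperature reading")
ex.publish(5, "smoke alarm")
ex.publish(3, "occupancy update")   # evicts the low-priority temperature reading
print(ex.drain())                   # ['smoke alarm', 'occupancy update']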
Burke, Robin; Mansoury, Masoud; Sonboli, Nasim. "Experimentation with fairness-aware recommendation using librec-auto: hands-on tutorial." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3351095.3375670. https://par.nsf.gov/biblio/10179930.