Algorithmic decision-making systems are increasingly used throughout the public and private sectors to make, or assist humans in making, decisions with real social consequences. While there has been substantial research in recent years on building fair decision-making algorithms, there has been less research seeking to understand the factors that shape people's perceptions of fairness in these systems, which we argue is also important for their broader acceptance. In this research, we conduct an online experiment to better understand perceptions of fairness, focusing on three sets of factors: algorithm outcomes, algorithm development and deployment procedures, and individual differences. We find that people rate an algorithm as more fair when it predicts in their favor, an effect that outweighs even the negative impact of describing algorithms as heavily biased against particular demographic groups. This effect is moderated by several variables, including participants' education level, gender, and several aspects of the development procedure. Our findings suggest that systems that evaluate algorithmic fairness through users' feedback must account for a possible "outcome favorability" bias.
Learning Social Fairness Preferences from Non-Expert Stakeholder Opinions in Kidney Placement
Modern kidney placement incorporates several intelligent recommendation systems which exhibit social discrimination due to biases inherited from training data. Although initial attempts have been made in the literature to study algorithmic fairness in kidney placement, these methods replace true outcomes with surgeons' decisions because of the long delays involved in recording such outcomes reliably. However, substituting surgeons' decisions for true outcomes disregards expert stakeholders' biases as well as the social opinions of stakeholders who lack medical expertise. This paper addresses the latter concern and designs a novel fairness feedback survey to evaluate an acceptance rate predictor (ARP) that predicts a kidney's acceptance rate for a given kidney-match pair. The survey is launched on Prolific, a crowdsourcing platform, and public opinions are collected from 85 anonymous crowd participants. A novel social fairness preference learning algorithm is proposed based on minimizing social feedback regret computed using a novel logit-based fairness feedback model. The proposed model and learning algorithm are both validated using simulation experiments as well as the Prolific data. Public preferences toward group fairness notions in the context of kidney placement are estimated and discussed in detail. The specific ARP tested in the Prolific survey was deemed fair by the participants.
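The abstract does not spell out the feedback model's exact form, but one plausible reading of a "logit-based fairness feedback model" can be sketched as follows. This is a minimal illustration under assumed details: the fairness-gap features, the logistic parameterization, and the use of negative log-likelihood as a stand-in for the paper's social-feedback-regret objective are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a logit-based fairness feedback model; features,
# parameterization, and objective are assumptions, not the authors' code.
import numpy as np

def fair_vote_prob(gaps, w, b):
    """P(participant labels the predictor 'fair') under a logistic model.
    gaps: per-notion unfairness scores (e.g., demographic-parity gap,
    equalized-odds gap); larger gaps should lower this probability."""
    return 1.0 / (1.0 + np.exp(-(b - gaps @ w)))

def feedback_regret(params, X, y):
    """Negative log-likelihood of observed fair/unfair votes, used here as
    a stand-in for the paper's social-feedback-regret objective."""
    w, b = params[:-1], params[-1]
    p = np.clip(fair_vote_prob(X, w, b), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 0.3, size=(85, 2))         # simulated fairness gaps
y = (rng.uniform(size=85) < 0.7).astype(float)  # simulated fair/unfair votes

# Crude random-search fit, kept short for the sketch; any optimizer would do.
candidates = rng.normal(size=(2000, 3))
best = min(candidates, key=lambda p: feedback_regret(p, X, y))
print("learned preference weights:", best[:2], "bias:", best[2])
```

In this reading, the learned weights indicate how strongly each fairness notion drives participants' "fair" votes, which is one way to operationalize "learning social fairness preferences" from survey feedback.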
- Award ID(s): 2222801
- PAR ID: 10539630
- Publisher / Repository: Proceedings of Machine Learning Research
- Date Published:
- Volume: 248
- ISSN: 2640-3498
- Page Range / eLocation ID: 683-695
- Format(s): Medium: X
- Location: New York City, NY
- Sponsoring Org: National Science Foundation
More Like this
Purpose: AI models for kidney transplant acceptance must be rigorously evaluated for bias to ensure equitable healthcare access. This study investigates demographic and clinical biases in the Final Acceptance Model (FAM), a donor-recipient matching deep learning model that complements surgeons' decision-making process by predicting whether to accept available kidneys for their patients with end-stage renal disease. Results: There is no significant racial bias in the model's predictions (p=1.0), indicating consistent outcomes across all racial combinations of donors and recipients. Gender-related effects, as shown in Figure 1, while statistically significant (p=0.008), showed minimal practical impact, with mean differences below 1% in prediction probabilities. Clinical factors involving diabetes and hypertension showed significant differences (p=4.21e-19). The combined presence of diabetes and hypertension in donors had the largest effect on predictions (mean difference up to -0.0173, p<0.05), followed by diabetes-only conditions in donors (mean difference up to -0.0166, p<0.05). These variations indicate bias against groups with comorbidities. Conclusions: The biases observed in the model highlight the need to improve the algorithm to ensure fairness in prediction.
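As an illustration of the kind of group-difference testing reported above, the following sketch compares simulated prediction probabilities between donor groups with a Welch t-test. The FAM model and its data are not public, so the numbers, group definitions, and effect size (a shift of roughly 1.7 percentage points, echoing the reported mean difference) are simulated stand-ins, not the study's analysis code.

```python
# Illustrative group-difference test; FAM and its data are not public,
# so these acceptance probabilities are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Donors with diabetes + hypertension, shifted down ~1.7 points to mimic
# the reported mean difference; comparison donors are unshifted.
p_comorbid = np.clip(rng.beta(8, 12, size=500) - 0.017, 0, 1)
p_healthy = rng.beta(8, 12, size=500)

t, p_value = stats.ttest_ind(p_comorbid, p_healthy, equal_var=False)
print(f"mean difference: {p_comorbid.mean() - p_healthy.mean():+.4f}")
print(f"Welch t-test p-value: {p_value:.3g}")
```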
Algorithms are used to aid decision-making for a wide range of public policy decisions. Yet the details of the algorithmic processes, and how to interact with the systems built on them, are often inadequately communicated to stakeholders, leaving them frustrated and distrusting of the outcomes. Transparency and accountability are critical prerequisites for building trust in the results of decisions and guaranteeing fair and equitable outcomes. Unfortunately, organizations and agencies have weak incentives to explain and clarify their decision processes; stakeholders, however, are not powerless and can strategically combine their efforts to push for more transparency. In this paper, I discuss the results and lessons learned from such an effort: a parent-led crowdsourcing campaign to increase transparency in the New York City school admission process. NYC famously uses a deferred-acceptance matching algorithm to assign students to schools, but families are given very little, and often wrong, information about the mechanics of the system in which they must participate. Furthermore, the odds of matching to specific schools depend on a complex set of priority rules and tie-breaking random (lottery) numbers, whose impact on the outcome is not made clear to students and their families, resulting in many "wasted choices" on students' ranked lists and a high rate of unmatched students. Using the results of a crowdsourced survey of school application outcomes, I was able to explain how random tie-breakers factored into admission, adding clarity and transparency to the process. The results highlighted several issues and inefficiencies in the match and made the case for more accountability and verification in the system.
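For readers unfamiliar with the mechanism, here is a compact sketch of textbook student-proposing deferred acceptance with a single lottery tie-breaker per student. NYC's production system layers program-specific priority rules and other details on top of this, so treat the sketch as a simplified model rather than the city's actual implementation.

```python
# Textbook student-proposing deferred acceptance with lottery tie-breaking.
# Simplified model of the mechanism discussed above, not NYC's actual code.
import random

def deferred_acceptance(student_prefs, capacities, priority, seed=0):
    """student_prefs: {student: [schools in preference order]}
    capacities: {school: number of seats}
    priority: {school: {student: priority class, lower = better}}
    Ties within a priority class are broken by one random lottery
    number per student (the 'single tie-breaker' design)."""
    rng = random.Random(seed)
    lottery = {s: rng.random() for s in student_prefs}
    next_choice = {s: 0 for s in student_prefs}
    held = {school: [] for school in capacities}
    free = list(student_prefs)

    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                      # list exhausted: student unmatched
        school = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[school].append(s)
        # keep the best `capacity` applicants by (priority class, lottery)
        held[school].sort(key=lambda x: (priority[school].get(x, 99), lottery[x]))
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())   # bump the worst-ranked student
    return held

# Tiny example: three students, two one-seat schools.
prefs = {"a": ["X", "Y"], "b": ["X", "Y"], "c": ["X"]}
caps = {"X": 1, "Y": 1}
prio = {"X": {"a": 1, "b": 1, "c": 2}, "Y": {}}
print(deferred_acceptance(prefs, caps, prio))
```

Even in this toy run, the lottery number alone decides which of the two equal-priority students wins the contested seat, which is exactly the opacity the crowdsourced survey set out to explain.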
While we typically focus on data visualizations as tools for facilitating cognitive tasks (e.g., learning facts, making decisions), we know relatively little about their second-order impacts on our opinions, attitudes, and values. For example, could design or framing choices interact with viewers' social cognitive biases in ways that promote political polarization? When reporting on U.S. attitudes toward public policies, it is popular to highlight the gap between Democrats and Republicans (e.g., with blue vs. red connected dot plots). But these charts may encourage social-normative conformity, influencing viewers' attitudes to match the divided opinions shown in the visualization. We conducted three experiments examining visualization framing in the context of social conformity and polarization. Crowdworkers viewed charts showing simulated polling results for public policy proposals. We varied framing (aggregating data as non-partisan "All US Adults," or partisan "Democrat" / "Republican") and the visualized groups' support levels. Participants then reported their own support for each policy. We found that participants' attitudes shifted significantly toward the group attitudes shown in the stimuli, and that this shift can increase inter-party attitude divergence. These results demonstrate that data visualizations can induce social conformity and accelerate political polarization. Choosing to visualize partisan divisions can divide us further.
We consider an online learning problem with one-sided feedback, in which the learner observes the true label only for positively predicted instances. On each round, k instances arrive and receive classification outcomes according to a randomized policy deployed by the learner, whose goal is to maximize accuracy while deploying individually fair policies. We first extend the framework of Bechavod et al. (2020), which relies on the existence of a human fairness auditor for detecting fairness violations, to instead incorporate feedback from dynamically selected panels of multiple, possibly inconsistent, auditors. We then construct an efficient reduction from our problem of online learning with one-sided feedback and a panel reporting fairness violations to the contextual combinatorial semi-bandit problem (Cesa-Bianchi & Lugosi, 2009; György et al., 2007). Finally, we show how to leverage the guarantees of two algorithms in the contextual combinatorial semi-bandit setting, Exp2 (Bubeck et al., 2012) and the oracle-efficient Context-Semi-Bandit-FTPL (Syrgkanis et al., 2016), to provide multi-criteria no-regret guarantees simultaneously for accuracy and fairness. Our results eliminate two potential sources of bias from prior work: the "hidden outcomes" that are unavailable to an algorithm operating in the full-information setting, and the human biases that might be present in any single auditor but can be mitigated by selecting a well-chosen panel.
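The core difficulty of one-sided feedback is easy to simulate: labels arrive only on positively classified instances, so a learner that never explores can lock in its mistakes. The toy epsilon-greedy logistic learner below illustrates the feedback protocol with one instance per round; it is not the paper's Exp2 or Context-Semi-Bandit-FTPL reduction, and all parameters are arbitrary choices for the sketch.

```python
# Minimal simulation of one-sided feedback: the true label is revealed only
# when the learner predicts positive. This toy epsilon-greedy learner is NOT
# the paper's algorithm; it only shows why exploration is needed here.
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([1.5, -2.0])   # hidden labeling rule
w_hat = np.zeros(2)              # learner's current estimate
eps, lr = 0.1, 0.05              # exploration rate, learning rate

for t in range(5000):
    x = rng.normal(size=2)
    predict_pos = (x @ w_hat > 0) or (rng.random() < eps)  # explore sometimes
    if predict_pos:
        y = 1.0 if x @ w_true > 0 else 0.0   # label seen only on positives
        p = 1.0 / (1.0 + np.exp(-(x @ w_hat)))
        w_hat += lr * (y - p) * x            # logistic update on observed rounds
    # on negative predictions, no feedback arrives this round

print("recovered direction:", w_hat / np.linalg.norm(w_hat))
print("true direction:     ", w_true / np.linalg.norm(w_true))
```

Setting eps to 0 in this sketch typically stalls learning on part of the instance space, which mirrors the "hidden outcomes" bias the paper's full construction is designed to eliminate with principled regret guarantees.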