- 
            Purpose: The equitable distribution of donor kidneys is crucial to maximizing transplant success rates and addressing disparities in healthcare. This study examines potential gender bias in the Deceased Donor Organ Allocation (DDOA) model by using machine learning and AI to analyze its impact on kidney discard decisions and to assess its fairness against medical ethics standards. Methods: The study employs the DDOA model (https://ddoa.mst.hekademeia.org/#/kidney) to predict the discard probability of deceased donor kidneys using donor characteristics from the OPTN Deceased Donor Dataset (2016-2023). Interpreted with the SRTR SAF dictionary, the dataset consists of 18,029 donor records. Each record was scored with only the donor gender attribute changed, and ANOVA and a t-test were used to determine whether the discard percentages for female and male donors differ significantly. If the p-value obtained from the t-test is less than the significance level (0.05), we reject the null hypothesis and conclude that there is a significant difference; otherwise, we fail to reject the null hypothesis. Results: Figure 1 visualizes the differences in discard percentages between female and male donor kidneys; an unbiased allocation system would be expected to show no difference (a value of zero). The t-test comparing female and male kidney discard rates yielded a t-statistic of 29.690228 with a p-value of 3.586956e-189, far below the 0.05 significance threshold. This result leads to the rejection of the null hypothesis: altering only the donor gender attribute produces a significant difference in mean discard probability, indicating that gender plays a significant role in the DDOA model's discard decisions. Conclusions: The study highlights that changing the donor gender attribute alone significantly shifts predicted kidney discard rates in the DDOA model. These findings reinforce the need for greater transparency in organ allocation models and a reconsideration of the demographic criteria used in the evaluation process. Future research should refine algorithms to minimize biases in organ allocation and investigate kidney discard disparities in transplantation.
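
To make the gender-flip procedure concrete, here is a minimal Python sketch of scoring each donor record twice, changing only the gender attribute, and testing the paired scores. The predict_discard() function is a toy stand-in for the DDOA model, the feature and attribute names are hypothetical, and the study's exact test variant (paired vs. independent samples) may differ.

```python
# Sketch of a gender-flip sensitivity test; predict_discard() is a toy
# stand-in for the DDOA model, not the real one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def predict_discard(kdpi, gender):
    """Toy logistic stand-in for the DDOA discard-probability model."""
    return 1 / (1 + np.exp(-(0.05 * (kdpi - 50) + (0.2 if gender == "F" else 0.0))))

# Score each donor record twice, changing only the gender attribute.
kdpi = rng.uniform(1, 100, size=1000)  # hypothetical donor KDPI scores
p_female = np.array([predict_discard(k, "F") for k in kdpi])
p_male = np.array([predict_discard(k, "M") for k in kdpi])

# Paired t-test: the two scores per record differ only in gender.
t_stat, p_value = stats.ttest_rel(p_female, p_male)
print(f"t = {t_stat:.3f}, p = {p_value:.3g}, reject H0: {p_value < 0.05}")
```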
- 
            Purpose: AI models for kidney transplant acceptance must be rigorously evaluated for bias to ensure equitable healthcare access. This study investigates demographic and clinical biases in the Final Acceptance Model (FAM), a donor-recipient matching deep learning model that complements surgeons' decision-making by predicting whether to accept available kidneys for patients with end-stage renal disease. Results: There is no significant racial bias in the model's predictions (p=1.0), indicating consistent outcomes across all donor-recipient racial combinations. Gender-related effects, shown in Figure 1, while statistically significant (p=0.008), had minimal practical impact, with mean differences below 1% in prediction probabilities. Clinical factors involving diabetes and hypertension showed a significant difference (p=4.21e-19). The combined presence of diabetes and hypertension in donors had the largest effect on predictions (mean difference up to -0.0173, p<0.05), followed by diabetes-only conditions in donors (mean difference up to -0.0166, p<0.05). These variations in clinical factor predictions indicate bias against groups with comorbidities. Conclusions: The biases observed in the model highlight the need to improve the algorithm to ensure fairness in its predictions.
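
As an illustration of the comorbidity comparison reported above, the following sketch computes the mean prediction difference between donors with both diabetes and hypertension and donors with neither, on synthetic data. The column names (pred, diabetes, hypertension) and effect sizes are invented stand-ins, not the FAM's actual data or code.

```python
# Sketch of a comorbidity-group comparison on synthetic FAM-like predictions.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
})
# Toy predictions: a small penalty for each comorbidity, plus noise.
df["pred"] = (0.5 - 0.009 * df.diabetes - 0.008 * df.hypertension
              + rng.normal(0, 0.02, n))

neither = df.loc[(df.diabetes == 0) & (df.hypertension == 0), "pred"]
both = df.loc[(df.diabetes == 1) & (df.hypertension == 1), "pred"]

# Welch's t-test on the mean prediction difference between the two groups.
t_stat, p_value = stats.ttest_ind(both, neither, equal_var=False)
print(f"mean diff = {both.mean() - neither.mean():.4f}, p = {p_value:.3g}")
```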
- 
            Purpose: This work introduces the TSFAM model, an adaptive human-AI teaming framework designed to enhance hard-to-place kidney acceptance decision-making by integrating transplant surgeons' individualized expertise with advanced AI analytics (Figure 1). Methods: TSFAM is an innovative solution for complex issues in kidney transplant decision-making support. It employs fuzzy associative memory to capture and codify the unique decision-making rules of transplant surgeons. Using the Deceased Donor Organ Assessment (DDOA) and Final Acceptance AI models designed to evaluate hard-to-place kidneys, TSFAM integrates fuzzy logic with deep learning techniques to manage the inherent uncertainties in donor organ assessments. Surgeon-specific ontologies and membership functions are extracted through interviews. Similar to how a pain scale is used to understand patients, an ontology ambiguity scale is used to develop surgeon rules (Figure 2). Fuzzy logic captures ambiguity and enables the model to adapt to evolving clinical, environmental, and policy conditions. The structured incorporation of human expertise ensures that decision support remains closely aligned with local clinical practices and global best evidence. Results: This novel framework incorporates human expertise into AI decision-making tools to support donor organ acceptance in transplantation. Integrating surgeon-defined criteria into a robust decision-support tool enhances the accuracy and transparency of organ allocation decision-making support. TSFAM bridges the gap between data-driven models and the nuanced judgment required in complex clinical scenarios, fostering trust and promoting responsible AI adoption. Conclusions: TSFAM fuses deep learning analytics with the subtleties of human expertise, offering a promising pathway to improve decision-making support in transplant surgery. The framework enhances clinical assessment and sets a precedent for future systems prioritizing human-AI collaboration. Prospective studies will focus on clinical implementation with dynamic interfaces for a more patient-centered, evidence-based model in organ transplantation. The intent is for this approach to be adaptable to individual case scenarios and the diverse needs of key transplant team members.
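
As a rough illustration of the fuzzy-logic machinery TSFAM builds on, the sketch below encodes one hypothetical surgeon rule with triangular membership functions. The variables, breakpoints, and rule are invented for demonstration and are not taken from the actual model.

```python
# Sketch of fuzzy rule evaluation with triangular membership functions;
# the rule and thresholds are hypothetical examples, not TSFAM's rules.
def triangular(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls back to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical memberships for donor KDPI and cold ischemia time (hours).
kdpi_high = lambda kdpi: triangular(kdpi, 60, 85, 100)
cit_long = lambda cit: triangular(cit, 12, 24, 36)

def rule_decline_strength(kdpi, cit):
    # Example surgeon rule: IF KDPI is high AND cold ischemia is long
    # THEN inclination to decline is strong (AND = min, Mamdani-style).
    return min(kdpi_high(kdpi), cit_long(cit))

print(rule_decline_strength(kdpi=90, cit=20))  # partial truth in [0, 1]
```

The output is a graded truth value in [0, 1] rather than a hard yes/no, which is what lets such rules capture the kind of ambiguity the ontology ambiguity scale elicits from surgeons.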
- 
            Transplantation provides patients suffering from end-stage kidney disease with a better quality of life and long-term survival. However, over 20% of deceased donor kidneys go unutilized and are never transplanted. While this is sometimes medically appropriate, it also reflects missed opportunities. We are designing Artificial Intelligence decision support for the kidney offer process to support both demand at the transplant center and supply at the organ procurement organization. This includes (1) developing deep learning models, (2) evaluating the effect of explainable interfaces, (3) improving fairness in the model output, (4) identifying factors that influence adoption decisions, and (5) conducting a randomized controlled trial, using an ecologically valid and realistic simulation platform for behavioral experiments, to estimate the impact on kidney utilization.
- 
            Modern kidney placement incorporates several intelligent recommendation systems which exhibit social discrimination due to biases inherited from training data. Although initial attempts have been made in the literature to study algorithmic fairness in kidney placement, these methods replace true outcomes with surgeons' decisions because of the long delays involved in recording such outcomes reliably. However, replacing true outcomes with surgeons' decisions disregards expert stakeholders' biases as well as the social opinions of other stakeholders who do not possess medical expertise. This paper addresses the latter concern and designs a novel fairness feedback survey to evaluate an acceptance rate predictor (ARP) that predicts a kidney's acceptance rate for a given kidney-match pair. The survey was launched on Prolific, a crowdsourcing platform, and public opinions were collected from 85 anonymous crowd participants. A novel social fairness preference learning algorithm is proposed based on minimizing social feedback regret, computed using a novel logit-based fairness feedback model. The proposed model and learning algorithm are both validated using simulation experiments as well as Prolific data. Public preferences towards group fairness notions in the context of kidney placement are estimated and discussed in detail. The specific ARP tested in the Prolific survey was deemed fair by the participants.
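
To give a flavor of what a logit-based fairness feedback model can look like, the sketch below fits a logistic model of the probability that a participant rates the predictor "fair" as a function of a fairness-gap score, on synthetic feedback. The feature choice, data, and plain gradient-descent loop are assumptions for illustration, not the paper's actual algorithm.

```python
# Sketch of a logit-style fairness-feedback model on synthetic survey data.
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic feedback: larger group-fairness gaps make "fair" ratings rarer.
gaps = rng.uniform(0, 1, 500)                 # hypothetical fairness-gap scores
labels = (rng.uniform(size=500) < sigmoid(2.0 - 5.0 * gaps)).astype(float)

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):                         # gradient descent on logistic log-loss
    p = sigmoid(w * gaps + b)
    w -= lr * np.mean((p - labels) * gaps)
    b -= lr * np.mean(p - labels)
print(f"fitted slope {w:.2f} (negative: bigger gaps -> less likely rated fair)")
```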
- 
            The rise of complex AI systems in healthcare and other sectors has led to a growing area of research called Explainable AI (XAI), designed to increase transparency. In this area, quantitative and qualitative studies focus on improving user trust and task performance by providing system- and prediction-level XAI features. We analyze stakeholder engagement events (interviews and workshops) on the use of AI for kidney transplantation, and from these we identify themes that frame a scoping literature review of current XAI features. The stakeholder engagement process lasted over nine months, covering three stakeholder groups' workflows, determining where AI could intervene, and assessing a mock XAI decision support system. Based on the stakeholder engagement, we identify four major themes relevant to designing XAI systems: 1) use of AI predictions, 2) information included in AI predictions, 3) personalization of AI predictions for individual differences, and 4) customization of AI predictions for specific cases. Using these themes, our scoping literature review finds that providing AI predictions before, during, or after decision-making can be beneficial depending on the complexity of the stakeholder's task. Additionally, expert stakeholders such as surgeons prefer minimal to no XAI features beyond the AI prediction and uncertainty estimates for easy use cases. However, almost all stakeholders prefer to have optional XAI features to review when needed, especially in hard-to-predict cases. The literature also suggests that providing both system- and prediction-level information is necessary for users to build an appropriate mental model of the system. Although XAI features improve users' trust in the system, human-AI team performance is not always enhanced. Overall, stakeholders prefer to have agency over the XAI interface to control the level of information based on their needs and task complexity. We conclude with suggestions for future research, especially on customizing XAI features based on preferences and tasks.
- 
            Combining uncertainty information with AI recommendations supports calibration with domain knowledge. The use of Artificial Intelligence (AI) decision support is increasing in high-stakes contexts, such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with their domain knowledge. We conducted a human-subject experiment with an online sample to examine the effects of presenting uncertainty information alongside AI recommendations. The experimental stimuli and task, which involved identifying plant and animal images, came from an existing image-recognition deep learning model, a popular approach to AI. The uncertainty information consisted of predicted probabilities for whether each label was the true label, presented both numerically and visually. In the study, we tested the effect of AI recommendations in a within-subject comparison and of uncertainty information in a between-subject comparison. The results suggest that AI recommendations increased both participants' accuracy and confidence. Further, providing uncertainty information significantly increased accuracy but not confidence, suggesting that it may be effective in reducing overconfidence. In this task, participants tended to have higher domain knowledge for animals than for plants, based on a self-reported measure of domain knowledge. Participants with more domain knowledge were appropriately less confident when uncertainty information was provided. This suggests that people use AI and uncertainty information differently, such as an expert versus a second opinion, depending on their level of domain knowledge. These results suggest that, if presented appropriately, uncertainty information can decrease the overconfidence induced by using AI recommendations.
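
As an illustration of the kind of numeric uncertainty display described, the sketch below formats an AI recommendation together with the predicted probabilities for the top candidate labels; the labels and scores are invented placeholders, not the study's stimuli.

```python
# Sketch of a top-k probability display for an AI recommendation.
import numpy as np

def format_recommendation(labels, probs, k=3):
    """Show the top label plus predicted probabilities for the top-k labels."""
    order = np.argsort(probs)[::-1][:k]
    lines = [f"AI suggests: {labels[order[0]]}"]
    lines += [f"  {labels[i]}: {probs[i]:.0%}" for i in order]
    return "\n".join(lines)

print(format_recommendation(
    ["red fox", "coyote", "gray wolf"], np.array([0.62, 0.27, 0.11])))
```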