Title: Combining uncertainty information with AI recommendations supports calibration with domain knowledge
The use of Artificial Intelligence (AI) decision support is increasing in high-stakes contexts, such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with their domain knowledge. We conducted a human-subject experiment with an online sample to examine the effects of presenting uncertainty information alongside AI recommendations. The experimental stimuli and task, which involved identifying plant and animal images, were drawn from an existing deep learning image recognition model, a popular approach to AI. The uncertainty information consisted of predicted probabilities that each candidate label was the true label, presented both numerically and visually. We tested the effect of AI recommendations in a within-subject comparison and the effect of uncertainty information in a between-subject comparison. The results suggest that AI recommendations increased both participants' accuracy and their confidence. Further, providing uncertainty information significantly increased accuracy but not confidence, suggesting that it may be effective for reducing overconfidence. In this task, participants tended to have higher domain knowledge for animals than for plants, based on a self-reported measure of domain knowledge. Participants with more domain knowledge were appropriately less confident when uncertainty information was provided. This suggests that people use AI and uncertainty information differently depending on their level of domain knowledge, for example, treating the AI as an expert versus as a second opinion. These results suggest that, if presented appropriately, uncertainty information can reduce the overconfidence induced by AI recommendations.
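As an illustration of the kind of stimuli described above, the following is a minimal sketch of how top-k predicted probabilities might be obtained from a pretrained image classifier and shown both numerically and as a simple visual bar. The model choice (torchvision's ResNet-50), the text-bar rendering, and all names here are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: top-k label probabilities from a pretrained classifier,
# shown numerically and as a simple text "confidence bar".
# Model and presentation are illustrative assumptions, not the study's pipeline.
import torch
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def top_k_with_bars(image, k=5):
    """Print the k most probable labels with probabilities and text bars."""
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
        probs = torch.softmax(logits, dim=1).squeeze(0)
    top = torch.topk(probs, k)
    for p, idx in zip(top.values, top.indices):
        bar = "#" * round(20 * p.item())  # crude visual analogue of a 0-100% bar
        print(f"{labels[int(idx)]:<25s} {p.item():6.1%} |{bar}")
```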
Award ID(s): 2222801
PAR ID: 10539627
Author(s) / Creator(s):
Publisher / Repository: Routledge Taylor & Francis Group
Date Published:
Journal Name: Journal of Risk Research
Volume: 26
Issue: 10
ISSN: 1366-9877
Page Range / eLocation ID: 1137 to 1152
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Keathley, H.; Enos, J.; Parrish, M. (Eds.)
    The role of human-machine teams in society is increasing as big data and computing power explode. One popular approach to AI is deep learning, which is useful for classification, feature identification, and predictive modeling. However, deep learning models often suffer from inadequate transparency and poor explainability. One aspect of human systems integration is the design of interfaces that support human decision-making. AI models have multiple types of uncertainty embedded in them, which may be difficult for users to understand, and humans who use these tools need to understand how much to trust the AI. This study evaluates one simple approach for communicating uncertainty: a visual confidence bar ranging from 0 to 100%. We performed a human-subject online experiment using an existing image recognition deep learning model to test the effects of (1) providing single versus multiple recommendations from the AI and (2) including uncertainty information. For each image, participants described the subject in an open textbox and rated their confidence in their answers. Performance was evaluated at four levels of accuracy, ranging from an exact match with the image label to the correct category of the image. The results suggest that AI recommendations increase accuracy, even when the human and AI have different definitions of accuracy. In addition, providing multiple ranked recommendations, with or without the confidence bar, increases operator confidence and reduces perceived task difficulty. More research is needed to determine how people approach uncertain information from an AI system and to develop effective visualizations for communicating uncertainty.
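As a minimal illustration of the display described above, the sketch below renders ranked recommendations with 0-100% confidence bars using matplotlib. The plotting choices and example values are assumptions for illustration, not the study's actual interface.

```python
# Hypothetical sketch: rendering a 0-100% confidence bar for each of the
# top-k AI recommendations (matplotlib choice and example data are illustrative).
import matplotlib.pyplot as plt

def plot_confidence_bars(labels, probs):
    """Draw horizontal confidence bars (0-100%) for ranked recommendations."""
    fig, ax = plt.subplots(figsize=(5, 0.5 * len(labels) + 1))
    ax.barh(labels[::-1], [p * 100 for p in probs[::-1]])
    ax.set_xlim(0, 100)
    ax.set_xlabel("Model confidence (%)")
    for i, p in enumerate(probs[::-1]):
        ax.text(p * 100 + 1, i, f"{p:.0%}", va="center")
    plt.tight_layout()
    plt.show()

# Example: three ranked recommendations for one image (made-up values).
plot_confidence_bars(["golden retriever", "Labrador", "kuvasz"], [0.62, 0.21, 0.08])
```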
  2. When people receive advice while making difficult decisions, they often make better decisions in the moment and also increase their knowledge in the process. However, such incidental learning can only occur when people cognitively engage with the information they receive and process it thoughtfully. How do people process the information and advice they receive from AI, and do they engage with it deeply enough to enable learning? To answer these questions, we conducted three experiments in which individuals were asked to make nutritional decisions and received simulated AI recommendations and explanations. In the first experiment, we found that when people were presented with both a recommendation and an explanation before making their choice, they made better decisions than when they received no such help, but they did not learn. In the second experiment, participants first made their own choice and only then saw a recommendation and an explanation from AI; this condition also resulted in improved decisions, but no learning. However, in our third experiment, participants were presented with just an AI explanation, no recommendation, and had to arrive at their own decision. This condition led to both more accurate decisions and learning gains. We hypothesize that the learning gains in this condition were due to the deeper engagement with explanations that was needed to arrive at a decision. This work provides some of the most direct evidence to date that supplying people with AI-generated recommendations and explanations may not be sufficient to ensure that they engage carefully with the AI-provided information. It also presents one technique that enables incidental learning and, by implication, can help people process AI recommendations and explanations more carefully.
  3. There is currently a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research that aims to mitigate discriminatory bias in AI algorithms, e.g., along lines of gender, age, and race. While most research in this domain focuses on developing fair AI algorithms, in this work we examine the challenges that arise when humans and fair AI interact. Our results show that, due to an apparent conflict between human preferences and fairness, a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world. Using college major recommendation as a case study, we build a fair AI recommender by employing gender-debiasing machine learning techniques. Our offline evaluation showed that the debiased recommender makes fairer career recommendations without sacrificing prediction accuracy. Nevertheless, an online user study of more than 200 college students revealed that participants on average preferred the original biased system over the debiased one. Specifically, we found that perceived gender disparity is a determining factor in the acceptance of a recommendation; in other words, we cannot fully address gender bias in AI recommendations without addressing gender bias in humans. We conducted a follow-up survey to gain additional insight into the effectiveness of various design options that can help participants overcome their own biases. Our results suggest that making fair AI explainable is crucial for increasing its adoption in the real world.
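The abstract above does not specify which gender-debiasing technique was used, so the sketch below shows one standard method, reweighing (Kamiran & Calders), purely as an illustrative stand-in: training examples are reweighted so that gender and the outcome are statistically independent in the training data, and gender itself is never used as a model input.

```python
# Hypothetical sketch of one standard debiasing technique ("reweighing"):
# weight each example by w(g, y) = P(g) * P(y) / P(g, y), estimated from data.
# This is an illustrative stand-in, not the study's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(gender, label):
    """Compute reweighing sample weights from protected attribute and label."""
    weights = np.empty(len(gender))
    for g in np.unique(gender):
        for y in np.unique(label):
            mask = (gender == g) & (label == y)
            p_joint = mask.mean()
            if p_joint > 0:
                weights[mask] = (gender == g).mean() * (label == y).mean() / p_joint
    return weights

# X: student features (grades, interests, ...); y: 1 if the student chose the major.
# gender is used only to compute weights, never as a model input.
def fit_debiased_recommender(X, y, gender):
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=reweigh(gender, y))
    return model
```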
  4. This manuscript shares findings from a study engaging secondary mathematics preservice teachers (PSTs) in using Artificial Intelligence (AI) chatbots to design mathematics lesson plans. Phenomenology was employed to investigate how six secondary PSTs used AI chatbots and navigated this new resource in light of their own knowledge and experience developing culturally responsive mathematics lesson plans that included both mathematics and social justice goals. Our data analysis revealed that the PSTs' confidence in their Mathematical Content Knowledge and Pedagogical Content Knowledge made them more critical of the AI-generated lesson plans. This finding contrasts with previous research on elementary education preservice teachers, who ceded their decision-making agency to AI chatbots, especially regarding mathematics. The findings also indicate that AI tools can help teachers learn about Technological Pedagogical Knowledge (TPK). Overall, the data stressed the need to support PSTs in using AI chatbots critically. This study suggests possible ways to help PSTs overcome overconfidence in AI chatbots and implies that more professional development tools and programs must be constructed to help inservice teachers use AI tools.
  5. College students may have limited access to produce and may lack confidence in preparing it, but cooking videos can show how to make healthy dishes. The Cognitive Theory of Multimedia Learning suggests that learning is enhanced when visual and auditory information is presented in ways that manage cognitive load (e.g., highlighting important concepts, eliminating extraneous information, and keeping the video brief and conversational). The purpose of this project was to pilot test a food label for produce grown at an urban university and to assess whether student confidence in preparing produce improved after using the label and a QR code to view a recipe video developed using principles from the Cognitive Theory of Multimedia Learning. The video showed a student preparing a salad with ingredients available on campus. Students indicated that the label was helpful and reported greater perceived confidence in preparing lettuce after viewing the label and video (mean confidence 5.60 ± 1.40 before vs. 6.14 ± 0.89 after, p = 0.016, n = 28). Keeping the video short and providing ingredients and amounts onscreen as text were cited as helpful. Thus, a brief cooking video and an interactive label may improve confidence in preparing produce available on campus. Future work should determine whether the label affects produce consumption and whether the effect varies with the type of produce used.
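The before/after confidence comparison reported above is consistent with a paired test; the abstract does not state which test was used, so the sketch below shows a paired t-test on made-up ratings purely for illustration.

```python
# Hypothetical sketch: a paired t-test of before/after confidence ratings,
# of the kind that could produce the statistics reported above.
# The ratings below are made-up, not the study's data.
import numpy as np
from scipy import stats

before = np.array([5, 6, 4, 7, 6, 5, 6, 5])  # 1-7 confidence before viewing
after = np.array([6, 6, 5, 7, 7, 6, 7, 6])   # 1-7 confidence after viewing

t, p = stats.ttest_rel(after, before)
print(f"mean before = {before.mean():.2f} ± {before.std(ddof=1):.2f}")
print(f"mean after  = {after.mean():.2f} ± {after.std(ddof=1):.2f}")
print(f"paired t = {t:.2f}, p = {p:.3f}")
```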