Combining uncertainty information with AI recommendations supports calibration with domain knowledge
The use of Artificial Intelligence (AI) decision support is increasing in
high-stakes contexts, such as healthcare, defense, and finance. Uncertainty
information may help users better leverage AI predictions, especially
when combined with their domain knowledge. We conducted a
human-subject experiment with an online sample to examine the effects
of presenting uncertainty information with AI recommendations. The
experimental stimuli and task, which included identifying plant and
animal images, are from an existing image recognition deep learning
model, a popular approach to AI. The uncertainty information was predicted
probabilities for whether each label was the true label. This information
was presented numerically and visually. In the study, we tested
the effect of AI recommendations in a within-subject comparison and
uncertainty information in a between-subject comparison. The results
suggest that AI recommendations increased both participants’ accuracy
and confidence. Further, providing uncertainty information significantly
increased accuracy but not confidence, suggesting that it may be effective
for reducing overconfidence. Based on a self-reported measure,
participants in this task tended to have higher domain knowledge
for animals than for plants. Participants with more domain knowledge
were appropriately less confident when uncertainty information
was provided. This suggests that people use AI and uncertainty information
differently depending on their level of domain knowledge, for example
treating the AI as an expert versus as a second opinion. These results suggest that if presented
appropriately, uncertainty information can potentially decrease
overconfidence that is induced by using AI recommendations.