

Title: "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Award ID(s):
1763642
PAR ID:
10514468
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9781450394215
Page Range / eLocation ID:
1 to 17
Format(s):
Medium: X
Location:
Hamburg Germany
Sponsoring Org:
National Science Foundation
More Like this
  1. Hoadley, C; Wang, XC (Ed.)
    In this paper, we present a case study of designing AI-human partnerships in a real-world context of science classrooms. We designed a classroom environment where AI technologies, teachers, and peers worked synergistically to support students' writing in science. In addition to an NLP algorithm to automatically assess students' essays, we also designed (i) feedback that was easier for students to understand; (ii) participatory structures in the classroom focusing on reflection, peer review, and discussion; and (iii) scaffolding by teachers to help students understand the feedback. Our results showed that students improved their written explanations after receiving feedback and engaging in reflection activities. Our case study illustrates that Augmented Intelligence (USDoE, 2023), in which the strengths of AI complement the strengths of teachers and peers while also overcoming the limitations of each, can provide multiple forms of support to foster learning and teaching.
  2. AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI while increasing their under-reliance on AI, regardless of whether the second opinion is generated by a peer or another AI model. However, if decision-makers have the control to decide when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaborations in decision-making.