Title: It Depends on the Timing: The Ripple Effect of AI on Team Decision-Making
While artificial intelligence (AI) is increasingly used to facilitate team decision-making, little is known about how the timing of AI assistance may impact team performance. This study investigates that question with an online experiment in which teams completed a new product development task with assistance from a chatbot. Information needed to make the decision was distributed among the team members, and the chatbot shared information critical to the decision in either the first or second half of the team interaction. The results suggest that teams assisted by the chatbot in the first half of the decision-making task made better decisions than those assisted in the second half. Analysis of team member perceptions and interaction processes suggests that having a chatbot at the beginning of team interaction may have generated a ripple effect that promoted information sharing among team members.
Award ID(s):
2105169
NSF-PAR ID:
10437259
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Hawaii International Conference on System Sciences
ISSN:
0073-1129
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. As AI increasingly assists teams in decision-making, this study examines how the technology shapes team processes and performance. We conducted an online experiment in which chatbots assisted team decision-making and analyzed team interaction processes with computational methods. We found that teams assisted by a chatbot offering information in the first half of their decision-making process performed better than teams assisted in the second half. The effect was explained by variation in teams' information-sharing processes between the two chatbot conditions. When assisted by the chatbot in the first half of the decision-making task, teams showed higher levels of cognitive diversity (i.e., differences in the information members shared) and information elaboration (i.e., exchange and integration of information). The findings demonstrate that, if introduced early, AI can support team decision-making by acting as a catalyst that promotes information sharing.
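The abstract defines cognitive diversity only informally, as the difference in the information team members shared, and does not publish a computation. As a purely hypothetical sketch (not the study's actual measure), one way such a metric could be operationalized is the average pairwise Jaccard distance between the sets of information items each member shared:

```python
# Hypothetical illustration: "cognitive diversity" operationalized as the
# mean pairwise Jaccard distance between members' shared-information sets.
# This is an assumed toy metric, not the measure used in the study.
from itertools import combinations


def jaccard_distance(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|; defined as 0 when both sets are empty."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)


def cognitive_diversity(shared: list) -> float:
    """Mean pairwise Jaccard distance across all member pairs."""
    pairs = list(combinations(shared, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)


# Three members who each shared two partially overlapping items:
team = [{"cost", "market"}, {"cost", "design"}, {"market", "design"}]
print(round(cognitive_diversity(team), 3))  # → 0.667
```

Each pair of members here overlaps on exactly one of three total items, so every pairwise distance is 2/3; a team whose members all shared identical information would score 0, and fully disjoint contributions would score 1.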
  2. As the integration of artificial intelligence (AI) into team decision-making continues to expand, it is both theoretically and practically pressing for researchers to understand the impact of the technology on team dynamics and performance. To investigate this relationship, we conducted an online experiment in which teams made decisions supported by chatbots and employed computational methods to analyze team interaction processes. Our results indicated that compared to those assisted by chatbots in later phases, teams receiving chatbot assistance during the initial phase of their decision-making process exhibited increased cognitive diversity (i.e., diversity in shared information) and information elaboration (i.e., exchange and integration of information). Ultimately, teams assisted by chatbots early on performed better. These results imply that introducing AI at the beginning of the process can enhance team decision-making by promoting effective information sharing among team members. 
  3. AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that when the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI but increase their under-reliance on it, regardless of whether the second opinion is generated by a peer or by another AI model. However, when decision-makers can control when to solicit a peer's second opinion, their active solicitation of second opinions can mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaboration in decision-making.
  4. Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human–AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions. 
  5. Proper calibration of human reliance on AI is fundamental to achieving complementary performance in AI-assisted human decision-making. Most previous work has assessed user reliance, and more broadly trust, retrospectively through user perceptions and task-based measures. In this work, we explore the relationship between eye gaze and reliance under varying task difficulties and AI performance levels in a spatial reasoning task. Our results show a strong positive correlation between the percentage of gaze duration on the AI suggestion and both user-AI task agreement and perceived reliance. Moreover, user agency is preserved, particularly when the task is easy and when AI performance is low or inconsistent. Our results also reveal nuanced differences between reliance and trust. We discuss the potential of using eye gaze to gauge human reliance on AI in real time, enabling adaptive AI assistance for optimal human-AI team performance.