It Depends on the Timing: The Ripple Effect of AI on Team Decision-Making

As AI increasingly assists teams in decision-making, this study examines how the technology shapes team processes and performance. We conducted an online experiment on team decision-making assisted by chatbots and analyzed team interaction processes with computational methods. Teams assisted by a chatbot that offered information in the first half of their decision-making process performed better than those assisted by the chatbot in the second half. The effect was explained by variation in teams' information-sharing processes between the two chatbot conditions: when assisted by the chatbot in the first half of the task, teams showed higher levels of cognitive diversity (i.e., differences in the information members shared) and information elaboration (i.e., exchange and integration of information). The findings demonstrate that, if introduced early, AI can support team decision-making by acting as a catalyst that promotes team information sharing.
            Whereas artificial intelligence (AI) is increasingly used to facilitate team decision-making, little is known about how the timing of AI assistance may impact team performance. The study investigates this question with an online experiment in which teams completed a new product development task with assistance from a chatbot. Information needed for making the decision was distributed among the team members. The chatbot shared information critical to the decision in either the first half or second half of team interaction. The results suggest that teams assisted by the chatbot in the first half of the decision-making task made better decisions than those assisted by the chatbot in the second half. Analysis of team member perceptions and interaction processes suggests that having a chatbot at the beginning of team interaction may have generated a ripple effect in the team that promoted information sharing among team members. 
- Award ID(s): 2105169
- PAR ID: 10437259
- Date Published:
- Journal Name: Proceedings of the Hawaii International Conference on System Sciences
- ISSN: 0073-1129
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- As the integration of artificial intelligence (AI) into team decision-making continues to expand, it is both theoretically and practically pressing for researchers to understand the impact of the technology on team dynamics and performance. To investigate this relationship, we conducted an online experiment in which teams made decisions supported by chatbots and employed computational methods to analyze team interaction processes. Our results indicated that compared to those assisted by chatbots in later phases, teams receiving chatbot assistance during the initial phase of their decision-making process exhibited increased cognitive diversity (i.e., diversity in shared information) and information elaboration (i.e., exchange and integration of information). Ultimately, teams assisted by chatbots early on performed better. These results imply that introducing AI at the beginning of the process can enhance team decision-making by promoting effective information sharing among team members.
- Despite the growing interest in human-AI decision making, experimental studies with domain experts remain rare, largely due to the complexity of working with domain experts and the challenges in setting up realistic experiments. In this work, we conduct an in-depth collaboration with radiologists in prostate cancer diagnosis based on MRI images. Building on existing tools for teaching prostate cancer diagnosis, we develop an interface and conduct two experiments to study how AI assistance and performance feedback shape the decision making of domain experts. In Study 1, clinicians were asked to provide an initial diagnosis (human), then view the AI's prediction, and subsequently finalize their decision (human-AI team). In Study 2 (after a memory wash-out period), the same participants first received aggregated performance statistics from Study 1, specifically their own performance, the AI's performance, and their human-AI team performance, and then directly viewed the AI's prediction before making their diagnosis (i.e., no independent initial diagnosis). These two workflows represent realistic ways that clinical AI tools might be used in practice, where the second study simulates a scenario in which doctors can adjust their reliance on and trust in AI based on prior performance feedback. Our findings show that, while human-AI teams consistently outperform humans alone, they still underperform the AI due to under-reliance, similar to prior studies with crowdworkers. Providing clinicians with performance feedback did not significantly improve the performance of human-AI teams, although showing AI decisions in advance nudged people to follow the AI more. Meanwhile, we observe that the ensemble of human-AI teams can outperform the AI alone, suggesting promising directions for human-AI collaboration.
- AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI while increasing their under-reliance on it, regardless of whether the second opinion is generated by a peer or another AI model. However, if decision-makers can control when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaboration in decision-making.
- Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.
 An official website of the United States government