Chatbot Catalysts: Improving Team Decision-Making Through Cognitive Diversity and Information Elaboration

As AI increasingly assists teams in decision-making, this study examines how the technology shapes team processes and performance. We conducted an online experiment on team decision-making assisted by chatbots and analyzed team interaction processes with computational methods. Teams assisted by a chatbot that offered information in the first half of their decision-making process performed better than those assisted by the chatbot in the second half. The effect was explained by variation in teams' information-sharing processes between the two chatbot conditions: when assisted by the chatbot in the first half of the task, teams showed higher levels of cognitive diversity (i.e., differences in the information they shared) and information elaboration (i.e., exchange and integration of information). The findings demonstrate that, if introduced early, AI can support team decision-making by acting as a catalyst that promotes information sharing.
- Award ID(s): 2105169
- PAR ID: 10545067
- Publisher / Repository: ICIS 2023 Proceedings
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
            Whereas artificial intelligence (AI) is increasingly used to facilitate team decision-making, little is known about how the timing of AI assistance may impact team performance. The study investigates this question with an online experiment in which teams completed a new product development task with assistance from a chatbot. Information needed for making the decision was distributed among the team members. The chatbot shared information critical to the decision in either the first half or second half of team interaction. The results suggest that teams assisted by the chatbot in the first half of the decision-making task made better decisions than those assisted by the chatbot in the second half. Analysis of team member perceptions and interaction processes suggests that having a chatbot at the beginning of team interaction may have generated a ripple effect in the team that promoted information sharing among team members.
- 
            The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by emotions like humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, by characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on their trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors like decision stakes and their interaction experiences.
- 
            Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process. To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions, and utilized these models to improve human-AI team performance. Meanwhile, due to the "black-box" nature of AI models, providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice. In this paper, we explore whether we can quantitatively model how humans integrate both AI recommendations and explanations into their decision process, and whether this quantitative understanding of human behavior from the learned model can be utilized to manipulate AI explanations, thereby nudging individuals towards making targeted decisions. Our extensive human experiments across various tasks demonstrate that human behavior can be easily influenced by these manipulated explanations towards targeted outcomes, regardless of whether the intent is adversarial or benign. Furthermore, individuals often fail to detect any anomalies in these explanations, despite their decisions being affected by them.