- Generative AI (GenAI) has brought both opportunities and challenges to higher education as it integrates into teaching and learning environments. As instructors navigate this new landscape, understanding their engagement with and attitudes toward GenAI is crucial. In March 2024, we surveyed 178 instructors from a single U.S. university to examine their current practices, perceptions, trust, and distrust of GenAI in higher education. While most surveyed instructors reported moderate to high familiarity with GenAI-related concepts, their actual use of GenAI tools for direct instructional tasks remained limited. Our quantitative results show that trust and distrust in GenAI are related yet distinct; high trust does not necessarily imply low distrust, and vice versa. We also found significant differences in surveyed instructors' familiarity with GenAI across trust and distrust groups. Our qualitative results reveal nuanced manifestations of trust and distrust among surveyed instructors and various approaches to supporting calibrated trust in GenAI. We discuss practical implications focused on (dis)trust calibration among instructors. Free, publicly accessible full text available June 1, 2026.
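The abstract does not specify which statistical test underlies the reported group differences; as a rough illustration of how familiarity scores might be compared across trust (or distrust) groups, the sketch below runs a one-way ANOVA with SciPy. The data file, column names, and group labels are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: comparing GenAI familiarity across trust groups.
# "instructor_survey.csv", "familiarity", and "trust_group" are illustrative names.
import pandas as pd
from scipy import stats

df = pd.read_csv("instructor_survey.csv")  # one row per surveyed instructor

# Split familiarity scores by trust group (e.g. low / medium / high trust).
groups = [g["familiarity"].dropna() for _, g in df.groupby("trust_group")]

# One-way ANOVA: does mean familiarity differ across the groups?
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

The same pattern would apply to distrust groups by grouping on a distrust label instead.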
- Social anxiety (SA) has become increasingly prevalent, and traditional coping strategies often face accessibility challenges. Generative AI (GenAI) chatbots, known for their knowledgeable and conversational capabilities, are emerging as alternative tools for mental well-being. As GenAI becomes more widely integrated, it is important to examine individuals' attitudes toward and trust in GenAI chatbots' support for SA. Through a mixed-methods approach involving surveys (n = 159) and interviews (n = 17), we found that individuals with severe symptoms tended to trust and embrace GenAI chatbots more readily, valuing their non-judgmental support and perceived emotional comprehension, whereas those with milder symptoms prioritized technical reliability. We identified factors influencing trust, such as GenAI chatbots' ability to generate empathetic responses and their context-sensitive limitations, which were particularly important among individuals with SA. We also discuss the design implications and use of GenAI chatbots in fostering cognitive and emotional trust, along with practical and design considerations. Free, publicly accessible full text available April 25, 2026.
- Significant advancements have occurred in the application of Large Language Models (LLMs) to social simulations. Despite this, their ability to perform teaming in task-oriented social events remains underexplored. Such capabilities are crucial if LLMs are to effectively mimic human-like social behaviors and form efficient teams to solve tasks. To bridge this gap, we introduce MetaAgents, a social simulation framework populated with LLM-based agents. MetaAgents facilitates agent engagement in conversations and a series of decision-making steps within social contexts, serving as an appropriate platform for investigating interactions and interpersonal decision-making among agents. In particular, we construct a job fair environment as a case study to scrutinize the team-assembly and skill-matching behaviors of LLM-based agents. We combine quantitative metric evaluation with qualitative text analysis to assess their teaming abilities at the job fair. Our evaluation demonstrates that LLM-based agents perform competently in making rational decisions to develop efficient teams. However, we also identify limitations that hinder their effectiveness in more complex team-assembly tasks. Our work provides valuable insights into the role and evolution of LLMs in task-oriented social simulations. Free, publicly accessible full text available April 1, 2026.
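The abstract describes MetaAgents only at a high level, so the following is a minimal, hypothetical sketch of how LLM-based agents might converse and assemble a team at a simulated job fair. The `query_llm` helper, the `Agent` class, and the greedy skill-matching logic are illustrative assumptions, not the framework's actual API.

```python
# Hypothetical sketch of an LLM-agent "job fair" team-assembly loop.
# query_llm is a stand-in for whatever chat-completion call the framework uses.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skills: list[str]
    memory: list[str] = field(default_factory=list)

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call (plug in a real model client here)."""
    raise NotImplementedError

def interview(recruiter: Agent, candidate: Agent, needed: list[str]) -> bool:
    """Recruiter asks the LLM whether the candidate fits the open role."""
    prompt = (
        f"{recruiter.name} needs skills {needed}. "
        f"{candidate.name} reports skills {candidate.skills}. "
        "Answer YES or NO: should this candidate join the team?"
    )
    reply = query_llm(prompt)
    recruiter.memory.append(reply)  # keep a simple conversation memory
    return reply.strip().upper().startswith("YES")

def assemble_team(recruiter: Agent, candidates: list[Agent],
                  required_skills: list[str]) -> list[Agent]:
    """Greedy team assembly: accept candidates until required skills are covered."""
    team, uncovered = [], set(required_skills)
    for candidate in candidates:
        if uncovered and interview(recruiter, candidate, sorted(uncovered)):
            team.append(candidate)
            uncovered -= set(candidate.skills)
    return team
```

A quantitative evaluation of the kind the paper mentions could then score the assembled team, for example by the fraction of required skills covered, while qualitative analysis would inspect the recruiter's recorded conversation memory.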