Teamwork is a set of interrelated reasoning, actions, and behaviors of team members that facilitate common objectives. Teamwork theory and experiments have produced a set of states and processes for team effectiveness in both human-human and agent-agent teams. Human-agent teaming, however, is less well studied: it is comparatively new and involves asymmetries in policy and intent that are absent from human-human teams. To optimize team performance in human-agent teaming, it is critical that agents infer human intent and adapt their policies for smooth coordination. Most of the literature on human-agent teaming builds agents that reference a learned human model. Although such agents are guaranteed to perform well with the learned model, they place strong assumptions on the human policy, such as optimality and consistency, which are unlikely to hold in many real-world scenarios. In this paper, we propose a novel adaptive agent architecture in a human-model-free setting for a two-player cooperative game, Team Space Fortress (TSF). Previous human-human team research has shown complementary policies in TSF and diversity in human players' skill, which encourages us to relax the assumptions on human policy. We therefore discard learning human models from human data and instead use an adaptation strategy over a pre-trained library of exemplar policies, composed of RL-trained and rule-based agents, that makes minimal assumptions about human behavior. The adaptation strategy relies on a novel similarity metric to infer the human's policy and then selects the most complementary policy in the library to maximize team performance. The adaptive agent architecture can be deployed in real time and generalizes to any off-the-shelf static agents. We conducted human-agent experiments to evaluate the proposed framework and demonstrated the suboptimality, diversity, and adaptability of human policies in human-agent teams.
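The abstract leaves the similarity metric and the selection rule unspecified, so the sketch below is only one plausible reading of the adaptation loop, not the paper's actual method: it assumes the metric is mean action agreement between the observed human trajectory and each exemplar policy, and that a hand-built lookup table maps each inferred human policy to its most complementary teammate. All identifiers (similarity, adapt, complement_of, and so on) are hypothetical.

```python
# Minimal sketch of the described adaptation loop, under the assumptions
# stated above: infer which exemplar policy the human most resembles,
# then deploy the pre-trained library policy judged most complementary
# to it. The action-agreement metric and the complement table are
# illustrative stand-ins for the paper's unspecified choices.
from typing import Callable, Dict, List, Tuple

State = Tuple[float, ...]        # placeholder encoding of a TSF game state
Action = int                     # discrete action id
Policy = Callable[[State], Action]

def similarity(trajectory: List[Tuple[State, Action]],
               exemplar: Policy) -> float:
    """Fraction of observed human actions the exemplar would also have taken."""
    if not trajectory:
        return 0.0
    matches = sum(exemplar(s) == a for s, a in trajectory)
    return matches / len(trajectory)

def adapt(trajectory: List[Tuple[State, Action]],
          human_exemplars: Dict[str, Policy],
          agent_library: Dict[str, Policy],
          complement_of: Dict[str, str]) -> Policy:
    """Select the library policy most complementary to the inferred human policy."""
    inferred = max(human_exemplars,
                   key=lambda name: similarity(trajectory, human_exemplars[name]))
    return agent_library[complement_of[inferred]]
```

Because inference reduces to scoring a fixed library and selection to a table lookup, a loop like this can run in real time and can wrap any off-the-shelf static agents, consistent with the deployment claim above.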
                    More Like this
With multi-agent teams becoming more of a reality every day, it is important to create a common design model for them. These teams need to function in dynamic environments while still communicating with any humans who need a problem solved. Existing human-agent research can be used to purposefully create multi-agent teams that are interdependent yet can still interact with humans. The most effective way to cope with the dynamic nature of modern workloads is a dynamic team configuration, rather than individual member-agents that change their roles. Multi-agent teams will require a variety of agents designed to cover the diverse set of problems that arise in the modern workforce. A model based on existing multi-agent teams that satisfies the needs of human-agent teams has been created to serve as a baseline for human-interactive multi-agent teams.