Effective human-AI collaboration requires agents to adapt their roles and levels of support based on human needs, task requirements, and task complexity. Traditional human-AI teaming often relies on a pre-determined robot communication scheme, restricting teamwork adaptability in complex tasks. Leveraging the strong communication capabilities of Large Language Models (LLMs), we propose the Human-Robot Teaming framework with Multi-Modal Language feedback (HRT-ML), designed to enhance human-robot interaction by adjusting the frequency and content of language-based feedback. The HRT-ML framework includes two core modules: a Coordinator for high-level, low-frequency strategic guidance and a Manager for task-specific, high-frequency instructions, enabling passive and active interactions with human teammates. To assess the impact of language feedback in collaborative scenarios, we conducted experiments in an enhanced Overcooked-AI game environment with varying levels of task complexity (easy, medium, hard) and feedback frequency (inactive, passive, active, superactive). Our results show that as task complexity increases relative to human capabilities, human teammates exhibit stronger preferences for robotic agents that can offer frequent, proactive support. However, when task complexity exceeds the LLM's capacity, noisy and inaccurate feedback from superactive agents can instead hinder team performance, as it requires human teammates to expend more effort interpreting and responding to a large volume of communication, with limited performance return. Our results offer a general principle for robotic agents: dynamically adjust the level and frequency of communication to work seamlessly with humans and achieve improved teaming performance.
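A minimal sketch of the two-module feedback structure described in the abstract (a low-frequency Coordinator and a high-frequency Manager, gated by the four feedback levels). The class names, the gating rules, and the LLM call are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: module names, feedback-level gating, and the LLM
# interface are assumptions based on the abstract, not the published code.
from enum import Enum


class FeedbackLevel(Enum):
    INACTIVE = 0     # no language feedback at all
    PASSIVE = 1      # respond only when the human explicitly asks
    ACTIVE = 2       # offer instructions whenever the task state warrants it
    SUPERACTIVE = 3  # comment on nearly every step


class Coordinator:
    """High-level, low-frequency strategic guidance (e.g., once per subtask)."""

    def __init__(self, llm):
        self.llm = llm  # any callable mapping a prompt string to a reply string

    def strategic_guidance(self, task_state):
        prompt = f"Given the team state {task_state}, suggest a high-level plan."
        return self.llm(prompt)


class Manager:
    """Task-specific, high-frequency instructions tied to the current step."""

    def __init__(self, llm, level: FeedbackLevel):
        self.llm = llm
        self.level = level

    def maybe_instruct(self, task_state, human_request=None):
        if self.level is FeedbackLevel.INACTIVE:
            return None
        if self.level is FeedbackLevel.PASSIVE and human_request is None:
            return None  # passive agents only answer explicit questions
        prompt = (f"State: {task_state}. Request: {human_request}. "
                  "Give one concise instruction.")
        return self.llm(prompt)


if __name__ == "__main__":
    echo_llm = lambda p: f"[LLM reply to: {p[:40]}...]"  # stand-in for a real LLM call
    manager = Manager(echo_llm, FeedbackLevel.PASSIVE)
    print(manager.maybe_instruct({"pot": "empty"}))                       # None: passive, no request
    print(manager.maybe_instruct({"pot": "empty"}, "what should I do?"))  # returns an instruction
```

Under this reading, the active and superactive conditions would simply poll the Manager more often per step, which is one way the feedback-frequency manipulation in the experiment could be realized.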
A Minecraft Based Simulated Task Environment for Human AI Teaming
In this extended abstract we present the design, development, and evaluation of a Minecraft-based simulated task environment for conducting human-AI teaming research. With the deluge of AI-driven applications and their infiltration into many activities of daily living, it is becoming necessary to examine ways that humans and AI can work together. There is a tremendous research burden associated with accurately evaluating the best practices and trade-offs when humans and AI must collaborate on critical tasks. Minecraft offers a low-cost alternative as an early investigation tool, letting researchers build answers to emerging research questions before investing significantly in real-world human-AI teaming activities. Using a simple rule-based AI, we demonstrate that insights with the potential to strongly influence human-AI teaming activities can be derived, supporting the practical and viable development of protocols and procedures. Our findings indicate that simulated task environments play a critical role in furthering human-AI teaming activities.
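For concreteness, a minimal sketch of the kind of simple rule-based AI teammate the abstract refers to: a fixed priority list of condition-to-action rules evaluated against an observed game state. The specific rules and state fields are illustrative assumptions, not those used in the study.

```python
# Illustrative rule-based teammate: priorities and state keys are assumptions.
def rule_based_teammate(state):
    """Return the next action for the AI teammate given an observed state dict."""
    if state.get("victim_visible"):        # highest priority: attend to the task goal
        return "triage_victim"
    if state.get("blocked_path"):          # clear obstacles so the human can pass
        return "clear_obstacle"
    if state.get("human_requested_help"):  # respond to explicit teammate requests
        return "move_to_human"
    return "explore_unvisited_room"        # default: expand map coverage


print(rule_based_teammate({"victim_visible": False, "blocked_path": True}))  # -> "clear_obstacle"
```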
- Award ID(s): 1828010
- PAR ID: 10517472
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents
- ISBN: 9781450399944
- Page Range / eLocation ID: 1 to 3
- Format(s): Medium: X
- Location: Würzburg, Germany
- Sponsoring Org: National Science Foundation
More Like this
- Fluent coordination is important for teams to work well together. In proximate teaming scenarios, fluent teams tend to perform more successfully. Recent work suggests robots can support fluency in human-robot teams in a number of ways, including using nonverbal cues and anticipating human intention. However, this area of research is still in its early stages. We identify some of the key challenges in this research space, specifically individual variation during teaming, knowledge and task transfer, co-training prior to task execution, and long-term interactions. We then discuss possible paths forward, including leveraging human adaptability, to promote more fluent teaming.
- Navigation is critical for everyday tasks but is especially important in urban search and rescue (USAR) contexts. Aside from navigating successfully, individuals must also be able to communicate spatial information effectively. This study investigates how differences in spatial ability affect overall performance in a USAR task in a simulated Minecraft environment, as well as how effectively individuals can communicate their location verbally. Randomly selected participants were asked to rescue as many victims as possible in three 10-minute missions. Results showed that sense of direction may not predict the ability to communicate spatial information, and that processing spatial information may be a skill distinct from communicating it to others. We discuss the implications of these findings for teaming contexts that involve both processes.
- High-stress environments, such as a NASA control room, require optimal task performance, as a single mistake may cause monetary loss or the loss of human life. Robots can partner with humans in a collaborative or supervisory paradigm. Such teaming paradigms require the robot to interact appropriately with the human without decreasing either's task performance. Workload is directly correlated with task performance; thus, a robot may use a human's workload state to modify its interactions with the human. A diagnostic workload assessment algorithm that accurately estimates workload using results from two evaluations, one peer-based and one supervisory-based, is presented.
- Demands to manage the risks of artificial intelligence (AI) are growing. These demands, and the government standards arising from them, both call for trustworthy AI. In response, we adopt a convergent approach to review, evaluate, and synthesize research on the trust and trustworthiness of AI in the environmental sciences and propose a research agenda. Evidential and conceptual histories of research on trust and trustworthiness reveal persisting ambiguities and measurement shortcomings related to inconsistent attention to the contextual and social dependencies and dynamics of trust. Potentially underappreciated in the development of trustworthy AI for the environmental sciences is the importance of engaging AI users and other stakeholders, which human-AI teaming perspectives on AI development similarly underscore. Co-development strategies may also help reconcile efforts to develop performance-based trustworthiness standards with dynamic and contextual notions of trust. We illustrate the importance of these themes with applied examples and show how insights from research on trust and the communication of risk and uncertainty can help advance the understanding of trust and trustworthiness of AI in the environmental sciences.