Urban Search and Rescue (USAR) missions often require completing tasks in hazardous environments, and human-robot teams (HRTs) may be essential tools for future USAR missions. Transparency and explanation are two information-exchange processes: transparency involves real-time information exchange, whereas explanation does not. Effective HRTs require certain levels of transparency and explanation, but how can these modes of team communication be operationalized? During the COVID-19 pandemic, our approach to answering this question was an iterative design process that took our research objectives as inputs and incorporated pilot studies with remote participants. Our final research testbed design converted an in-person task environment into a completely remote study and task environment. Changes to the study environment included user-friendly video-conferencing tools such as Zoom, a custom-built application for research administration tasks, and improved modes of HRT communication that helped us avoid confounding our performance measures.
Impact of Transparency and Explanations on Trust and Situation Awareness in Human–Robot Teams
Urban Search and Rescue (USAR) missions continue to benefit from the incorporation of human-robot teams (HRTs). USAR environments can be ambiguous, hazardous, and unstable. Integrating robot teammates into USAR missions has enabled human teammates to access areas of uncertainty, including hazardous locations. For HRTs to be effective, it is pertinent to understand the factors that influence team effectiveness, such as shared goals, mutual understanding, and efficient communication. The purpose of our research is to determine how to (1) better establish human trust, (2) identify useful levels of robot transparency and robot explanations, (3) ensure situation awareness, and (4) encourage a bipartisan role amongst teammates. By implementing robot transparency and robot explanations, we found that the driving factors for effective HRTs are robot explanations that are context-driven and readily available to the human teammate.
- Award ID(s): 1828010
- PAR ID: 10432726
- Date Published:
- Journal Name: Journal of Cognitive Engineering and Decision Making
- Volume: 17
- Issue: 1
- ISSN: 1555-3434
- Page Range / eLocation ID: 75 to 93
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Virtual testbeds are fundamental to the success of research on cognitive work in safety-critical domains. A testbed that meets researchers' objectives and creates a sense of reality for participants positively impacts the research process; such testbeds have the potential to let researchers address questions not achievable in physical environments. This paper discusses the development of a synthetic task environment (STE) for Urban Search and Rescue (USAR) in Roblox to advance the boundaries of human-robot teams (HRTs). Virtual testbeds can simulate USAR task environments and HRT interactions. After assessing alternative STE platforms, we found that Roblox not only met our research requirements but would also prove invaluable for research teams without substantial coding experience. This paper outlines the design process of creating an STE that meets our research team's objectives.
- A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectation of either role could undermine group performance due to misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users' intents from instructions and feedback and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communication in the context of value alignment, in which collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users' values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of the proposed computational framework. Our results show that real-time human-robot mutual understanding in complex cooperative tasks is achievable with a learning model based on bidirectional communication. We believe that this interaction framework can shed light on bidirectional value alignment in communicative XAI systems and, more broadly, in future human-machine teaming systems.
- Trust plays a critical role in the success of human-robot teams (HRTs). While typically studied as a perceptual attitude, trust also encompasses individual dispositions and interactive behaviors like compliance. Anthropomorphism, the attribution of human-like qualities to robots, is a related phenomenon that designers often leverage to positively influence trust. However, the relationship of anthropomorphism to perceptual, dispositional, and behavioral trust is not fully understood. This study explores how anthropomorphism moderates these relationships in a virtual urban search and rescue HRT scenario. Our findings indicate that the moderating effects of anthropomorphism depend on how a robot's recommendations, and its confidence in them, are communicated through text and graphical information. These results highlight the complexity of the relationships between anthropomorphism, trust, and the social conveyance of information in designing for safe and effective human-robot teaming.
- Effective human-AI collaboration requires agents to adapt their roles and levels of support based on human needs, task requirements, and complexity. Traditional human-AI teaming often relies on a pre-determined robot communication scheme, restricting teamwork adaptability in complex tasks. Leveraging the strong communication capabilities of Large Language Models (LLMs), we propose a Human-Robot Teaming Framework with Multi-Modal Language feedback (HRT-ML), a framework designed to enhance human-robot interaction by adjusting the frequency and content of language-based feedback. The HRT-ML framework includes two core modules: a Coordinator for high-level, low-frequency strategic guidance and a Manager for task-specific, high-frequency instructions, enabling passive and active interactions with human teammates. To assess the impact of language feedback in collaborative scenarios, we conducted experiments in an enhanced Overcooked-AI game environment with varying levels of task complexity (easy, medium, hard) and feedback frequency (inactive, passive, active, superactive). Our results show that as task complexity increases relative to human capabilities, human teammates exhibit stronger preferences for robotic agents that can offer frequent, proactive support. However, when task complexity exceeds the LLM's capacity, noisy and inaccurate feedback from superactive agents can instead hinder team performance, as it requires human teammates to increase their effort to interpret and respond to a large volume of communication, with limited performance return. Our results offer a general principle for robotic agents: dynamically adjust the level and frequency of communication to work seamlessly with humans and achieve improved teaming performance.