Title: Towards Meaningfully Integrating Human-Autonomy Teaming in Applied Settings
Technological advancement goes hand in hand with economic advancement, meaning applied industries like manufacturing, medicine, and retail are poised to adopt new practices such as human-autonomy teams. These teams call for deep integration between artificial intelligence and the human workers who make up the majority of the workforce. In light of this large-scale implementation, this paper identifies the core principles of the human-autonomy teaming literature relevant to integrating human-autonomy teams in applied contexts and research. A framework is built and defined from these fundamental concepts, with specific examples of its use in applied contexts and of the interactions among its components. Practitioners of human-autonomy teams can use this framework to make informed decisions regarding the integration and training of such teams.
Award ID(s):
1829008
PAR ID:
10284455
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the 8th International Conference on Human-Agent Interaction
Page Range / eLocation ID:
149 to 156
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This research explores the potential of shared leadership as a complementary approach to function allocation within Human-Autonomy Teams (HATs), particularly in large, multi-agent contexts. Through a literature review of shared leadership in all-human organizational contexts, conducted across three databases (Web of Science, Engineering Village, and Google Scholar), the article identifies and outlines two mechanisms of shared leadership, decentralization and mutual influence, that appear promising for improving team processes and outcomes in HATs. This research underscores the need for frameworks or engineering guidance that incorporate these mechanisms to enhance adaptability and performance in complex, dynamic environments.
  2. Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams in which one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, any human agent has two types of experiences with any robot: direct and indirect. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that the TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
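The abstract does not give the TIP model's actual equations, so purely as an illustration of the idea it names — blending a human's direct experience with a robot against indirect experience propagated from a teammate — here is a minimal sketch; the mixing weights, step size, and function names are assumptions, not the paper's formulation:

```python
# Illustrative sketch only: assumed linear blending of direct and indirect
# trust experiences; NOT the TIP model's published equations.
W_DIRECT, W_INDIRECT = 0.7, 0.3  # assumed mixing weights
STEP = 0.5                       # assumed update step size

def update_trust(own_trust, outcome, teammate_trust):
    """Combine a direct experience (task outcome in [0, 1]) with a
    teammate's reported trust in the same robot (indirect experience)."""
    direct = own_trust + STEP * (outcome - own_trust)          # move toward observed outcome
    indirect = own_trust + STEP * (teammate_trust - own_trust)  # move toward teammate's estimate
    return W_DIRECT * direct + W_INDIRECT * indirect
```

With these assumed weights, a successful task outcome raises trust more when the teammate's reported trust agrees with the direct evidence.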
  3. In this paper, an adjustable autonomy framework is proposed for Human-Robot Collaboration (HRC) in which a robot uses a reinforcement learning mechanism guided by a human operator's rewards in an initially unknown workspace. Within the proposed framework, the robot can adjust its autonomy level in an HRC setting represented by a Markov Decision Process. A novel Q-learning mechanism with an integrated greedy approach enables the robot to learn from correct actions and from its mistakes when adjusting its autonomy level. The proposed HRC framework can adapt to changes in the workspace and can adjust the autonomy level, provided the human operator's rewards are consistent. The developed algorithm is applied to a realistic HRC setting involving a Baxter humanoid robot. The experimental results confirm the capability of the developed framework to successfully adjust the robot's autonomy level in response to changes in the human operator's commands or the workspace.
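The abstract describes Q-learning over an MDP with greedy action selection driven by the operator's rewards. A minimal generic sketch of that pattern follows, assuming a discrete state/action space and epsilon-greedy exploration; all names and hyperparameters are hypothetical, not the paper's implementation:

```python
import random

# Hypothetical hyperparameters: learning rate, discount factor, exploration rate.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def choose_action(Q, state, actions):
    """Epsilon-greedy selection over the robot's action set, which here
    would include both task actions and autonomy-level shifts."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def update(Q, state, action, human_reward, next_state, actions):
    """One standard Q-learning step, with the reward supplied by the
    human operator rather than by the environment."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (human_reward + GAMMA * best_next - old)
```

Feeding the operator's reward directly into the update is what lets consistent human feedback steer which autonomy level the greedy policy eventually prefers in each state.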
  4. Objective: This study examines how low-, medium-, and high-performing Human-Autonomy Teams (HATs) adapt their communication strategies during technological failures that disrupt routine communication. Background: Teams must adapt their communication strategies during dynamic tasks, and more successful teams make more substantial adaptations. Adaptations in communication strategies may explain how successful HATs overcome technological failures. Further, technological failures of variable severity may alter the communication strategies of HATs at different performance levels as they attempt to overcome each failure. Method: Three-member HATs in a Remotely Piloted Aircraft System-Synthetic Task Environment (RPAS-STE) were tasked with photographing targets. Each triad had two randomly assigned participants in navigator and photographer roles, teaming with an experimenter who simulated an AI pilot in a Wizard of Oz paradigm. Teams encountered two different technological failures, automation and autonomy, where autonomy failures were more challenging to overcome. Results: High-performing HATs calibrated their communication strategy to the complexity of the different failures better than medium- and low-performing teams did. Further, HATs adjusted their communication strategies over time. Finally, only the most severe failures required teams to increase the efficiency of their communication. Conclusion: HAT effectiveness under degraded conditions depends on the type of communication strategies the team enacts. Previous findings from studies of all-human teams apply here; however, novel results suggest that information requests are particularly important to HAT success during failures. Application: Understanding the communication strategies of HATs under degraded conditions can inform training protocols that help HATs overcome failures.
  5. Robots such as unmanned aerial vehicles (UAVs) deployed for search and rescue (SAR) can explore areas where human searchers cannot easily go and gather information on scales that can transform SAR strategy. Multi-UAV teams therefore have the potential to transform SAR by augmenting the capabilities of human teams and providing information that would otherwise be inaccessible. Our research aims to develop new theory and technologies for field-deploying autonomous UAVs and managing multi-UAV teams working in concert with multi-human teams for SAR. Specifically, in this paper we summarize our work in progress toward these goals, including: (1) a multi-UAV search path planner that adapts to human behavior; (2) an in-field distributed computing prototype that supports multi-UAV computation and communication; (3) behavioral modeling that yields spatially localized predictions of lost-person location; and (4) an interface between human searchers and UAVs that facilitates human-UAV interaction over a wide range of autonomy.