Title: Fluent Coordination in Proximate Human-Robot Teaming
Fluent coordination is important for teams to work well together, and in proximate teaming scenarios, fluent teams tend to perform more successfully. Recent work suggests robots can support fluency in human-robot teams in a number of ways, including using nonverbal cues and anticipating human intention. However, this area of research is still in its early stages. We identify key challenges in this research space, specifically individual variation during teaming, knowledge and task transfer, co-training prior to task execution, and long-term interactions. We then discuss possible paths forward, including leveraging human adaptability, to promote more fluent teaming.
Award ID(s):
1734482
NSF-PAR ID:
10145650
Author(s) / Creator(s):
;
Date Published:
Journal Name:
In Proceedings of the Robotics, Science, and Systems (RSS) Workshop on AI and Its Alternatives for Shared Autonomy in Assistive and Collaborative Robotics
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objective This work examines two human–autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions. Background Human–autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust. Method Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected. Results Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time. However, they did not perform better than the control condition. Conclusions Training based on entrainment of communications, wherein introduction of timely information exchange through one team member has lasting effects throughout the team, was positively associated with improvements in HAT communications and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust. Applications Team training that includes an autonomous agent that models effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust. 
  2. Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams in which one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, any human agent has two types of experiences with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
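    The abstract above does not state the TIP model's actual equations, so the direct/indirect distinction it formalizes can only be illustrated with a toy sketch. The function names, update rules, and constants below are hypothetical, not the authors' mathematics:

    ```python
    # Toy illustration of the two experience types in multi-human
    # multi-robot trust modeling. This is NOT the TIP model itself,
    # only a minimal sketch of the direct/indirect distinction.

    def update_direct(trust, outcome, lr=0.3):
        """Move trust toward 1.0 on a robot success, toward 0.0 on a failure."""
        target = 1.0 if outcome else 0.0
        return trust + lr * (target - trust)

    def update_indirect(trust, teammate_trust, weight=0.1):
        """Nudge trust toward a teammate's reported trust in the same robot."""
        return trust + weight * (teammate_trust - trust)

    t = 0.5                                      # initial trust in a robot
    t = update_direct(t, outcome=True)           # direct: observed the robot succeed
    t = update_indirect(t, teammate_trust=0.9)   # indirect: teammate reports high trust
    ```

    The key structural point the sketch preserves is that a human's trust in a robot can change without any direct interaction, purely through information propagated from other humans on the team.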
  3. Collaborative robots promise to transform work across many industries and promote “human-robot teaming” as a novel paradigm. However, realizing this promise requires the understanding of how existing tasks, developed for and performed by humans, can be effectively translated into tasks that robots can singularly or human-robot teams can collaboratively perform. In the interest of developing tools that facilitate this process we present Authr, an end-to-end task authoring environment that assists engineers at manufacturing facilities in translating existing manual tasks into plans applicable for human-robot teams and simulates these plans as they would be performed by the human and robot. We evaluated Authr with two user studies, which demonstrate the usability and effectiveness of Authr as an interface and the benefits of assistive task allocation methods for designing complex tasks for human-robot teams. We discuss the implications of these findings for the design of software tools for authoring human-robot collaborative plans. 
  4. Human-robot teaming is becoming increasingly common within manufacturing processes. A key decision practitioners face when developing effective processes is the level of task interdependence between human and robot team members. Task interdependence refers to the extent to which one's behavior affects the performance of others in a team. In this work, we examine the effects of three levels of task interdependence (pooled, sequential, and reciprocal) in human-robot teaming on human workers' mental states, task performance, and perceptions of the robot. Participants worked with the robot in an assembly task while their heart rate variability was recorded. Results suggested that human workers in the reciprocal interdependence condition experienced less stress and perceived the robot more as a collaborator than those in the other two conditions. Task interdependence did not affect perceived safety. Our findings highlight the importance of considering task structure in human-robot teaming and inform future research on, and industry practices for, human-robot task allocation.
  5. Cooperative Co-evolutionary Algorithms (CCEAs) effectively train policies in multiagent systems with a single, statically defined team. However, many real-world problems, such as search and rescue, require agents to operate in multiple teams. When the structure of the team changes, these policies show reduced performance because they were trained to cooperate with only one team. In this work, we solve the cooperation problem by training agents to fill the needs of an arbitrary team, thereby gaining the ability to support a large variety of teams. We introduce Ad hoc Teaming Through Evolution (ATTE), which evolves a limited number of policy types using fitness aggregation across multiple teams. ATTE leverages agent types to reduce the dimensionality of the interaction search space, while fitness aggregation across teams selects for more adaptive policies. In a simulated multi-robot exploration task, ATTE is able to learn policies that are effective in a variety of teaming schemes, improving on the performance of a standard CCEA by up to a factor of five.
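    The core idea the abstract describes, scoring a policy by aggregating its fitness over several different team compositions so that selection favors policies that generalize across teams, can be sketched in a few lines. The toy task, function names, and parameters below are illustrative assumptions, not the paper's implementation:

    ```python
    import random

    # Hypothetical sketch of fitness aggregation across teams. A policy is
    # evaluated in several team compositions; its fitness is the average
    # performance across all of them, so evolution selects for policies
    # that adapt to many teams rather than overfitting to one.

    random.seed(0)

    def team_performance(policy, team):
        # Stand-in task: reward policies whose value complements the
        # teammates' combined contribution (target sum of 1.0).
        return -abs(sum(team) + policy - 1.0)

    def aggregated_fitness(policy, teams):
        # Average performance across all sampled team compositions.
        return sum(team_performance(policy, t) for t in teams) / len(teams)

    def evolve(pop, teams, generations=50):
        for _ in range(generations):
            scored = sorted(pop, key=lambda p: aggregated_fitness(p, teams),
                            reverse=True)
            parents = scored[: len(pop) // 2]          # keep the best half
            offspring = [p + random.gauss(0, 0.1) for p in parents]  # mutate
            pop = parents + offspring
        return max(pop, key=lambda p: aggregated_fitness(p, teams))

    teams = [[0.2, 0.3], [0.6], [0.1, 0.1, 0.4]]       # varied compositions
    best = evolve([random.uniform(-1, 1) for _ in range(20)], teams)
    ```

    A policy tuned to any single team above would do worse on the others; averaging fitness over all three pushes the search toward a compromise that works across the set, which is the selection pressure ATTE exploits.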