

Title: The Dynamics of Trust and Verbal Anthropomorphism in Human-Autonomy Teaming
Trust in autonomous teammates has been shown to be a key factor in human-autonomy team (HAT) performance, and anthropomorphism is a closely related construct that remains underexplored in the HAT literature. This study investigates whether perceived anthropomorphism can be measured from team communication behaviors in a simulated remotely piloted aircraft system task environment, in which two humans in unique roles teamed with a synthetic (i.e., autonomous) pilot agent. We compared verbal and self-reported measures of anthropomorphism with team error-handling performance and trust in the synthetic pilot. Results show that trends in verbal anthropomorphism follow the same patterns expected from self-reported measures of anthropomorphism with respect to fluctuations in trust resulting from autonomy failures.
Award ID(s): 1828010
NSF-PAR ID: 10344406
Journal Name: 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS)
Page Range / eLocation ID: 1 to 6
Sponsoring Org: National Science Foundation
More Like this
  1. Objective This work examines two human–autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions. Background Human–autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust. Method Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected. Results Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time. However, they did not perform better than the control condition. Conclusions Training based on entrainment of communications, wherein introduction of timely information exchange through one team member has lasting effects throughout the team, was positively associated with improvements in HAT communications and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust. Applications Team training that includes an autonomous agent that models effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust. 
  2. This research examines the relationship between anticipatory pushing of information and trust in human–autonomy teaming in a remotely piloted aircraft system synthetic task environment. Two participants and one AI teammate emulated by a confederate executed a series of missions under routine and degraded conditions. We addressed the following questions: (1) How do anticipatory pushing of information and trust change toward human and autonomous team members across the two sessions? and (2) How is anticipatory pushing of information associated with the trust placed in a teammate across the two sessions? This study demonstrated two main findings: (1) anticipatory pushing of information and trust differed between human-human and human-AI dyads, and (2) anticipatory pushing of information and trust scores increased among human-human dyads under degraded conditions but decreased in human-AI dyads.
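The anticipatory-pushing construct above is commonly quantified as an anticipation ratio: pushed (volunteered) information exchanges divided by pulled (explicitly requested) exchanges. A minimal sketch under that assumption follows; the function name and the example push/pull counts are illustrative, not taken from the study:

```python
def anticipation_ratio(pushes: int, pulls: int) -> float:
    """Ratio of pushed (volunteered) to pulled (requested) information
    exchanges for a dyad. Values above 1.0 indicate push-dominant,
    anticipatory communication."""
    if pulls == 0:
        # All communication was anticipatory, or there was none at all.
        return float("inf") if pushes > 0 else 0.0
    return pushes / pulls

# A hypothetical dyad that volunteered 12 status updates and
# answered 8 explicit requests:
print(anticipation_ratio(12, 8))  # 1.5
```

In practice such counts come from coded communication transcripts, with the ratio computed per dyad and per session so that changes under routine versus degraded conditions can be compared.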
  3. While there is increased interest in how trust spreads in Human-Autonomy Teams (HATs), most trust measurements are subjective and do not examine real-time changes in trust. To develop a trust metric consisting of objective variables influenced by trust/distrust manipulations, we conducted an Interactive hybrid Cognitive Task Analysis (IhCTA) for a Remotely Piloted Aerial System (RPAS) HAT. The IhCTA adapted parts of the hybrid Cognitive Task Analysis (hCTA) framework. In this paper, we present the four steps of the IhCTA approach: 1) generating a scenario task overview, 2) generating teammate-specific event flow diagrams, 3) identifying interactions and interdependencies impacted by trust/distrust manipulations, and 4) processing RPAS variables based on the IhCTA to create a metric. We demonstrate the application of the metric through a case study examining how specific interactions influence team state before and after the spread of distrust.
  4. Resilient teams overcome sudden, dynamic changes by enacting rapid, adaptive responses that maintain system effectiveness. We analyzed two experiments on human-autonomy teams (HATs) operating a simulated remotely piloted aircraft system (RPAS) and correlated dynamical measures of resilience with measures of team performance. Across both experiments, HATs experienced automation and autonomy failures injected using a Wizard of Oz paradigm. Team performance was measured in multiple ways: a mission-level performance score, a target processing efficiency score, a failure overcome score, and a ground truth resilience score. Novel dynamical systems metrics of resilience measured the timing of system reorganization in response to failures across RPAS layers, including the vehicle, controls, and communications layers and the system overall. Time to achieve extreme values of reorganization and novelty of reorganization were consistently correlated with target processing efficiency and ground truth resilience across both studies. Correlations with mission-level performance and the overcome score were apparent but less consistent. Across both studies, teams displayed greater system reorganization during failures than under routine task conditions. The second experiment revealed differential effects of team training focused on coordination coaching and trust calibration. These results inform the measurement and training of resilience in HATs using objective, real-time resilience analysis.
  5. Effective human-human and human-autonomy teamwork is critical but often challenging to perfect. The challenge is particularly relevant in time-critical domains, such as healthcare and disaster response, where time pressure can make coordination increasingly difficult to achieve and the consequences of imperfect coordination can be severe. To improve teamwork in these and other domains, we present TIC, an automated intervention approach for improving coordination between team members. Using BTIL, a multi-agent imitation learning algorithm, our approach first learns a generative model of team behavior from past task execution data. It then uses the learned generative model and the team's task objective (shared reward) to algorithmically generate execution-time interventions. We evaluate our approach in synthetic multi-agent teaming scenarios, where team members make decentralized decisions without full observability of the environment. The experiments demonstrate that automated interventions can successfully improve team performance and shed light on the design of autonomous agents for improving teamwork.