Understanding how people trust autonomous systems is crucial to improving performance and safety in human-autonomy teaming. Trust in automation is a rich, complex process that has given rise to numerous measures and approaches for examining it. Although researchers have been developing models of trust dynamics in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insight into the dynamic processes of trust in automation. This paper surveys mathematical modeling approaches to trust dynamics in human-automation interaction and assesses their limitations, feasibility, and generalizability. It also proposes a novel, dynamic approach to modeling trust in automation that emphasizes incorporating different timescales into measurable components. Given the complexity of trust in automation, we further suggest combining machine learning with dynamic modeling approaches and incorporating physiological data.
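The emphasis on multiple timescales can be made concrete with a toy model. The following is a minimal sketch, not the paper's model: it assumes trust evolves as a weighted mixture of a fast component (reacting to individual automation outcomes) and a slow component (tracking long-run reliability). All names, learning rates, and weights are illustrative assumptions.

```python
def update_trust(trust_fast, trust_slow, outcome,
                 alpha_fast=0.5, alpha_slow=0.05, w=0.7):
    """One step of a hypothetical two-timescale trust update.

    trust_fast / trust_slow: current trust estimates in [0, 1]
    outcome: 1.0 if the automation succeeded on this trial, else 0.0
    alpha_*: learning rates (the fast component reacts sharply to
             single failures; the slow one tracks long-run reliability)
    w: weight of the fast component in the reported trust value
    """
    trust_fast += alpha_fast * (outcome - trust_fast)
    trust_slow += alpha_slow * (outcome - trust_slow)
    reported = w * trust_fast + (1 - w) * trust_slow
    return trust_fast, trust_slow, reported

# Example: trust dipping after a single automation failure, then recovering
tf, ts = 0.8, 0.8
for outcome in [1, 1, 0, 1, 1, 1]:
    tf, ts, r = update_trust(tf, ts, outcome)
    print(f"outcome={outcome}  reported trust={r:.2f}")
```

In a sketch like this, the fast component captures the sharp trust drop after a failure while the slow component keeps the drop from being permanent, which is one simple way to give measurable components distinct timescales.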
Previous research on trust dynamics in human-autonomy interaction has demonstrated that individuals exhibit specific patterns of trust when interacting repeatedly with automated systems. Moreover, people with different types of trust dynamics have been shown to differ across seven personal-characteristic dimensions: masculinity, positive affect, extraversion, neuroticism, intellect, performance expectancy, and high expectations. In this study, we develop classification models that predict an individual's trust dynamics type (Bayesian decision-maker, disbeliever, or oscillator) from these key dimensions. We employed multiple classification algorithms, including random forests, multinomial logistic regression, support vector machines, XGBoost, and naive Bayes, and conducted a comparative evaluation of their performance. The results indicate that personal characteristics can effectively predict the type of trust dynamics, achieving an accuracy of 73.1% and a weighted-average F1 score of 0.64. This study underscores the predictive power of personal traits in the context of human-autonomy interaction.
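As a rough illustration of the comparison described above, here is a minimal sketch assuming seven numeric trait scores per participant and a three-class trust-dynamics label. The file name, column names, and hyperparameters are illustrative rather than taken from the paper, and scikit-learn and xgboost are assumed to be installed.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier

# Hypothetical data: seven trait scores per participant and a label in
# {0: Bayesian decision-maker, 1: disbeliever, 2: oscillator}.
traits = ["masculinity", "positive_affect", "extraversion", "neuroticism",
          "intellect", "performance_expectancy", "high_expectations"]
df = pd.read_csv("trust_dynamics.csv")  # illustrative file name
X, y = df[traits].to_numpy(), df["dynamics_type"].to_numpy()

models = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "multinomial LR": make_pipeline(StandardScaler(),
                                    LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "XGBoost": XGBClassifier(eval_metric="mlogloss"),
    "naive Bayes": GaussianNB(),
}

# Compare models with cross-validated accuracy and weighted F1,
# mirroring the metrics reported in the abstract.
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1_weighted")
    print(f"{name:15s} accuracy={acc.mean():.3f}  weighted F1={f1.mean():.3f}")
```

Scaling matters for the logistic regression and SVM models, hence the pipelines; the tree-based models are scale-invariant and are left unscaled.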
- PAR ID: 10541142
- Publisher / Repository: SAGE Publications
- Journal Name: Proceedings of the Human Factors and Ergonomics Society Annual Meeting
- Volume: 68
- Issue: 1
- ISSN: 1071-1813
- Size(s): p. 310-316
- Sponsoring Org: National Science Foundation
More Like this
- Trust in autonomous teammates has been shown to be a key factor in human-autonomy team (HAT) performance, and anthropomorphism is a closely related construct that is underexplored in the HAT literature. This study investigates whether perceived anthropomorphism can be measured from team communication behaviors in a simulated remotely piloted aircraft system task environment, in which two humans in unique roles teamed with a synthetic (i.e., autonomous) pilot agent. We compared verbal and self-reported measures of anthropomorphism with team error-handling performance and trust in the synthetic pilot. Results show that trends in verbal anthropomorphism follow the same patterns expected from self-reported measures of anthropomorphism with respect to fluctuations in trust resulting from autonomy failures.
- Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior-change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.
- Objective: This work examines two human-autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions. Background: Human-autonomy teaming presents challenges to teamwork, some of which may be addressed through training; factors vital to HAT performance include communication and calibrated trust. Method: Thirty teams of three, each including one confederate acting as an autonomous agent, received entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected. Results: Teams receiving coordination training had higher communication anticipation ratios (see the sketch after this list), took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time; however, they did not perform better than the control condition. Conclusions: Training based on entrainment of communications, wherein timely information exchange introduced by one team member has lasting effects throughout the team, was positively associated with improvements in HAT communication and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust. Applications: Team training that includes an autonomous agent modeling effective information exchange may positively impact team communication and coordination; training that emphasizes the limitations of an autonomous agent may help calibrate trust.
- Understanding the impact of operator characteristics on human-automation interaction (HAI) is crucial as automation becomes pervasive. Despite extensive HAI research, the association between operator characteristics and dependence on automation has not been thoroughly examined. This study therefore examines how individual characteristics affect operator dependence behaviors when interacting with automation. Through a controlled experiment involving 52 participants in a dual-task scenario, we find that operators' decision-making style, risk propensity, and agreeableness are associated with their dependence behaviors when using automation. This research illuminates the role of personal characteristics in HAI, facilitating personalized team interactions, trust building, and enhanced performance in automated settings.
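The communication anticipation ratio mentioned in the coordination-training item above is not defined in the excerpt; a common formulation in the team-communication literature is information volunteered (pushed) divided by information requested (pulled). The sketch below follows that assumed definition, with illustrative names and counts.

```python
def anticipation_ratio(pushes: int, pulls: int) -> float:
    """Assumed push/pull formulation: messages volunteered per message
    requested. Values above 1.0 suggest team members anticipate each
    other's information needs rather than waiting to be asked."""
    if pulls == 0:
        return float("inf")  # every message was volunteered unprompted
    return pushes / pulls

# Example: a team that volunteered 42 messages and fielded 17 requests
print(f"{anticipation_ratio(42, 17):.2f}")  # 2.47
```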