Title: Toward Quantifying Trust Dynamics: How People Adjust Their Trust After Moment-to-Moment Interaction With Automation
Objective: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation.
Background: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time.
Method: Seventy-five participants took part in an aided memory recognition task. Participants viewed a series of images and later performed 40 trials of the recognition task, identifying a target image presented alongside a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and then performed the final recognition. After each trial, participants reported their trust on a visual analog scale.
Results: Outcome bias and the contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task on their own. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes.
Conclusion: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics and biases.
Application: Understanding the trust adjustment process enables accurate prediction of operators' moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
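The reported asymmetries suggest the shape of a moment-to-moment trust-update rule. The sketch below is a rough illustration only, not the paper's fitted model: trust is kept on a 0-1 scale, automation failures decrement trust more than successes increment it, and the hypothetical gain parameters encode the outcome-bias and contrast-effect modulations described in the Results.

```python
# Illustrative sketch only: an asymmetric trust-update rule reflecting the
# qualitative findings above. The gain values and functional form are
# assumptions, not the paper's fitted model.

def update_trust(trust, automation_correct, final_outcome_correct, operator_correct_alone,
                 gain_success=0.05, gain_failure=0.15,
                 outcome_penalty=0.05, contrast_bonus=0.03):
    """Return trust in [0, 1] after one interaction trial."""
    if automation_correct:
        delta = gain_success
        # Contrast effect: an automation success counts for more when the
        # operator failed the trial on their own.
        if not operator_correct_alone:
            delta += contrast_bonus
    else:
        delta = -gain_failure
        # Outcome bias: an automation failure hurts trust more when the final
        # (aided) decision also turns out to be wrong.
        if not final_outcome_correct:
            delta -= outcome_penalty
        # Contrast effect: the failure also costs more trust if the operator
        # would have been right without the aid.
        if operator_correct_alone:
            delta -= contrast_bonus
    return min(1.0, max(0.0, trust + delta))

# Example: trust drops sharply after a failure with an undesirable final outcome.
t = 0.7
t = update_trust(t, automation_correct=False, final_outcome_correct=False,
                 operator_correct_alone=True)
print(round(t, 2))  # 0.47 with the illustrative gains above
```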
Award ID(s):
2045009
PAR ID:
10517145
Publisher / Repository:
Sage
Date Published:
Journal Name:
Human Factors: The Journal of the Human Factors and Ergonomics Society
Volume:
65
Issue:
5
ISSN:
0018-7208
Page Range / eLocation ID:
862 to 878
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Understanding the impact of operator characteristics on human-automation interaction (HAI) is crucial as automation becomes pervasive. Despite extensive HAI research, the association between operator characteristics and their dependence on automation has not been thoroughly examined. This study, therefore, examines how individual characteristics affect operator dependence behaviors when interacting with automation. Through a controlled experiment involving 52 participants in a dual-task scenario, we find that operators’ decision-making style, risk propensity, and agreeableness are associated with their dependence behaviors when using automation. This research illuminates the role of personal characteristics in HAI, facilitating personalized team interactions, trust building, and enhanced performance in automated settings. 
  2. Trust calibration poses a significant challenge in the interaction between drivers and automated vehicles (AVs) in the context of human-automation collaboration. To effectively calibrate trust, it becomes crucial to measure drivers' trust levels accurately in real time, allowing for timely interventions or adjustments in the automated driving. One viable approach involves employing machine learning models and physiological measures to model the dynamic changes in trust. This study introduces a technique that leverages machine learning models to predict drivers' real-time dynamic trust in conditional AVs using physiological measurements. We conducted the study in a driving simulator where participants were requested to take over control from automated driving in three conditions: a control condition, a false alarm condition, and a miss condition. Each condition had eight takeover requests (TORs) in different scenarios. Drivers' physiological measures were recorded during the experiment, including galvanic skin response (GSR), heart rate (HR) indices, and eye-tracking metrics. Among five machine learning models, eXtreme Gradient Boosting (XGBoost) performed best, predicting drivers' trust in real time with an F1 score of 89.1%, compared with 84.5% for a K-nearest neighbor baseline. Our findings have implications for the design of an in-vehicle trust monitoring system that calibrates drivers' trust and facilitates real-time interaction between the driver and the AV.
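A minimal sketch of the kind of pipeline described in the record above is shown here. It uses synthetic data and hypothetical feature names (GSR amplitude, heart rate, fixation duration), not the study's dataset or tuned hyperparameters: per-takeover physiological features are fed to an XGBoost classifier and evaluated with an F1 score.

```python
# Minimal sketch of a physiological-measures-to-trust classifier, not the
# study's actual pipeline. Features, labels, and hyperparameters are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical aggregated features per takeover event.
X = np.column_stack([
    rng.normal(0.5, 0.2, n),   # mean GSR amplitude
    rng.normal(75, 10, n),     # mean heart rate (bpm)
    rng.normal(0.3, 0.1, n),   # mean fixation duration (s)
])
y = rng.integers(0, 2, n)      # placeholder binary trust label (low / high)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_train, y_train)
print("F1:", f1_score(y_test, model.predict(X_test)))
```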
  3. Recent advances in construction automation have increased the need for cooperation between workers and robots, in which workers face both success and failure in human-robot collaborative work, ultimately affecting their trust in robots. This study simulated a worker-robot bricklaying collaborative task to examine the impacts of blame targets (responsibility attributions) on trust and trust transfer in multi-robot-human interaction. The findings showed that workers' responsibility attributions to themselves or to the robots significantly affect their trust in the robot. Further, in a multi-robot-human interaction, observing one robot's failure to complete the task affects trust in the other robots, a phenomenon known as trust transfer.
  4. We conducted a meta-analysis to determine how people blindly comply with, rely on, and depend on diagnostic automation. We searched three databases using combinations of human behavior keywords with automation keywords. The search period ranged from January 1996 to June 2021. In total, 8 records comprising 68 data points were identified. As data points were nested within research records, we built multi-level models (MLM) to quantify the relationships between blind compliance and positive predictive value (PPV), blind reliance and negative predictive value (NPV), and blind dependence and overall success likelihood (OSL). Results show that as the automation's PPV, NPV, and OSL increase, human operators are more likely to blindly follow the automation's recommendation. Operators appear to adjust their reliance behaviors more than their compliance and dependence. We recommend that researchers report specific automation trial information (i.e., hits, false alarms, misses, and correct rejections) and human behaviors (compliance and reliance) rather than automation OSL and dependence. Future work could examine how operator behaviors change when operators are not blind to raw data. Researchers, designers, and engineers could leverage this understanding of operator behaviors to inform training procedures and to benefit individual operators during repeated automation use.
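For context on the quantities in the record above, PPV, NPV, and OSL can be computed directly from the four trial counts the authors recommend reporting (hits, false alarms, misses, and correct rejections). The helper below applies the standard signal-detection definitions with illustrative counts.

```python
# Standard definitions of the diagnostic-automation rates referenced above,
# computed from hits, false alarms, misses, and correct rejections.

def automation_rates(hits, false_alarms, misses, correct_rejections):
    total = hits + false_alarms + misses + correct_rejections
    ppv = hits / (hits + false_alarms)                         # P(event | alarm)
    npv = correct_rejections / (correct_rejections + misses)   # P(no event | no alarm)
    osl = (hits + correct_rejections) / total                  # overall success likelihood
    return ppv, npv, osl

# Illustrative counts only.
print(automation_rates(hits=40, false_alarms=10, misses=5, correct_rejections=45))
# (0.8, 0.9, 0.85)
```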
  5. Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, there exist two types of experiences that any human agent has with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
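As intuition for the two experience channels named in the record above, the toy sketch below blends direct and indirect experiences into a single trust estimate. It is a hypothetical placeholder with an assumed weighted-average form, not the TIP model's actual mathematical framework.

```python
# Hypothetical illustration of combining direct and indirect experiences into a
# trust estimate. This is NOT the authors' TIP formulation; it is a simple
# weighted-average placeholder distinguishing the two experience channels.

def combined_trust(direct_outcomes, indirect_outcomes, w_direct=0.7):
    """direct_outcomes / indirect_outcomes: lists of 1 (robot success) or 0 (failure)."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.5  # uninformative prior when no experience
    return w_direct * mean(direct_outcomes) + (1 - w_direct) * mean(indirect_outcomes)

# A human who saw the drone succeed twice directly, while a teammate observed
# one failure, ends up with moderately high trust.
print(combined_trust([1, 1], [0]))  # 0.7
```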