Title: Toward Quantifying Trust Dynamics: How People Adjust Their Trust After Moment-to-Moment Interaction With Automation
Objective: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Background: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time. Method: Seventy-five participants took part in an aided memory recognition task. In the task, participants viewed a series of images and later performed 40 trials of the recognition task, identifying a target image presented alongside a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and then performed the final recognition. After each trial, participants reported their trust on a visual analog scale. Results: Outcome bias and the contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task unaided. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. Conclusion: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics/biases. Application: Understanding the trust adjustment process enables accurate prediction of operators' moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
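The trial-level dynamic the abstract describes lends itself to a simple computational illustration. The sketch below implements a hypothetical asymmetric update rule consistent with the reported findings: failures cost more trust than successes earn, failure decrements grow when the final outcome is bad or when the operator succeeded unaided, and success increments grow when the operator failed. The functional form and gain values are illustrative assumptions, not the paper's fitted model.

```python
# A minimal, hypothetical sketch of asymmetric moment-to-moment trust updating.
# The update rule and gain values are illustrative assumptions, not the paper's model.

def update_trust(trust, automation_correct, final_outcome_good, human_correct):
    """Return trust in [0, 1] after one trial, per an assumed gain-based rule."""
    if automation_correct:
        # Successes raise trust; the increment is larger if the human failed
        # alone (contrast effect), and smaller overall than failure decrements.
        gain = 0.10 if not human_correct else 0.05
        trust += gain * (1.0 - trust)
    else:
        # Failures lower trust more (failure asymmetry); the decrement grows
        # if the final outcome is bad (outcome bias) or the human succeeded alone.
        loss = 0.20
        if not final_outcome_good:
            loss += 0.10
        if human_correct:
            loss += 0.05
        trust -= loss * trust
    return min(max(trust, 0.0), 1.0)

# Example: trust drops sharply after a failure with a bad final outcome.
t = 0.7
t = update_trust(t, automation_correct=False, final_outcome_good=False, human_correct=True)
print(round(t, 3))  # ~0.455 under these assumed gains
```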
Award ID(s):
2045009
PAR ID:
10517145
Author(s) / Creator(s):
Publisher / Repository:
Sage
Date Published:
Journal Name:
Human Factors: The Journal of the Human Factors and Ergonomics Society
Volume:
65
Issue:
5
ISSN:
0018-7208
Page Range / eLocation ID:
862 to 878
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Understanding the impact of operator characteristics on human-automation interaction (HAI) is crucial as automation becomes pervasive. Despite extensive HAI research, the association between operator characteristics and their dependence on automation has not been thoroughly examined. This study, therefore, examines how individual characteristics affect operator dependence behaviors when interacting with automation. Through a controlled experiment involving 52 participants in a dual-task scenario, we find that operators’ decision-making style, risk propensity, and agreeableness are associated with their dependence behaviors when using automation. This research illuminates the role of personal characteristics in HAI, facilitating personalized team interactions, trust building, and enhanced performance in automated settings. 
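As a rough sketch of how such an association analysis might look, the snippet below regresses a per-participant dependence rate on the three characteristics the study highlights. The file name, column names, and the use of ordinary least squares are assumptions for illustration, not the study's actual pipeline.

```python
# Hypothetical analysis sketch: relating operator characteristics to dependence.
# The data file and column names are assumptions for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("operators.csv")  # hypothetical: one row per participant
X = sm.add_constant(df[["decision_style", "risk_propensity", "agreeableness"]])
y = df["dependence_rate"]  # e.g., proportion of trials the operator followed automation

model = sm.OLS(y, X).fit()
print(model.summary())  # coefficients indicate each trait's association with dependence
```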
  2. We conducted a meta-analysis to determine how people blindly comply with, rely on, and depend on diagnostic automation. We searched three databases using combinations of human behavior keywords with automation keywords, covering the period from January 1996 to June 2021. In total, 8 records comprising 68 data points were identified. As data points were nested within research records, we built multi-level models (MLM) to quantify the relationships between blind compliance and positive predictive value (PPV), blind reliance and negative predictive value (NPV), and blind dependence and overall success likelihood (OSL). Results show that as the automation's PPV, NPV, and OSL increase, human operators are more likely to blindly follow the automation's recommendation. Operators appear to adjust their reliance behaviors more than their compliance and dependence. We recommend that researchers report specific automation trial information (i.e., hits, false alarms, misses, and correct rejections) and human behaviors (compliance and reliance) rather than automation OSL and dependence. Future work could examine how operator behaviors change when operators are not blind to raw data. Researchers, designers, and engineers could leverage this understanding of operator behaviors to inform training procedures and to benefit individual operators during repeated automation use.
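A minimal sketch of one of the multi-level models described here, assuming a hypothetical data file with one row per data point and a record identifier for the nesting structure:

```python
# Hypothetical sketch of one multi-level model from the meta-analysis:
# blind compliance regressed on PPV, with data points nested within records.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("meta_data.csv")  # hypothetical: one row per data point
# A random intercept per research record accounts for the nesting structure.
mlm = smf.mixedlm("compliance ~ ppv", data=df, groups=df["record_id"]).fit()
print(mlm.summary())  # a positive ppv coefficient implies more blind compliance
```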
  3. A cornerstone of human intelligence is the ability to flexibly adjust our cognition and behavior as our goals change. For instance, achieving some goals requires efficiency, while others require caution. Adapting to these changing goals requires corresponding adjustments in cognitive control (e.g., levels of attention, response thresholds). However, adjusting our control to meet new goals comes at a cost: we are better at achieving a goal in isolation than when transitioning between goals. The source of these control adjustment costs remains poorly understood, and the bulk of our understanding of such costs comes from settings in which participants transition between discrete task sets, rather than performance goals. Across four experiments, we show that adjustments in continuous control states incur a performance cost, and that a dynamical systems model can explain the source of these costs. Participants performed a single cognitively demanding task under varying performance goals (e.g., to be fast or to be accurate). We modeled control allocation to include a dynamic process of adjusting from one's current control state to a target state for a given performance goal. By incorporating inertia into this adjustment process, our model accounts for our empirical findings that people under-shoot their target control state more (i.e., exhibit larger adjustment costs) when (a) goals switch rather than remain fixed over a block (Study 1); (b) target control states are more distant from one another (Study 2); (c) less time is given to adjust to the new goal (Study 3); and (d) they anticipate having to switch goals more frequently (Study 4). Our findings characterize the costs of adjusting control to meet changing goals, and show that these costs can emerge directly from cognitive control dynamics. In so doing, they shed new light on the sources of and constraints on flexibility in human goal-directed behavior.
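The inertial adjustment process can be illustrated with a first-order dynamical sketch: the control state moves only a fraction of the remaining distance to the target on each step, so fewer steps or more distant targets produce larger under-shoots. The adjustment rate is an assumed value, and this is a simplified stand-in for the authors' model.

```python
# First-order (leaky) adjustment of a continuous control state toward a target.
# The rate parameter is an illustrative assumption.
import numpy as np

def adjust_control(current, target, steps, rate=0.3):
    """Move a fixed fraction of the remaining distance to the target each step."""
    state = current
    trajectory = [state]
    for _ in range(steps):
        state += rate * (target - state)  # inertia: only partial movement per step
        trajectory.append(state)
    return np.array(trajectory)

# Less time to adjust (fewer steps) leaves a larger gap to the target (Study 3);
# a more distant target leaves a larger absolute gap (Study 2).
short = adjust_control(current=0.0, target=1.0, steps=2)
long_ = adjust_control(current=0.0, target=1.0, steps=10)
print(1.0 - short[-1], 1.0 - long_[-1])  # ~0.49 vs ~0.028 under-shoot
```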
  4. Microgrid systems can provide operators with extensive information from their measurement units. As microgrid systems become more pervasive, the information presented to an operator will need to be adjusted to provide an optimized user interface. In this paper, a combinatorial optimization strategy is used to provide an optimal user interface for the microgrid operator, selecting information for display depending on the operator's trust level in the system and the assigned task. We employ a method based on sensor placement, capturing elements of the interface as different sensors and finding an optimal set of sensors via combinatorial optimization. However, the typical inverter-based microgrid model poses challenges for the combinatorial optimization due to its poor conditioning. To combat the poor conditioning, we decompose the model into its slow and fast dynamics and focus solely on the slow dynamics, which are better conditioned. We presume the operator is tasked with monitoring phase angle and active and reactive power control of inverter-based distributed generators. We synthesize user interfaces for each of these tasks under a wide range of trust levels, ranging from full trust to no trust. We found that, as expected, more information must be included in the interface when the operator has low trust. Further, this approach exploits the dynamics of the underlying microgrid to minimize information content (to avoid overwhelming the operator). The effectiveness of the proposed approach is verified by modeling an inverter-based microgrid in MATLAB.
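A hedged sketch of the selection idea, assuming a random candidate output matrix and a log-determinant information metric in place of the paper's actual microgrid model: interface elements are treated as candidate sensors and chosen greedily, with a larger selection budget when operator trust is low.

```python
# Hypothetical greedy sensor/interface-element selection. The matrices and the
# observability-style metric are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_candidates = 6, 12
C = rng.normal(size=(n_candidates, n_states))  # candidate sensor/output rows

def info_gain(selected):
    """Log-det of the regularized output Gramian for the chosen sensor set."""
    Cs = C[list(selected)]
    return np.linalg.slogdet(Cs.T @ Cs + 1e-6 * np.eye(n_states))[1]

def select_sensors(budget):
    """Greedily add the candidate that most improves the information metric."""
    chosen = set()
    for _ in range(budget):
        best = max(set(range(n_candidates)) - chosen,
                   key=lambda j: info_gain(chosen | {j}))
        chosen.add(best)
    return sorted(chosen)

# Lower trust -> larger budget -> more information shown on the interface.
print(select_sensors(budget=3))  # high trust: few interface elements
print(select_sensors(budget=8))  # low trust: richer display
```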
  5. Trust calibration poses a significant challenge in the interaction between drivers and automated vehicles (AVs) in the context of human-automation collaboration. To effectively calibrate trust, it becomes crucial to accurately measure drivers' trust levels in real time, allowing for timely interventions or adjustments in the automated driving. One viable approach involves employing machine learning models and physiological measures to model the dynamic changes in trust. This study introduces a technique that leverages machine learning models to predict drivers' real-time dynamic trust in conditional AVs using physiological measurements. We conducted the study in a driving simulator where participants were requested to take over control from automated driving in three conditions: a control condition, a false alarm condition, and a miss condition. Each condition had eight takeover requests (TORs) in different scenarios. Drivers' physiological measures were recorded during the experiment, including galvanic skin response (GSR), heart rate (HR) indices, and eye-tracking metrics. Using five machine learning models, we found that eXtreme Gradient Boosting (XGBoost) performed the best, predicting drivers' trust in real time with an F1-score of 89.1%, compared to 84.5% for a baseline K-nearest neighbor classifier. Our findings offer implications for designing an in-vehicle trust monitoring system that calibrates drivers' trust to facilitate real-time interaction between the driver and the AV.
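A minimal sketch of such a prediction pipeline, assuming a hypothetical feature table with binary trust labels; the feature names and hyperparameters are illustrative, not the study's exact configuration:

```python
# Hypothetical trust-prediction pipeline: XGBoost on physiological features,
# evaluated with F1-score. Data file, columns, and settings are assumptions.
import pandas as pd
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("physio_trust.csv")  # hypothetical per-window feature table
features = ["gsr_mean", "hr_mean", "hrv_rmssd", "pupil_diameter", "fixation_rate"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["trust_high"], test_size=0.2, stratify=df["trust_high"])

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```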