Transitions of control are an important safety concern for human-automation teams and automated vehicle safety. While trust and situation awareness have been observed to influence transitions of control in automated vehicles, there are few objective measurements, making these concepts difficult to operationalize in increasingly automated decision systems. In this study, we take a step towards quantifying trust by mapping latent driver beliefs extracted from an active inference-factor analysis model of driver behavior and cognitive dynamics to subjective responses to trust questionnaires. Our results show that subjective trust is primarily correlated with model parameters affecting perceptual evidence accumulation rate, and the same parameters are significantly correlated with driver age.
Understanding Reliance and Trust in Decision Aids for UAV Target Identification
The use of automation is prevalent in almost every aspect of modern life, and since its inception, researchers have been investigating trust in automation. There are many methods of measuring trust. Given that trust means different things to different people and is by nature subjective, most methods are subjective survey assessments (Freedy, DeVisser, Weltman, & Coeyman, 2007; Jian, Bisantz, & Drury, 2000). Many studies have investigated how the reliability of an automated agent or the level of automation changes subjective trust in the automation (Dixon & Wickens, 2006; Du, Zhang, & Yang, 2018; Khasawneh, Rogers, Bertrand, Madathil, & Gramopadhye, 2019; Rogers, Khasawneh, Bertrand, & Madathil, 2017).
- Award ID(s): 1700621
- PAR ID: 10157310
- Date Published:
- Journal Name: Proceedings of the Human Factors and Ergonomics Society Annual Meeting
- Volume: 63
- Issue: 1
- ISSN: 2169-5067
- Page Range / eLocation ID: 1953 to 1954
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich and complex process that has given rise to numerous measures and approaches aimed at comprehending and examining it. Although researchers have been developing models for understanding the dynamics of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insightful knowledge about the dynamic processes of trust in automation. This paper provides an overview of various mathematical modeling approaches and their limitations, feasibility, and generalizability for trust dynamics in human-automation interaction contexts. Furthermore, this study proposes a novel, dynamic approach to modeling trust in automation that emphasizes the importance of incorporating different timescales into measurable components. Given the complex nature of trust in automation, it also suggests combining machine learning and dynamic modeling approaches and incorporating physiological data.
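One way to read the multi-timescale idea is as two coupled state variables: a fast "situational" trust component that reacts to each interaction, and a slow "dispositional" baseline that drifts toward it. The structure and all constants below are assumptions for illustration, not a model from the paper.

```python
# Illustrative two-timescale trust sketch (structure and rates assumed):
# `fast` reacts to each trial outcome, `slow` drifts toward `fast`.
def step(fast, slow, success, fast_rate=0.3, slow_rate=0.02):
    target = 1.0 if success else 0.0
    fast += fast_rate * (target - fast)   # quick reaction to the outcome
    slow += slow_rate * (fast - slow)     # slow drift of baseline trust
    return fast, slow

fast = slow = 0.5
for outcome in [True, True, False, True]:
    fast, slow = step(fast, slow, outcome)
```

After a few trials, the fast component has moved substantially while the baseline has barely shifted, which is the qualitative separation of timescales the paper argues for.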
Objective: This study explores subjective and objective driving style similarity to identify how similarity can be used to develop driver-compatible vehicle automation. Background: Similarity in the ways that interaction partners perform tasks can be measured subjectively, through questionnaires, or objectively by characterizing each agent's actions. Although subjective measures have advantages in prediction, objective measures are more useful when operationalizing interventions based on these measures. Showing how objective and subjective similarity are related is therefore prudent for aligning future machine performance with human preferences. Methods: A driving simulator study was conducted with stop-and-go scenarios. Participants experienced conservative, moderate, and aggressive automated driving styles and rated the similarity between their own driving style and that of the automation. Objective similarity between the manual and automated driving speed profiles was calculated using three distance measures: dynamic time warping, Euclidean distance, and time alignment measure. Linear mixed effects models were used to examine how different components of the stopping profile and the three objective similarity measures predicted subjective similarity. Results: Objective similarity using Euclidean distance best predicted subjective similarity. However, this was only observed for participants' approach to the intersection and not their departure. Conclusion: Developing driving styles that drivers perceive to be similar to their own is an important step toward driver-compatible automation. In determining what constitutes similarity, it is important to (a) use measures that reflect the driver's perception of similarity, and (b) understand what elements of the driving style govern subjective similarity.
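Two of the three distance measures named in that abstract can be sketched directly on speed profiles; the profiles below are invented for illustration, and the time alignment measure is omitted.

```python
# Sketch of two objective similarity measures between speed profiles
# (values in m/s, equally spaced samples; profiles are invented).
import math

def euclidean(a, b):
    """Pointwise Euclidean distance between equal-length profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dtw(a, b):
    """Classic dynamic time warping with absolute-difference cost."""
    n, m = len(a), len(b)
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

manual = [12.0, 10.5, 8.0, 4.0, 0.0]     # driver's approach to a stop
automated = [12.0, 11.0, 9.0, 5.0, 0.0]  # automation's approach to a stop
```

A smaller distance under either measure corresponds to higher objective similarity; DTW additionally tolerates small timing offsets between the two profiles.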
Abstract: Despite the increasingly successful application of neural networks to many problems in the geosciences, their complex and nonlinear structure makes the interpretation of their predictions difficult, which limits model trust and does not allow scientists to gain physical insights about the problem at hand. Many different methods have been introduced in the emerging field of eXplainable Artificial Intelligence (XAI), which aims at attributing the network's prediction to specific features in the input domain. XAI methods are usually assessed by using benchmark datasets (such as MNIST or ImageNet for image classification). However, an objective, theoretically derived ground truth for the attribution is lacking for most of these datasets, making the assessment of XAI in many cases subjective. Also, benchmark datasets specifically designed for problems in geosciences are rare. Here, we provide a framework, based on the use of additively separable functions, to generate attribution benchmark datasets for regression problems for which the ground truth of the attribution is known a priori. We generate a large benchmark dataset and train a fully connected network to learn the underlying function that was used for simulation. We then compare estimated heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly. We believe that attribution benchmarks as the ones introduced herein are of great importance for further application of neural networks in the geosciences, and for more objective assessment and accurate implementation of XAI methods, which will increase model trust and assist in discovering new science.
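The additively separable construction can be illustrated in a few lines: if the target is a sum of per-feature terms, each input's exact contribution is known by construction. The component functions and sample counts below are arbitrary choices for illustration, not the ones used to build the paper's benchmark.

```python
# Illustrative additively separable benchmark: y = f1(x1) + f2(x2), so the
# ground-truth attribution of each input is its own term's value.
import random

def f1(x1):
    return x1 ** 2        # contribution attributable to feature 1

def f2(x2):
    return 3.0 * x2       # contribution attributable to feature 2

random.seed(0)
dataset = []
for _ in range(100):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    target = f1(x1) + f2(x2)
    ground_truth_attribution = (f1(x1), f2(x2))
    dataset.append(((x1, x2), target, ground_truth_attribution))
```

An XAI method's estimated heatmap for a network trained on such data can then be scored sample by sample against `ground_truth_attribution`, since the attributions sum exactly to the regression target.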
Objective: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Background: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time. Method: Seventy-five participants took part in an aided memory recognition task. In the task, participants viewed a series of images and later performed 40 trials of the recognition task to identify a target image when it was presented with a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and performed the final recognition. After each trial, participants reported their trust on a visual analog scale. Results: Outcome bias and the contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task on their own. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. Conclusion: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics and biases. Application: Understanding the trust adjustment process enables accurate prediction of operators' moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
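The asymmetries reported in that abstract's Results can be summarized as a signed per-trial trust adjustment. Every magnitude below is invented for illustration; only the ordering of effects (failures outweigh successes, and outcomes modulate the size of the change) follows the abstract.

```python
# Hedged sketch of the reported asymmetries (all magnitudes invented):
# failures outweigh successes, an undesirable final outcome deepens the
# decrement after a failure, and a success helps more when the operator's
# own initial response was wrong.
def trust_change(automation_correct, final_correct, initial_correct,
                 gain=0.04, loss=0.10, bias=0.05):
    """Signed trust adjustment for one trial."""
    if automation_correct:
        return gain + (bias if not initial_correct else 0.0)
    return -(loss + (bias if not final_correct else 0.0))

# Failure with a bad final outcome: the largest decrement.
worst = trust_change(False, False, True)
# Success after the operator's own initial failure: the largest increment.
best = trust_change(True, True, False)
```

Accumulating such signed adjustments trial by trial is one way a trust-aware adaptive system could track an operator's moment-to-moment trust.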