Trust is crucial for ensuring the safety, security, and widespread adoption of automated vehicles (AVs); if trust is lacking, drivers and the general public may hesitate to embrace this technology. This research investigates contextualized trust profiles in order to create personalized experiences for drivers in AVs with varying levels of reliability. A driving simulator experiment involving 70 participants revealed three distinct contextualized trust profiles (i.e., confident copilots, myopic pragmatists, and reluctant automators) identified through K-means clustering and analyzed in relation to drivers' dynamic trust, dispositional trust, initial learned trust, personality traits, and emotions. The experiment encompassed eight scenarios in which participants were requested to take over control from the AV in three conditions: a control condition, a false alarm condition, and a miss condition. To validate the profiles, a multinomial logistic regression model was constructed to predict contextualized trust profiles, achieving an F1-score of 0.90 and an accuracy of 0.89, and a Shapley Additive Explanations (SHAP) explainer was used to determine the most influential features. In addition, an examination of how individual factors affect contextualized trust profiles provided valuable insights into trust dynamics from a user-centric perspective. The outcomes of this research hold significant implications for the development of personalized in-vehicle trust monitoring and calibration systems to modulate drivers' trust levels, thereby enhancing safety and user experience in automated driving.
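A minimal sketch of the kind of pipeline the abstract describes: K-means clustering to derive three trust profiles, a multinomial logistic regression to predict profile membership, and a SHAP explainer to rank feature influence. The file name, feature columns, and preprocessing below are hypothetical placeholders, not the study's actual data or code.

```python
# Sketch: cluster drivers into contextualized trust profiles, then explain a
# profile classifier with SHAP. File path and column names are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("trust_measures.csv")  # hypothetical per-participant data
features = ["dynamic_trust", "dispositional_trust", "initial_learned_trust",
            "extraversion", "neuroticism", "positive_emotion", "negative_emotion"]
X = StandardScaler().fit_transform(df[features])

# Step 1: identify trust profiles with K-means (k = 3, as in the abstract).
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: predict profile membership with multinomial logistic regression.
X_tr, X_te, y_tr, y_te = train_test_split(X, profiles, test_size=0.3,
                                          random_state=0, stratify=profiles)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, y_hat))
print("macro F1:", f1_score(y_te, y_hat, average="macro"))

# Step 3: rank feature influence with a model-agnostic SHAP explainer.
explainer = shap.Explainer(clf.predict_proba, X_tr)
sv = explainer(X_te)  # values shaped (samples, features, classes)
importance = np.abs(sv.values).mean(axis=(0, 2))
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```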
Anticipated emotions associated with trust in autonomous vehicles
Trust in automation has mainly been studied from a cognitive perspective, though some researchers have shown that trust is also influenced by emotion. It is therefore essential to investigate the relationships between emotions and trust. In this study, we explored the pattern of 19 anticipated emotions associated with two levels of trust (i.e., low vs. high) elicited by two levels of autonomous vehicle (AV) performance (i.e., failure and non-failure) among 105 participants recruited from Amazon Mechanical Turk (AMT). Trust was assessed at three layers, i.e., dispositional, initial learned, and situational trust. The study was designed to measure how emotions are affected by low and high levels of trust. Situational trust was significantly correlated with emotions: a high level of trust significantly improved participants' positive emotions, and vice versa. We also identified the underlying factors of emotions associated with situational trust. Our results offer important implications regarding anticipated emotions associated with trust in AVs.
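A small illustrative sketch of the type of analysis summarized above: correlating anticipated-emotion ratings with situational trust and extracting underlying emotion factors. The file name and the emotion items listed are hypothetical stand-ins for the study's 19 anticipated emotions.

```python
# Sketch: relate anticipated-emotion ratings to situational trust and extract
# underlying emotion factors. File path and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("anticipated_emotions.csv")  # hypothetical AMT survey export
emotions = ["joy", "relief", "hope", "anxiety", "fear", "anger"]  # illustrative subset

# Correlate each emotion rating with situational trust.
for emo in emotions:
    rho, p = spearmanr(df["situational_trust"], df[emo])
    print(f"{emo}: rho={rho:.2f}, p={p:.3f}")

# Exploratory factor analysis to find underlying emotion factors.
Z = StandardScaler().fit_transform(df[emotions])
fa = FactorAnalysis(n_components=2, random_state=0).fit(Z)
loadings = pd.DataFrame(fa.components_.T, index=emotions,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```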
- Award ID(s): 2138274
- PAR ID: 10383403
- Date Published:
- Journal Name: Proceedings of the Human Factors and Ergonomics Society Annual Meeting
- Volume: 66
- Issue: 1
- ISSN: 2169-5067
- Page Range / eLocation ID: 199 to 203
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Autonomous Vehicle (AV) technology has the potential to significantly improve driver safety. Unfortunately, drivers could be reluctant to ride with AVs due to a lack of trust in and acceptance of AVs' driving styles. The present study investigated the impact of drivers' driving style (aggressive/defensive) and the designed driving styles of AVs (aggressive/defensive) on drivers' trust, acceptance, and takeover behavior in fully autonomous vehicles. Thirty-two participants were classified into two groups based on their driving styles using the Aggressive Driving Scale and experienced twelve scenarios in either an aggressive AV or a defensive AV. Results revealed that drivers' trust, acceptance, and takeover frequency were significantly influenced by the interaction effects between the AV's driving style and the driver's driving style. The findings imply that drivers' individual differences should be considered in the design of AVs' driving styles to enhance drivers' trust in and acceptance of AVs and reduce undesired takeover behaviors.
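The interaction effect reported above could be tested with a 2 x 2 between-subjects ANOVA; a brief sketch follows, with a hypothetical data file and column names (the study's actual analysis is not reproduced here).

```python
# Sketch: 2 x 2 between-subjects ANOVA testing the driver-style x AV-style
# interaction on post-drive trust ratings. File and columns are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("av_style_study.csv")  # columns: driver_style, av_style, trust
model = ols("trust ~ C(driver_style) * C(av_style)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the interaction term tests the key effect
```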
-
Facial expressions of emotions by people with visual impairment and blindness via video conferencing
Many people, including those with visual impairment and blindness, take advantage of video conferencing tools to meet people. Video conferencing tools enable them to share facial expressions, which are considered one of the most important aspects of human communication. This study aims to advance knowledge of how those with visual impairment and blindness share their facial expressions of emotions virtually. This study invited a convenience sample of 28 adults with visual impairment and blindness to Zoom video conferencing. The participants were instructed to pose facial expressions of basic human emotions (anger, fear, disgust, happiness, surprise, neutrality, calmness, and sadness), which were video recorded. The facial expressions were analyzed using the Facial Action Coding System (FACS), which encodes the movement of specific facial muscles called Action Units (AUs). This study found that a particular set of AUs was significantly engaged in expressing each emotion, except for sadness. Individual differences were also found in AUs, influenced by the participants' visual acuity levels and emotional characteristics such as valence and arousal levels. The research findings are anticipated to serve as a foundation of knowledge, contributing to the development of emotion-sensing technologies for those with visual impairment and blindness.
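A rough sketch of how one might test which Action Units are significantly engaged for a given posed emotion, assuming AU intensities have already been coded (the data file, column names, and the nonparametric test used are illustrative assumptions, not the study's method).

```python
# Sketch: test which FACS Action Units are engaged for a posed emotion, given
# per-frame AU intensities already coded. File and columns are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("au_intensities.csv")  # columns: emotion, AU04, AU06, AU12, ...
aus = [c for c in df.columns if c.startswith("AU")]

target = "happiness"
for au in aus:
    in_target = df.loc[df["emotion"] == target, au]
    in_others = df.loc[df["emotion"] != target, au]
    stat, p = mannwhitneyu(in_target, in_others, alternative="greater")
    if p < 0.05:
        print(f"{au} is significantly more active in {target} (p={p:.3f})")
```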
-
As the influence of social robots in people's daily lives grows, research on understanding people's perception of robots, including sociability, trust, acceptance, and preference, becomes more pervasive. Research has considered visual, vocal, or tactile cues to express robots' emotions, whereas little research has provided a holistic view examining the interactions among different factors influencing emotion perception. We investigated multiple facets of user perception of robots during a conversational task by varying the robots' voice types, appearances, and emotions. In our experiment, 20 participants interacted with two robots having four different voice types. While participants read fairy tales to the robot, the robot gave vocal feedback with seven emotions, and the participants evaluated the robot's profiles through post surveys. The results indicate that (1) the accuracy of emotion perception differed depending on the presented emotion, (2) a regular human voice showed higher user preference and naturalness, (3) a characterized voice was more appropriate for expressing emotions, with significantly higher accuracy in emotion perception, and (4) participants showed significantly higher emotion recognition accuracy with the animal robot than with the humanoid robot. A follow-up study ([Formula: see text]) with voice-only conditions confirmed the importance of embodiment. The results from this study could provide the guidelines needed to design social robots that consider emotional aspects in conversations between robots and users.
-
Trust calibration poses a significant challenge in the interaction between drivers and automated vehicles (AVs) in the context of human-automation collaboration. To effectively calibrate trust, it is crucial to accurately measure drivers' trust levels in real time, allowing for timely interventions or adjustments in automated driving. One viable approach involves employing machine learning models and physiological measures to model the dynamic changes in trust. This study introduces a technique that leverages machine learning models to predict drivers' real-time dynamic trust in conditional AVs using physiological measurements. We conducted the study in a driving simulator where participants were requested to take over control from automated driving in three conditions: a control condition, a false alarm condition, and a miss condition. Each condition had eight takeover requests (TORs) in different scenarios. Drivers' physiological measures were recorded during the experiment, including galvanic skin response (GSR), heart rate (HR) indices, and eye-tracking metrics. Using five machine learning models, we found that eXtreme Gradient Boosting (XGBoost) performed the best and was able to predict drivers' trust in real time with an F1-score of 89.1%, compared to 84.5% for a baseline K-nearest neighbor classifier. Our findings offer useful guidance on how to design an in-vehicle trust monitoring system that calibrates drivers' trust and facilitates real-time interaction between the driver and the AV.
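A minimal sketch of the modeling approach described in the abstract: predicting a trust label from windowed physiological features with XGBoost and comparing it against a K-nearest-neighbor baseline. The data file, feature names, and hyperparameters are hypothetical; the study's actual feature engineering is not reproduced.

```python
# Sketch: predict a binary trust state from windowed physiological features and
# compare XGBoost against a K-nearest-neighbor baseline. Columns are hypothetical.
import pandas as pd
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

df = pd.read_csv("physio_windows.csv")  # hypothetical per-window feature export
features = ["gsr_mean", "gsr_peaks", "hr_mean", "hrv_rmssd",
            "fixation_duration", "pupil_diameter"]
X, y = df[features], df["trust_label"]  # e.g., 1 = trust, 0 = distrust

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

xgb = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss").fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

print("XGBoost F1:", f1_score(y_te, xgb.predict(X_te)))
print("KNN baseline F1:", f1_score(y_te, knn.predict(X_te)))
```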