Flood risk communication is critical to supporting people’s decision making in flood situations. Such warnings can be delivered through navigation applications on mobile devices. The current study investigated how flood-depth information affected drivers’ actions in response to flood warnings from a mobile navigation application in a driving simulator. The study manipulated the type of flood warning presented to participants across driving scenarios and measured their actions when approaching a potentially flooded roadway. Participants experienced six drives with different flood warning conditions. Results indicated that providing flood-depth information helped drivers estimate the depth of the flood more accurately and calibrate their perceived risk; more detailed information also helped drivers make informed decisions about a flooded roadway. We suggest that designers include flood-depth information to help drivers accurately perceive the depth of, and the risk posed by, a flooded roadway.
The increasing threat of inland flooding due to precipitation changes and floodplain development necessitates efficient real-time flood detection and communication methods. While automated flood-warning systems facilitate such communication, they are susceptible to errors such as false alarms and misses, which can undermine drivers’ trust during flood events. This study examined how system accuracy and error type affect perceived system reliability, as well as drivers’ trust and behaviors. Our results showed that both false alarms and misses lowered drivers’ perceived system reliability, and drivers were more inclined to follow recommendations from a system with higher reliability than from one with lower reliability. Misses and false alarms influenced drivers’ reliance and compliance behaviors differently. These findings help predict how system reliability level and error type shape drivers’ responses to automated flood-warning systems, potentially contributing to their design and calibration.
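To make the error taxonomy concrete, here is a minimal Python sketch (ours, not the study’s materials; the `Trial` fields and function names are illustrative assumptions). It classifies each warning event into the four signal-detection outcomes, where a false alarm is a warning with no flood and a miss is a flood with no warning, and computes reliability as the proportion of correct system decisions:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One encounter with a potentially flooded road segment."""
    warning_issued: bool  # did the system warn the driver?
    flood_present: bool   # was the road actually flooded?

def classify(trial: Trial) -> str:
    """Map a trial onto the four signal-detection outcomes."""
    if trial.warning_issued and trial.flood_present:
        return "hit"
    if trial.warning_issued and not trial.flood_present:
        return "false alarm"
    if not trial.warning_issued and trial.flood_present:
        return "miss"
    return "correct rejection"

def reliability(trials: list[Trial]) -> float:
    """Proportion of trials on which the system's decision was correct."""
    correct = sum(t.warning_issued == t.flood_present for t in trials)
    return correct / len(trials)

trials = [Trial(True, True), Trial(True, False),
          Trial(False, True), Trial(False, False)]
print([classify(t) for t in trials])  # hit, false alarm, miss, correct rejection
print(reliability(trials))            # 2 of 4 decisions correct -> 0.5
```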
- PAR ID: 10538146
- Publisher / Repository: SAGE Publications
- Journal Name: Proceedings of the Human Factors and Ergonomics Society Annual Meeting
- Volume: 68
- Issue: 1
- ISSN: 1071-1813
- Size(s): p. 859-860
- Sponsoring Org: National Science Foundation
More Like this
Objective This study investigated the impact of driving styles of drivers and automated vehicles (AVs) on drivers’ perception of automated driving maneuvers and quantified the relationships among drivers’ perception of AV maneuvers, driver trust, and acceptance of AVs.
Background Previous studies on automated driving styles focused on the impact of an AV’s global driving style on drivers’ attitudes and driving performance. However, research on how drivers perceive specific automated driving maneuvers is still lacking.
Method Sixteen aggressive drivers and sixteen defensive drivers were recruited to experience twelve driving scenarios in either an aggressive AV or a defensive AV in a driving simulator. Their perception of AV maneuvers, trust, and acceptance were measured via questionnaires, and driving performance was collected via the driving simulator.
Results Results revealed that drivers’ trust in and acceptance of AVs decreased significantly if they perceived the AV to have a higher speed, a larger or smaller deceleration, or a shorter stopping distance than expected. Moreover, defensive drivers perceived significantly greater inappropriateness of these maneuvers from aggressive AVs than from defensive AVs, whereas aggressive drivers’ perceived inappropriateness did not differ significantly between AV driving styles.
Conclusion The driving styles of automated vehicles and of drivers influenced drivers’ perception of automated driving maneuvers, which in turn influenced their trust in and acceptance of AVs.
Application This study suggested that the design of AVs should consider drivers’ perceptions of automated driving maneuvers to avoid undermining drivers’ trust and acceptance of AVs.
Trust calibration poses a significant challenge in the interaction between drivers and automated vehicles (AVs) in the context of human-automation collaboration. To calibrate trust effectively, it is crucial to measure drivers’ trust levels in real time, allowing for timely interventions or adjustments in the automated driving. One viable approach involves employing machine learning models and physiological measures to model the dynamic changes in trust. This study introduces a technique that leverages machine learning models to predict drivers’ real-time dynamic trust in conditional AVs using physiological measurements. We conducted the study in a driving simulator where participants were asked to take over control from automated driving in three conditions: a control condition, a false alarm condition, and a miss condition. Each condition had eight takeover requests (TORs) in different scenarios. Drivers’ physiological measures were recorded during the experiment, including galvanic skin response (GSR), heart rate (HR) indices, and eye-tracking metrics. Among five machine learning models, eXtreme Gradient Boosting (XGBoost) performed best, predicting drivers’ trust in real time with an F1 score of 89.1%, compared with 84.5% for a K-nearest neighbor baseline. Our findings have implications for the design of in-vehicle trust monitoring systems that calibrate drivers’ trust to facilitate real-time interaction between the driver and the AV.
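As an illustration of such a pipeline (a minimal sketch of ours, not the authors’ code; the placeholder data, feature list, and hyperparameters are assumptions), the following trains an XGBoost classifier on synthetic features standing in for windowed GSR, HR, and eye-tracking measures and reports an F1 score:

```python
# Minimal sketch: binary trust classification from physiological features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Synthetic placeholder features, e.g. [mean GSR, GSR peak count,
# mean HR, HR variability, fixation duration, pupil diameter] per time window.
X = rng.normal(size=(800, 6))
y = rng.integers(0, 2, size=800)  # 1 = trust, 0 = distrust (illustrative labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4,
                      learning_rate=0.1, eval_metric="logloss")
model.fit(X_train, y_train)
print(f"F1 = {f1_score(y_test, model.predict(X_test)):.3f}")
```

With real labeled windows in place of the synthetic arrays, the same skeleton supports comparing XGBoost against baselines such as K-nearest neighbors.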
Trust in autonomous teammates has been shown to be a key factor in human-autonomy team (HAT) performance, and anthropomorphism is a closely related construct that is underexplored in the HAT literature. This study investigates whether perceived anthropomorphism can be measured from team communication behaviors in a simulated remotely piloted aircraft system task environment, in which two humans in unique roles were asked to team with a synthetic (i.e., autonomous) pilot agent. We compared verbal and self-reported measures of anthropomorphism with team error-handling performance and trust in the synthetic pilot. Results show that trends in verbal anthropomorphism follow the same patterns expected from self-reported measures of anthropomorphism with respect to fluctuations in trust resulting from autonomy failures.
We conducted a meta-analysis to determine how people blindly comply with, rely on, and depend on diagnostic automation. We searched three databases using combinations of human-behavior keywords with automation keywords, covering January 1996 to June 2021. In total, 8 records and 68 data points were identified. Because data points were nested within research records, we built multi-level models (MLM) to quantify the relationships between blind compliance and positive predictive value (PPV), blind reliance and negative predictive value (NPV), and blind dependence and overall success likelihood (OSL). Results show that as the automation’s PPV, NPV, and OSL increase, human operators are more likely to blindly follow the automation’s recommendation. Operators appear to adjust their reliance behaviors more than their compliance and dependence. We recommend that researchers report specific automation trial information (i.e., hits, false alarms, misses, and correct rejections) and human behaviors (compliance and reliance) rather than automation OSL and dependence. Future work could examine how operator behaviors change when operators are not blind to raw data. Researchers, designers, and engineers could leverage understanding of operator behaviors to inform training procedures and to benefit individual operators during repeated automation use.
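As a worked illustration of these quantities (our own sketch, not material from the meta-analysis), the following Python computes PPV, NPV, and OSL directly from the four trial counts the authors recommend reporting:

```python
def ppv(hits: int, false_alarms: int) -> float:
    """Positive predictive value: P(event | automation signaled)."""
    return hits / (hits + false_alarms)

def npv(correct_rejections: int, misses: int) -> float:
    """Negative predictive value: P(no event | automation stayed silent)."""
    return correct_rejections / (correct_rejections + misses)

def osl(hits: int, false_alarms: int, misses: int, correct_rejections: int) -> float:
    """Overall success likelihood: proportion of correct automation decisions."""
    total = hits + false_alarms + misses + correct_rejections
    return (hits + correct_rejections) / total

# Hypothetical counts: 40 hits, 5 false alarms, 3 misses, 52 correct rejections.
print(ppv(40, 5))         # 0.889 -> how trustworthy an alarm is
print(npv(52, 3))         # 0.945 -> how trustworthy silence is
print(osl(40, 5, 3, 52))  # 0.920 -> overall correctness
```

Reporting the four raw counts, as the authors recommend, lets readers recover all three quantities; reporting OSL alone does not.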