This paper discusses three points inspired by Skraaning and Jamieson’s perspective on automation failure: (a) the limitations of the automation failure concept with expanding system boundaries; (b) parallels between the failure to grasp automation failure and the failure to grasp trust in automation; (c) benefits of taking a pluralistic approach to definitions in sociotechnical systems science. While a taxonomy of automation-involved failures may not directly improve our understanding of how to prevent those failures, it could be instrumental for identifying hazards during test and evaluation of operational systems.
Expanding Human Response to Automation Failures to Sociotechnical Systems
Skraaning and Jamieson raise interesting issues concerning human responses to automation failures and offer a taxonomy of failure types that broadens the definition of automation failure. This commentary attempts to broaden the scope further by placing failures within a sociotechnical system comprising multiple humans and multiple machine components, including automation. It offers a suggestion for how one might understand the system's response to automation failures and raises the inclusion of autonomy as a further complication.
- Award ID(s): 1828010
- PAR ID: 10515616
- Publisher / Repository: SAGE Publications
- Journal Name: Journal of Cognitive Engineering and Decision Making
- ISSN: 1555-3434
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Resilient teams overcome sudden, dynamic changes by enacting rapid, adaptive responses that maintain system effectiveness. We analyzed two experiments on human-autonomy teams (HATs) operating a simulated remotely piloted aircraft system (RPAS) and correlated dynamical measures of resilience with measures of team performance. Across both experiments, HATs experienced automation and autonomy failures, using a Wizard of Oz paradigm. Team performance was measured in multiple ways, using a mission-level performance score, a target processing efficiency score, a failure overcome score, and a ground truth resilience score. Novel dynamical systems metrics of resilience measured the timing of system reorganization in response to failures across RPAS layers, including vehicle, controls, communications layers, and the system overall. Time to achieve extreme values of reorganization and novelty of reorganization were consistently correlated with target processing efficiency and ground truth resilience across both studies. Correlations with mission-level performance and the overcome score were apparent but less consistent. Across both studies, teams displayed greater system reorganization during failures compared to routine task conditions. The second experiment revealed differential effects of team training focused on coordination coaching and trust calibration. These results inform the measurement and training of resilience in HATs using objective, real-time resilience analysis.
-
Objective: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Background: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time. Method: Seventy-five participants took part in an aided memory recognition task. In the task, participants viewed a series of images and later performed 40 trials of the recognition task to identify a target image when it was presented with a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and performed the final recognition. After each trial, participants reported their trust on a visual analog scale. Results: Outcome bias and the contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task on their own. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. Conclusion: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics and biases. Application: Understanding the trust adjustment process enables accurate prediction of operators' moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
-
This study examined the impact of experience on individuals' dependence behavior and response strategies when interacting with imperfect automation. Forty-one participants used an automated aid to complete a dual-task scenario comprising a compensatory tracking task and a threat detection task. The experiment was divided into four quarters, and multi-level models (MLM) were built to investigate the relationship between experience and the dependent variables. Results show that compliance and reliance behaviors and performance scores significantly increased as participants gained more experience with automation. In addition, as the experiment progressed, a significant number of participants adapted to the automation and resorted to an extreme-use response strategy. The findings of this study suggest that automation response strategies are not static and that most individual operators eventually follow or discard the automation. Understanding individual response strategies can support the development of individualized automation systems and improve operator training.
-
Extended exposure to reliable automation may lead to overreliance, as evidenced by poor responses to automation errors. Individual differences in trust may also influence responses. We investigated how these factors affect responses to automation errors in a driving simulator study comprising stop-controlled and uncontrolled intersections. Drivers experienced reliable vehicle automation during six drives in which they indicated whether they felt the automation was going too slow or too fast by pressing the accelerator or brake pedal. Engagement via pedal presses did not affect the automation but offered an objective measure of trust in automation. In the final drive, an error occurred in which the vehicle failed to stop at a stop-controlled intersection. Drivers' response to the error was inferred from brake presses. Mixture models showed bimodal response times and revealed that drivers with high trust were less likely to respond to automation errors than drivers with low trust.