

Search for: All records

Creators/Authors contains: "Yang, X Jessie"



  1. Free, publicly-accessible full text available May 15, 2025
  2. Trust calibration poses a significant challenge in the interaction between drivers and automated vehicles (AVs) in the context of human-automation collaboration. To calibrate trust effectively, it is crucial to measure drivers' trust levels accurately in real time, allowing for timely interventions or adjustments to the automated driving. One viable approach involves employing machine learning models and physiological measures to model the dynamic changes in trust. This study introduces a technique that leverages machine learning models to predict drivers' real-time dynamic trust in conditional AVs using physiological measurements. We conducted the study in a driving simulator where participants were asked to take over control from automated driving in three conditions: a control condition, a false alarm condition, and a miss condition. Each condition had eight takeover requests (TORs) in different scenarios. Drivers' physiological measures were recorded during the experiment, including galvanic skin response (GSR), heart rate (HR) indices, and eye-tracking metrics. Among five machine learning models, eXtreme Gradient Boosting (XGBoost) performed best, predicting drivers' trust in real time with an F1 score of 89.1%, compared with 84.5% for a baseline K-nearest neighbor classifier. Our findings offer guidance on how to design an in-vehicle trust monitoring system that calibrates drivers' trust and facilitates real-time interaction between the driver and the AV.
    Free, publicly-accessible full text available December 1, 2024
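
A minimal sketch of the kind of pipeline this abstract describes, assuming scikit-learn and xgboost are available: a gradient-boosted classifier predicting a binary trust label from physiological features, scored with F1 against a K-nearest neighbor baseline. The feature set, synthetic data, labels, and hyperparameters below are placeholders, not the study's dataset or code.

```python
# Hypothetical sketch: predict a binary trust label from physiological
# features with XGBoost and compare against a K-nearest neighbor baseline.
# Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Placeholder features standing in for GSR, heart-rate indices, and eye-tracking metrics.
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

baseline = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
xgb = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1).fit(
    X_train, y_train
)

print("KNN F1:    ", f1_score(y_test, baseline.predict(X_test)))
print("XGBoost F1:", f1_score(y_test, xgb.predict(X_test)))
```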
  3. This study examined the impact of experience on individuals' dependence behavior and response strategies when interacting with imperfect automation. Forty-one participants used an automated aid to complete a dual-task scenario comprising a compensatory tracking task and a threat detection task. The experiment was divided into four quarters, and multi-level models (MLM) were built to investigate the relationship between experience and the dependent variables. Results show that compliance and reliance behaviors and performance scores increased significantly as participants gained more experience with the automation. In addition, as the experiment progressed, a significant number of participants adapted to the automation and resorted to an extreme-use response strategy. The findings suggest that automation response strategies are not static and that most individual operators eventually follow or discard the automation. Understanding individual response strategies can support the development of individualized automation systems and improve operator training.

     
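
A multi-level (mixed-effects) model of the kind mentioned in this abstract can be fitted with standard tools. The sketch below assumes statsmodels; the column names, the simulated data, and the random-intercept specification are illustrative assumptions, not the study's actual variables or model.

```python
# Hypothetical sketch: a multi-level model with random intercepts per
# participant, relating experiment quarter (experience) to compliance.
# Column names and data are placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_quarters = 41, 4
data = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_quarters),
    "quarter": np.tile(np.arange(1, n_quarters + 1), n_participants),
})
# Simulated compliance rate that increases with experience.
data["compliance"] = (
    0.5 + 0.05 * data["quarter"] + rng.normal(scale=0.05, size=len(data))
)

model = smf.mixedlm("compliance ~ quarter", data, groups=data["participant"])
result = model.fit()
print(result.summary())
```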
  4. Objective

    We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation.

    Background

    Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time.

    Method

    Seventy-five participants took part in an aided memory recognition task. In the task, participants viewed a series of images and later on performed 40 trials of the recognition task to identify a target image when it was presented with a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and performed the final recognition. After each trial, participants reported their trust on a visual analog scale.

    Results

    Outcome bias and contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task on their own. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes.

    Conclusion

    Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics/biases.

    Application

    Understanding the trust adjustment process enables accurate prediction of the operators’ moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.

     
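
The qualitative pattern reported in the Results above (failures outweigh successes, and adjustments are modulated by the final outcome and the operator's own performance) can be illustrated with a toy update rule. The gain values below are arbitrary illustrations, not parameters estimated in the study, and the rule itself is not the authors' model.

```python
# Toy illustration of asymmetric trust adjustment on a 0-100 visual-analog
# scale. The gains are arbitrary and only encode the qualitative pattern in
# the abstract: failures outweigh successes; the decrement grows when the
# final outcome is bad (outcome bias) and when the operator succeeded on
# their own (contrast effect); the increment grows when the operator failed.
def update_trust(trust, automation_correct, final_outcome_good, operator_correct):
    if automation_correct:
        # Success: larger increment when the operator failed on their own.
        delta = 3.0 + (4.0 if not operator_correct else 0.0)
    else:
        # Failure: larger decrement when the final outcome is undesirable,
        # and marginally larger when the operator succeeded by themselves.
        delta = -(8.0
                  + (5.0 if not final_outcome_good else 0.0)
                  + (2.0 if operator_correct else 0.0))
    return min(100.0, max(0.0, trust + delta))

trust = 70.0
trust = update_trust(trust, automation_correct=False,
                     final_outcome_good=False, operator_correct=True)
print(trust)  # trust drops sharply after a costly automation failure
```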
  5. Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, any human agent has two types of experiences with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that the TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
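
The abstract does not spell out the TIP model's equations, so the sketch below only illustrates the distinction it draws between direct and indirect experiences: a human updates trust in a robot from their own interactions and, indirectly, from a teammate's experience with the same robot. The update rules and weights are hypothetical, invented for illustration; they are not the published TIP model.

```python
# Hypothetical illustration of direct vs. indirect trust updates in a
# multi-human multi-robot team. The rules and weights are invented for
# illustration and are not the published TIP model.
def direct_update(trust, interaction_success, rate=0.2):
    """Move trust toward 1 after a successful direct interaction, toward 0 otherwise."""
    target = 1.0 if interaction_success else 0.0
    return trust + rate * (target - trust)

def indirect_update(trust, teammate_trust, weight=0.1):
    """Nudge trust toward a teammate's reported trust in the same robot."""
    return trust + weight * (teammate_trust - trust)

# Two humans (A, B) and one drone: A interacts directly, B only hears about it.
trust_a, trust_b = 0.5, 0.5
trust_a = direct_update(trust_a, interaction_success=True)
trust_b = indirect_update(trust_b, teammate_trust=trust_a)
print(trust_a, trust_b)
```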
  6. We conducted a meta-analysis to determine how people blindly comply with, rely on, and depend on diagnostic automation. We searched three databases using combinations of human behavior keywords with automation keywords, covering the period from January 1996 to June 2021. In total, 8 records containing 68 data points were identified. Because data points were nested within research records, we built multi-level models (MLM) to quantify the relationships between blind compliance and positive predictive value (PPV), blind reliance and negative predictive value (NPV), and blind dependence and overall success likelihood (OSL). Results show that as the automation's PPV, NPV, and OSL increase, human operators are more likely to blindly follow the automation's recommendation. Operators appear to adjust their reliance behaviors more than their compliance and dependence. We recommend that researchers report specific automation trial information (i.e., hits, false alarms, misses, and correct rejections) and human behaviors (compliance and reliance) rather than automation OSL and dependence. Future work could examine how operator behaviors change when operators are not blind to raw data. Researchers, designers, and engineers could leverage an understanding of operator behaviors to inform training procedures and to benefit individual operators during repeated automation use.
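
The automation quantities this meta-analysis relates to operator behavior can be computed directly from the trial counts the authors recommend reporting. The definitions below are the standard signal-detection ones (PPV, NPV, and overall success likelihood); the example counts are made up for illustration.

```python
# Compute automation PPV, NPV, and overall success likelihood (OSL) from
# hits, false alarms, misses, and correct rejections. Example counts are
# made up for illustration.
def automation_rates(hits, false_alarms, misses, correct_rejections):
    total = hits + false_alarms + misses + correct_rejections
    ppv = hits / (hits + false_alarms)                         # alarms that were true
    npv = correct_rejections / (correct_rejections + misses)   # non-alarms that were true
    osl = (hits + correct_rejections) / total                  # overall success likelihood
    return ppv, npv, osl

ppv, npv, osl = automation_rates(hits=40, false_alarms=10, misses=5,
                                 correct_rejections=45)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}, OSL={osl:.2f}")
```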