Title: Autonomous Vehicle Visual Embodiment for Pedestrian Interactions in Crossing Scenarios: Virtual Drivers in AVs for Pedestrian Crossing
This work presents a novel prototype autonomous vehicle (AV) human-machine interface (HMI) in virtual reality (VR) that uses a human-like visual embodiment in the driver’s seat of an AV to communicate AV intent to pedestrians in a crosswalk scenario. Despite the demonstrated efficacy of human-like interfaces in improving human-machine relationships, there is currently a gap in understanding the use of virtual humans in AV HMIs for pedestrian crossing. We conducted a 3×2 within-subjects experiment in VR using our prototype to assess the effects of a virtual-human visual embodiment AV HMI on pedestrian crossing behavior and experience. Participants walked across a virtual crosswalk in front of an AV; we measured how long they took to decide to cross and how long they took to reach the other side, and collected their subjective preferences and feelings of safety. Of 26 participants, 25 preferred the condition with the most anthropomorphic features. An intermediate condition, in which a human-like virtual driver was present but exhibited no behaviors, was least preferred and also had a significant effect on time to decide. This work contributes the first empirical study of human-like visual embodiments for AV HMIs.
Award ID(s):
1800961
NSF-PAR ID:
10275677
Author(s) / Creator(s):
Date Published:
Journal Name:
Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objective

    This study used a virtual environment to examine how older and younger pedestrians responded to simulated augmented reality (AR) overlays that indicated the crossability of gaps in a continuous stream of traffic.

    Background

    Older adults represent a vulnerable group of pedestrians. AR has the potential to make the task of street-crossing safer and easier for older adults.

    Method

    We used an immersive virtual environment to conduct a study with age group and condition as between-subjects factors. In the control condition, older and younger participants crossed a continuous stream of traffic without simulated AR overlays. In the AR condition, older and younger participants crossed with simulated AR overlays signaling whether gaps between vehicles were safe or unsafe to cross. Participants were subsequently interviewed about their experience.

    Results

    We found that participants were more selective in their crossing decisions and took safer gaps in the AR condition as compared to the control condition. Older adult participants also reported reduced mental and physical demand in the AR condition compared to the control condition.

    Conclusion

    AR overlays that display the crossability of gaps between vehicles have the potential to make street-crossing safer and easier for older adults. Additional research is needed in more complex real-world scenarios to further examine how AR overlays impact pedestrian behavior.

    Application

    With rapid advances in autonomous vehicle and vehicle-to-pedestrian communication technologies, it is critical to study how pedestrians can be better supported. Our research provides key insights for ways to improve pedestrian safety applications using emerging technologies like AR.
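
    To make the idea concrete, below is a minimal Python sketch of the kind of gap-crossability rule such an AR overlay could encode. The study does not specify its algorithm, so the function, walking speed, and safety margin here are illustrative assumptions only.

```python
def gap_is_crossable(gap_duration_s: float,
                     street_width_m: float,
                     walking_speed_mps: float = 1.2,
                     safety_margin_s: float = 1.5) -> bool:
    """Hypothetical rule: label a traffic gap 'safe' if the pedestrian
    can clear the street before the next vehicle arrives, plus a margin."""
    crossing_time_s = street_width_m / walking_speed_mps
    return gap_duration_s >= crossing_time_s + safety_margin_s

# A 7 m street at 1.2 m/s takes ~5.8 s to cross, so with a 1.5 s margin
# a 9 s gap is labeled safe and a 6 s gap is labeled unsafe.
print(gap_is_crossable(9.0, 7.0))  # True
print(gap_is_crossable(6.0, 7.0))  # False
```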

  2. In this work, we investigate the influence that audio and visual feedback have on a manipulation task in virtual reality (VR). Without the tactile feedback of a controller, grasping virtual objects with one’s hands can result in slower interactions because it may be unclear to the user that a grasp has occurred. Providing alternative feedback, such as visual or audio cues, may lead to faster and more precise interactions, but might also affect user preference and perceived ownership of the virtual hands. In this study, we test four feedback conditions for virtual grasping: three provide feedback when a grasp or release occurs (visual, audio, or both), and one provides no feedback for these events. We analyze the effect of each feedback condition on interaction performance, measure its effect on perceived ownership of the virtual hands, and gauge user preference. In an experiment, users performed a pick-and-place task under each feedback condition. We found that audio feedback for grasping is preferred over visual feedback even though it appears to decrease grasping performance, and we found little to no difference in ownership across conditions.
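
    As an illustration of how the four conditions might be organized in code (the paper does not give its implementation; the engine hooks here are stand-in print statements), a minimal Python sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeedbackCondition:
    visual: bool
    audio: bool

# The study's four conditions: cues fired on grasp/release events.
CONDITIONS = {
    "none":   FeedbackCondition(visual=False, audio=False),
    "visual": FeedbackCondition(visual=True,  audio=False),
    "audio":  FeedbackCondition(visual=False, audio=True),
    "both":   FeedbackCondition(visual=True,  audio=True),
}

def on_grasp_event(event: str, cond: FeedbackCondition) -> None:
    """Fire whichever cues the current condition enables; prints stand
    in for real engine calls such as tinting the hand or playing a sound."""
    if cond.visual:
        print(f"[visual] flash hand on {event}")
    if cond.audio:
        print(f"[audio] play click on {event}")

on_grasp_event("grasp", CONDITIONS["audio"])
```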
  3. Trust calibration poses a significant challenge in the interaction between drivers and automated vehicles (AVs) in the context of human-automation collaboration. To calibrate trust effectively, it is crucial to measure drivers’ trust levels accurately in real time, allowing for timely interventions or adjustments in the automated driving. One viable approach employs machine learning models and physiological measures to model dynamic changes in trust. This study introduces a technique that leverages machine learning models to predict drivers’ real-time dynamic trust in conditional AVs from physiological measurements. We conducted the study in a driving simulator in which participants were asked to take over control from automated driving under three conditions: a control condition, a false-alarm condition, and a miss condition. Each condition had eight takeover requests (TORs) in different scenarios. Drivers’ physiological measures were recorded during the experiment, including galvanic skin response (GSR), heart rate (HR) indices, and eye-tracking metrics. Among five machine learning models, we found that eXtreme Gradient Boosting (XGBoost) performed best, predicting drivers’ trust in real time with an F1-score of 89.1%, compared to 84.5% for a K-nearest-neighbor baseline. Our findings offer practical guidance for designing an in-vehicle trust-monitoring system that calibrates drivers’ trust and facilitates driver-AV interaction in real time.
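
    A minimal sketch of this kind of modeling pipeline using the xgboost and scikit-learn libraries; the features, labels, and hyperparameters below are synthetic stand-ins, since the study's actual feature engineering and trust labels are not given here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for GSR, heart-rate, and eye-tracking features.
X = rng.normal(size=(800, 6))
# Hypothetical binary trust label (trust vs. distrust) per takeover event.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=800) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, model.predict(X_te)))
```

    In a real system, the feature matrix would be built from windowed physiological signals aligned to each takeover request rather than random draws.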
  4. Accessible pedestrian signals were proposed as a means of achieving, for the visually impaired, the level of service set forth by the Americans with Disabilities Act. A major issue with existing accessible pedestrian signals is their failure to deliver adequate crossing information to visually impaired pedestrians. This article presents a mobile accessible pedestrian signal application, Virtual Guide Dog. By integrating intersection information with the onboard sensors of modern smartphones (e.g., GPS, compass, accelerometer, and gyroscope), the Virtual Guide Dog application can notify visually impaired users of (1) their proximity to an intersection and (2) the street information needed for crossing. Through a screen-tapping interface, Virtual Guide Dog can remotely place a pedestrian crossing call to the signal controller without the need for a pushbutton. In addition, Virtual Guide Dog announces the start of a crossing phase using text-to-speech. A proof-of-concept test showed that Virtual Guide Dog keeps users informed of the remaining distance as they approach the intersection. It also found that GPS-only positioning produced greater distance deviation than positioning that combines GPS with cellular data.
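
    A minimal sketch of the intersection-proximity check such an application might run on incoming GPS fixes; the coordinates, notification radius, and function names are hypothetical, as the article does not describe its exact positioning logic.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))

INTERSECTION = (40.4433, -79.9436)  # hypothetical intersection location
NOTIFY_RADIUS_M = 30.0              # announce when within ~30 m

def update(user_lat, user_lon):
    """Return the message a text-to-speech layer would speak for this fix."""
    d = haversine_m(user_lat, user_lon, *INTERSECTION)
    if d <= NOTIFY_RADIUS_M:
        return f"Approaching intersection: {d:.0f} m remaining"
    return f"{d:.0f} m to intersection"

print(update(40.4434, -79.9437))  # ~14 m away: triggers the announcement
```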
  5. Microassembly systems utilizing precision robotics have long been used to realize three-dimensional microstructures such as microsystems and microrobots. Prior to assembly, microscale components are fabricated using micro-electromechanical-system (MEMS) technology. The microassembly system then directs a microgripper through a series of automated or human-controlled pick-and-place operations. In this paper, we describe a novel custom microassembly system, named NEXUS, that can be used to prototype MEMS microrobots. The NEXUS integrates multi-degree-of-freedom (DOF) precision positioners, microscope computer vision, and microscale process tools such as a microgripper and a vacuum tip. A semi-autonomous human–machine interface (HMI) was programmed to allow the operator to interact with the microassembly system. The NEXUS human–machine interface includes multiple functions, such as positioning, target detection, visual servoing, and inspection. The microassembly system's HMI was used by operators to assemble various three-dimensional microrobots, such as the Solarpede, a novel light-powered stick-and-slip mobile microcrawler. Experimental results are reported to evaluate the system's semi-autonomous capabilities in terms of assembly rate and yield and to compare them with purely teleoperated assembly performance. Results show that the semi-automated capabilities of the microassembly system's HMI offer a more consistent assembly rate of microrobot components and are less reliant on the operator's experience and skill.
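
    As a rough illustration of the visual-servoing function mentioned above, here is a minimal proportional image-based servo loop in Python; the gain, pixel-to-micrometer scale, and positions are invented for the sketch and do not reflect NEXUS's actual control law.

```python
import numpy as np

def servo_step(error_px, gain=0.4, px_to_um=2.0):
    """Map the pixel error between gripper and target into a stage move (µm)."""
    return gain * px_to_um * np.asarray(error_px, dtype=float)

gripper_px = np.array([120.0, -45.0])  # detected gripper position in the image
target_px = np.array([0.0, 0.0])       # assembly site in the image

for _ in range(50):
    error = target_px - gripper_px
    if np.linalg.norm(error) < 1.0:    # within 1 px: done
        break
    move_um = servo_step(error)
    gripper_px += move_um / 2.0        # stage motion mapped back to pixels (1/px_to_um)

print("final error (px):", np.linalg.norm(target_px - gripper_px))
```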