Title: Autonomous Vehicle Visual Embodiment for Pedestrian Interactions in Crossing Scenarios: Virtual Drivers in AVs for Pedestrian Crossing
This work presents a novel prototype autonomous vehicle (AV) human-machine interface (HMI) in virtual reality (VR) that uses a human-like visual embodiment in the driver’s seat of an AV to communicate AV intent to pedestrians in a crosswalk scenario. Despite the demonstrated efficacy of human-like interfaces in improving human-machine relationships, there is a gap in understanding the use of virtual humans in AV HMIs for pedestrian crossing. We conducted a 3x2 within-subjects experiment in VR using our prototype to assess the effects of a virtual-human visual embodiment AV HMI on pedestrian crossing behavior and experience. In the experiment, participants walked across a virtual crosswalk in front of an AV; we recorded how long they took to decide to cross and how long they took to reach the other side, along with their subjective preferences and feelings of safety. Of 26 participants, 25 preferred the condition with the most anthropomorphic features. An intermediate condition, in which a human-like virtual driver was present but exhibited no behaviors, was least preferred and also had a significant effect on time to decide. This work contributes the first empirical study of human-like visual embodiments for AV HMIs.
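The abstract does not include the study's logging code; the sketch below shows one way the two timing measures could be captured per trial. All event-handler names here (`on_leave_curb`, `on_reach_far_side`) are hypothetical, not taken from the study.

```python
import time

class CrossingTrial:
    """Captures the study's two behavioral measures for one trial:
    time to decide (trial start -> stepping off the curb) and
    time to cross (trial start -> reaching the far side)."""

    def __init__(self):
        self.t_start = None
        self.time_to_decide = None
        self.time_to_cross = None

    def start(self):
        self.t_start = time.monotonic()

    def on_leave_curb(self):
        # The first step into the crosswalk marks the crossing decision.
        if self.time_to_decide is None:
            self.time_to_decide = time.monotonic() - self.t_start

    def on_reach_far_side(self):
        self.time_to_cross = time.monotonic() - self.t_start
```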
Award ID(s):
1800961
PAR ID:
10275677
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objective

    This study used a virtual environment to examine how older and younger pedestrians responded to simulated augmented reality (AR) overlays that indicated the crossability of gaps in a continuous stream of traffic.

    Background

    Older adults represent a vulnerable group of pedestrians. AR has the potential to make the task of street-crossing safer and easier for older adults.

    Method

    We used an immersive virtual environment to conduct a study with age group and condition as between-subjects factors. In the control condition, older and younger participants crossed a continuous stream of traffic without simulated AR overlays. In the AR condition, older and younger participants crossed with simulated AR overlays signaling whether gaps between vehicles were safe or unsafe to cross. Participants were subsequently interviewed about their experience.

    Results

    We found that participants were more selective in their crossing decisions and took safer gaps in the AR condition as compared to the control condition. Older adult participants also reported reduced mental and physical demand in the AR condition compared to the control condition.

    Conclusion

    AR overlays that display the crossability of gaps between vehicles have the potential to make street-crossing safer and easier for older adults. Additional research is needed in more complex real-world scenarios to further examine how AR overlays impact pedestrian behavior.

    Application

    With rapid advances in autonomous vehicle and vehicle-to-pedestrian communication technologies, it is critical to study how pedestrians can be better supported. Our research provides key insights for ways to improve pedestrian safety applications using emerging technologies like AR.
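    The abstract does not state how crossability was determined for the overlays; a common criterion compares the temporal gap between vehicles to the pedestrian's expected crossing time plus a safety margin. A minimal sketch of that criterion (the 1.5 s margin is an illustrative assumption, not a value from the study):

```python
def gap_is_crossable(gap_time_s: float,
                     street_width_m: float,
                     walking_speed_mps: float,
                     safety_margin_s: float = 1.5) -> bool:
    """Flag a gap as safe when the time until the next vehicle arrives
    exceeds the time needed to cross the street, plus a margin."""
    time_to_cross_s = street_width_m / walking_speed_mps
    return gap_time_s > time_to_cross_s + safety_margin_s

# e.g. an 8 s gap, a 7 m street, 1.4 m/s walking speed: 8 > 7/1.4 + 1.5 = 6.5
print(gap_is_crossable(8.0, 7.0, 1.4))  # True
```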

     
  2. The development of autonomous vehicles presents significant challenges, particularly in predicting pedestrian behaviors. This study addresses the critical issue of uncertainty in such predictions by distinguishing between aleatoric (intrinsic randomness) and epistemic (knowledge limitations) uncertainties. Using evidential deep learning (EDL) techniques, we analyze these uncertainties in two key pedestrian behaviors: road crossing and short-term movement prediction. Our findings indicate that epistemic uncertainty is consistently higher than aleatoric uncertainty, highlighting the greater difficulty in predicting pedestrian actions due to limited information. Additionally, both types of uncertainties are more pronounced in crossing predictions compared to destination predictions, underscoring the complexity of future-oriented behaviors. These insights emphasize the necessity for AV algorithms to account for different levels of behavior-related uncertainties, ultimately enhancing the safety and efficiency of autonomous driving systems. This research contributes to a deeper understanding of pedestrian behavior prediction and lays the groundwork for future studies to explore scenario-specific uncertainty factors.
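    The abstract does not give the exact formulation, but in the standard evidential deep learning setup a classifier outputs non-negative evidence that parameterizes a Dirichlet distribution, and predictive entropy splits into an expected-entropy term (aleatoric) and a mutual-information term (epistemic). A sketch of that standard decomposition, assuming a binary crossing / not-crossing output:

```python
import numpy as np
from scipy.special import digamma

def edl_uncertainty(evidence):
    """Split predictive uncertainty for an evidential classifier that
    outputs non-negative evidence per class (Dirichlet: alpha = e + 1)."""
    alpha = np.asarray(evidence, dtype=float) + 1.0
    s = alpha.sum()                       # Dirichlet strength
    p = alpha / s                         # expected class probabilities

    total = -np.sum(p * np.log(p))        # entropy of the mean prediction
    aleatoric = -np.sum(p * (digamma(alpha + 1.0) - digamma(s + 1.0)))
    epistemic = total - aleatoric         # mutual information, >= 0
    return aleatoric, epistemic

print(edl_uncertainty([0.5, 0.2]))    # sparse evidence: sizable epistemic share
print(edl_uncertainty([50.0, 20.0]))  # abundant evidence: epistemic term shrinks
```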

     
  3. Accessible pedestrian signals were proposed as a means to provide the visually impaired with the level of service set forth by the Americans with Disabilities Act. One major issue with existing accessible pedestrian signals is their failure to deliver adequate crossing information to the visually impaired. This article presents a mobile accessible pedestrian signal application, Virtual Guide Dog. By integrating intersection information with the onboard sensors of modern smartphones (e.g. GPS, compass, accelerometer, and gyroscope), the Virtual Guide Dog application can notify visually impaired users (1) that they are in close proximity to an intersection and (2) of the street information needed for crossing. Through a screen-tapping interface, Virtual Guide Dog can remotely place a pedestrian crossing call to the signal controller, without the need for a pushbutton. In addition, Virtual Guide Dog announces the start of a crossing phase using text-to-speech technology. A proof-of-concept test showed that Virtual Guide Dog keeps users informed of the remaining distance as they approach the intersection. It also found that the GPS-only mode exhibited greater distance deviation than the mode operating jointly with both GPS and cellular positioning.
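    The proximity notifications rest on comparing the phone's GPS fix against known intersection coordinates. A minimal sketch of that distance check (function names and announcement thresholds are ours for illustration, not taken from the Virtual Guide Dog implementation):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def remaining_distance_message(user_fix, corner_fix, thresholds_m=(10.0, 20.0, 30.0)):
    """Return a text-to-speech announcement for the tightest threshold reached."""
    d = haversine_m(user_fix[0], user_fix[1], corner_fix[0], corner_fix[1])
    for t in sorted(thresholds_m):
        if d <= t:
            return f"Approaching intersection: about {t:.0f} meters remaining"
    return None  # too far away to announce
```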
  4. In this work, we investigate the influence that audio and visual feedback have on a manipulation task in virtual reality (VR). Without the tactile feedback of a controller, grasping virtual objects using one’s hands can result in slower interactions, because it may be unclear to the user that a grasp has occurred. Providing alternative feedback, such as visual or audio cues, may lead to faster and more precise interactions, but might also affect user preference and perceived ownership of the virtual hands. In this study, we test four feedback conditions for virtual grasping: three provide feedback when a grasp or release occurs (visual, audio, or both), and one provides no feedback for these events. We analyze the effect each condition has on interaction performance, measure its effect on perceived ownership of the virtual hands, and gauge user preference. In an experiment, users perform a pick-and-place task under each feedback condition. We found that audio feedback for grasping is preferred over visual feedback, even though it appears to decrease grasping performance, and that there were little to no differences in ownership between conditions.
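    As a concrete reading of the four conditions, a grasp or release event might dispatch cues as sketched below. This is a hypothetical illustration; the study's actual VR implementation is not described in the abstract, and the callbacks are placeholders.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackCondition:
    visual: bool  # e.g. flash or tint the virtual hand on grasp/release
    audio: bool   # e.g. play a short click on grasp/release

CONDITIONS = {
    "none":   FeedbackCondition(visual=False, audio=False),
    "visual": FeedbackCondition(visual=True,  audio=False),
    "audio":  FeedbackCondition(visual=False, audio=True),
    "both":   FeedbackCondition(visual=True,  audio=True),
}

def on_grasp_or_release(cond: FeedbackCondition,
                        flash_hand: Callable[[], None],
                        play_click: Callable[[], None]) -> None:
    """Fire whichever cues the active condition enables."""
    if cond.visual:
        flash_hand()
    if cond.audio:
        play_click()
```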
  5. Patterns of crowd behavior are believed to result from local interactions between pedestrians. Many studies have investigated the local rules of interaction, such as steering, avoiding, and alignment, but how pedestrians control their walking speed when following another remains unsettled. Most pedestrian models take the physical speed and distance of others as input. The present study compares such “omniscient” models with “visual” models based on optical variables. We experimentally tested eight speed control models from the pedestrian- and car-following literature. Walking participants were asked to follow a leader (a moving pole) in a virtual environment while the leader’s speed was perturbed during the trial. In Experiment 1, the leader’s initial distance was varied. Each model was fit to the data and compared. The results showed that visual models based on optical expansion (θ̇) had the smallest root mean square error in speed across conditions, whereas other models exhibited increased error at longer distances. In Experiment 2, the leader’s size (pole diameter) was varied. A model based on the relative rate of expansion (θ̇/θ) performed better than the expansion-rate model (θ̇) because it is less sensitive to leader size. Together, the results imply that pedestrians directly control their walking speed in one-dimensional following using the relative rate of expansion, rather than the distal speed and distance of the leader.
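    To make the visual models concrete: a leader of physical width w at distance d subtends the optical angle θ = 2·atan(w / 2d), and the candidate laws drive acceleration from the expansion rate θ̇ or the relative rate θ̇/θ. The sketch below follows the general form of such models from the car-following literature; the gain is a free parameter of the kind the study fit to data, and the paper's exact equations are not given in the abstract.

```python
import math

def optical_angle(width_m: float, distance_m: float) -> float:
    """Visual angle (rad) subtended by a leader of a given physical width."""
    return 2.0 * math.atan(width_m / (2.0 * distance_m))

def accel_expansion(theta_dot: float, gain: float) -> float:
    """Expansion-rate model: decelerate as the leader's image expands."""
    return -gain * theta_dot

def accel_relative_expansion(theta: float, theta_dot: float, gain: float) -> float:
    """Relative-rate model: for small angles, theta and theta_dot both scale
    with leader width, so theta_dot / theta is insensitive to leader size."""
    return -gain * (theta_dot / theta)
```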