The development of autonomous vehicles presents significant challenges, particularly in predicting pedestrian behaviors. This study addresses the critical issue of uncertainty in such predictions by distinguishing between aleatoric (intrinsic randomness) and epistemic (knowledge limitations) uncertainties. Using evidential deep learning (EDL) techniques, we analyze these uncertainties in two key pedestrian behaviors: road crossing and short-term movement prediction. Our findings indicate that epistemic uncertainty is consistently higher than aleatoric uncertainty, highlighting the greater difficulty in predicting pedestrian actions due to limited information. Additionally, both types of uncertainties are more pronounced in crossing predictions compared to destination predictions, underscoring the complexity of future-oriented behaviors. These insights emphasize the necessity for AV algorithms to account for different levels of behavior-related uncertainties, ultimately enhancing the safety and efficiency of autonomous driving systems. This research contributes to a deeper understanding of pedestrian behavior prediction and lays the groundwork for future studies to explore scenario-specific uncertainty factors.
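The aleatoric/epistemic distinction above can be made concrete with the standard Dirichlet-based decomposition used in evidential classification. The sketch below is illustrative only and assumes an EDL head that outputs non-negative evidence for a binary crossing/not-crossing decision; it is not the authors' implementation, and the evidence values are invented.

```python
import numpy as np
from scipy.special import digamma

def edl_uncertainties(evidence):
    """Split predictive uncertainty from an evidential (Dirichlet) output.

    `evidence` is a length-K array of non-negative evidence values, e.g. the
    softplus outputs of a hypothetical evidential classification head.
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0   # Dirichlet concentration
    strength = alpha.sum()                            # total evidence + K
    p = alpha / strength                              # expected class probabilities

    # Total predictive uncertainty: entropy of the expected distribution.
    total = -np.sum(p * np.log(p + 1e-12))

    # Aleatoric part: expected entropy of the categorical under the Dirichlet.
    aleatoric = -np.sum(p * (digamma(alpha + 1.0) - digamma(strength + 1.0)))

    # Epistemic part: mutual information between the label and the parameters.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Invented evidence for a crossing / not-crossing decision.
print(edl_uncertainties([2.0, 1.0]))    # weak evidence -> larger epistemic share
print(edl_uncertainties([20.0, 10.0]))  # more evidence -> epistemic shrinks
```

As evidence accumulates, the epistemic (mutual-information) term shrinks toward zero, while the aleatoric term reflects the inherent overlap between the classes.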
Autonomous Vehicle Visual Embodiment for Pedestrian Interactions in Crossing Scenarios: Virtual Drivers in AVs for Pedestrian Crossing
This work presents a novel prototype autonomous vehicle (AV) human-machine interface (HMI) in virtual reality (VR) that utilizes a human-like visual embodiment in the driver’s seat of an AV to communicate AV intent to pedestrians in a crosswalk scenario. There is currently a gap in understanding the use of virtual humans in AV HMIs for pedestrian crossing, despite the demonstrated efficacy of human-like interfaces in improving human-machine relationships. We conduct a 3×2 within-subjects experiment in VR using our prototype to assess the effects of a virtual human visual embodiment AV HMI on pedestrian crossing behavior and experience. In the experiment, participants walked across a virtual crosswalk in front of an AV. We collected how long participants took to decide to cross and how long they took to reach the other side, in addition to their subjective preferences and feelings of safety. Of 26 participants, 25 preferred the condition with the most anthropomorphic features. An intermediate condition, in which a human-like virtual driver was present but did not exhibit any behaviors, was least preferred and also had a significant effect on time to decide. This work contributes the first empirical work on using human-like visual embodiments for AV HMIs.
- Award ID(s): 1800961
- PAR ID: 10275677
- Date Published:
- Journal Name: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
- Page Range / eLocation ID: 1 to 7
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The accessible pedestrian signal was proposed as a means to achieve, for the visually impaired, the level of service set forth by the Americans with Disabilities Act. One of the major issues with existing accessible pedestrian signals is their failure to deliver adequate crossing information to the visually impaired. This article presents a mobile-based accessible pedestrian signal application, Virtual Guide Dog. Integrating intersection information with the onboard sensors of modern smartphones (e.g. GPS, compass, accelerometer, and gyroscope), the Virtual Guide Dog application can notify the visually impaired of (1) the close proximity of an intersection and (2) the street information for crossing. Through a screen-tapping interface, Virtual Guide Dog can remotely place a pedestrian crossing call to the signal controller without the need for a pushbutton. In addition, Virtual Guide Dog announces the start of a crossing phase using text-to-speech technology. A proof-of-concept test shows that Virtual Guide Dog keeps users informed of the remaining distance as they approach the intersection. It also found that the GPS-only mode shows greater distance deviation than the mode operating jointly with GPS and cellular positioning. A hypothetical sketch of the proximity-notification logic appears after this list.
-
The immersion of virtual reality (VR) can impact user perceptions in numerous ways, including racial bias and embodied experiences. These effects are often limited to head-mounted displays (HMDs) and other immersive technologies that may not be accessible to the general population. This paper investigates racial bias and embodiment on a less immersive but more accessible medium: desktop VR. Participants (n = 158) completed a desktop simulation in which they embodied a virtual avatar and interacted with virtual humans, to determine whether desktop embodiment is induced and whether there is a resulting effect on racial bias. Our results indicate that desktop embodiment can be induced at low levels, as measured by an embodiment questionnaire. Furthermore, one’s implicit bias may actually influence embodiment, and the experience and perceptions of a desktop VR simulation can be improved through embodied avatars. We discuss these findings and their implications in the context of stereotype activation and existing literature in embodiment.
-
Patterns of crowd behavior are believed to result from local interactions between pedestrians. Many studies have investigated the local rules of interaction, such as steering, avoiding, and alignment, but how pedestrians control their walking speed when following another remains unsettled. Most pedestrian models take the physical speed and distance of others as input. The present study compares such “omniscient” models with “visual” models based on optical variables. We experimentally tested eight speed control models from the pedestrian- and car-following literature. Walking participants were asked to follow a leader (a moving pole) in a virtual environment, while the leader’s speed was perturbed during the trial. In Experiment 1, the leader’s initial distance was varied. Each model was fit to the data and compared. The results showed that visual models based on optical expansion (θ̇) had the smallest root mean square error in speed across conditions, whereas other models exhibited increased error at longer distances. In Experiment 2, the leader’s size (pole diameter) was varied. A model based on the relative rate of expansion (θ̇/θ) performed better than the expansion rate model (θ̇), because it is less sensitive to leader size. Together, the results imply that pedestrians directly control their walking speed in one-dimensional following using the relative rate of expansion, rather than the distal speed and distance of the leader. A sketch of such a relative-rate-of-expansion control law appears after this list.
-
In this work, we investigate the influence that audio and visual feedback have on a manipulation task in virtual reality (VR). Without the tactile feedback of a controller, grasping virtual objects using one’s hands can result in slower interactions because it may be unclear to the user that a grasp has occurred. Providing alternative feedback, such as visual or audio cues, may lead to faster and more precise interactions, but might also affect user preference and perceived ownership of the virtual hands. In this study, we test four feedback conditions for virtual grasping. Three of the conditions provide feedback for when a grasp or release occurs, either visual, audio, or both, and one provides no feedback for these occurrences. We analyze the effect each feedback condition has on interaction performance, measure their effect on the perceived ownership of the virtual hands, and gauge user preference. In an experiment, users perform a pick-and-place task with each feedback condition. We found that audio feedback for grasping is preferred over visual feedback even though it seems to decrease grasping performance, and found that there were little to no differences in ownership between our conditions.
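For the Virtual Guide Dog entry above, the proximity-notification step can be pictured as a simple distance check against a stored intersection location, with the result spoken aloud. This is a hypothetical sketch, not the app's code: the intersection coordinates, the 30 m threshold, and the `speak` callback stand in for the intersection database, tuning, and text-to-speech engine the paper describes.

```python
import math

# Hypothetical intersection record: name and WGS-84 coordinates (invented values).
INTERSECTION = {"name": "Main St & 3rd Ave", "lat": 40.4237, "lon": -86.9212}
NOTIFY_RADIUS_M = 30.0  # assumed "close proximity" threshold

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def on_gps_fix(lat, lon, speak):
    """Announce remaining distance; within the radius, also name the crossing."""
    d = haversine_m(lat, lon, INTERSECTION["lat"], INTERSECTION["lon"])
    if d <= NOTIFY_RADIUS_M:
        speak(f"Approaching {INTERSECTION['name']}, {d:.0f} meters to the crosswalk.")
    else:
        speak(f"{d:.0f} meters to the next intersection.")

# Example with print() standing in for the platform's text-to-speech engine.
on_gps_fix(40.4239, -86.9210, speak=print)
```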
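For the pedestrian-following entry above, the relative-rate-of-expansion idea can be sketched as a control law in which the follower's acceleration is proportional to −θ̇/θ, where θ is the visual angle subtended by the leader. The gain, leader width, and simulation details below are assumed for illustration; this is not the fitted model from the study.

```python
import math

def visual_angle(width, distance):
    """Optical angle (radians) subtended by a leader of given physical width."""
    return 2.0 * math.atan(width / (2.0 * distance))

def simulate_following(leader_speed, v0=1.2, gap0=3.0, width=0.3, b=2.0,
                       dt=0.05, duration=20.0):
    """One-dimensional following with dv/dt = -b * (theta_dot / theta).

    `leader_speed(t)` gives the leader's speed at time t; gain `b`, leader
    `width`, and the initial gap are illustrative values, not fitted ones.
    """
    x_follower, x_leader, v = 0.0, gap0, v0
    theta_prev = visual_angle(width, x_leader - x_follower)
    t, history = 0.0, []
    while t < duration:
        x_leader += leader_speed(t) * dt
        x_follower += v * dt
        theta = visual_angle(width, max(x_leader - x_follower, 0.1))
        theta_dot = (theta - theta_prev) / dt
        v = max(v - b * (theta_dot / theta) * dt, 0.0)  # expansion -> slow down
        history.append((round(t, 2), v, x_leader - x_follower))
        theta_prev, t = theta, t + dt
    return history

# Leader walks at 1.2 m/s, then is perturbed to 0.8 m/s mid-trial.
trace = simulate_following(lambda t: 1.2 if t < 10.0 else 0.8)
print(trace[-1])  # final (time, follower speed, gap)
```

Because θ̇/θ approximates the relative closing rate −ḋ/d independently of the leader's physical size, the same control law behaves consistently when the pole diameter changes, which is the size insensitivity the abstract credits for the model's better fit in Experiment 2.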