

Title: A baseline experiment for the effect of virtual immersion on vehicle passenger risk perception
The goal of this study was to evaluate driver risk behavior in response to changes in risk perception inputs, specifically the effect of augmented visual representation technologies. The experiment was conducted for a purely real driving scenario, establishing a baseline against which future augmented visual representation scenarios can be compared. Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) simulation technologies have improved rapidly over the last three decades to the point where, today, they are widely used and more heavily relied upon than before, particularly in training, research, and design. This utilization has proven simulation technologies to be a versatile and powerful tool. Virtual immersion, however, introduces a layer of abstraction and safety between the participant and the designed artifact, with an associated risk compensation. Quantifying and modeling the relationship between this risk compensation and the level of virtual immersion is the greater goal of this project.
This study focuses on the first step: determining the level of risk perception in a purely real environment for a specific man-machine system (a ground vehicle) operated in a common risk scenario (traversing a curve at high speed). Specifically, passengers were asked to assess whether the vehicle speed within a constant-radius curve was perceived as comfortable. Because learning effects can influence risk perception, the experiment was split into two protocols: a latent response protocol and a learned response protocol. The latent response protocol applied to a subject's first exposure to an experimental condition: subjects in the passenger seat assessed comfort or discomfort while the vehicle was driven around a curve at a randomly chosen value among a selection of test speeds, and indicated discomfort by pressing a brake pedal instrumented to alert the driver. The learned response protocol then assessed subjects over repeated exposures, allowing them to use brake and throttle pedals to indicate whether they wanted to go slower or faster; the goal was to let subjects iterate toward their maximum comfortable speed. These pedals were likewise instrumented to alert the driver, who responded accordingly. Both protocols were repeated for a second curve with a different radius. Questionnaires administered after each trial addressed the subjective perception of risk and provided a means to substantiate the measured risk compensation behavior.
The results showed that, as expected, the latent perception of risk for a passenger traversing a curve was higher than the learned perception over successive exposures to the same curve; in other words, as subjects 'learned' a curve, they became comfortable with higher speeds. Both the latent and learned speeds provide a suitable metric for comparing future replications of this experiment at different levels of virtual immersion. Uncomfortable subject responses were found to correlate with the yaw acceleration of the vehicle, and additional discomfort was found to occur at specific locations on the curves. The yaw acceleration reflects the driver's ability to maintain a steady steering input, whereas the locations on the curve were found to correlate with variations in the lane markings and environmental cues.
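The comfort judgment in a constant-radius curve is driven largely by lateral acceleration, which grows with the square of speed. A minimal sketch of this relationship (the radii and test speeds below are illustrative assumptions, not values reported by the study):

```python
def lateral_accel(speed_mps: float, radius_m: float) -> float:
    """Steady-state lateral acceleration (m/s^2) in a constant-radius curve: a = v^2 / r."""
    return speed_mps ** 2 / radius_m

# Illustrative curve radii and test speeds (assumed, not from the experiment).
radii_m = [50.0, 100.0]
speeds_kph = [30, 40, 50, 60]

for r in radii_m:
    for v_kph in speeds_kph:
        v = v_kph / 3.6  # convert km/h to m/s
        a = lateral_accel(v, r)
        print(f"radius {r:>5.0f} m, speed {v_kph} km/h -> {a:.2f} m/s^2 ({a / 9.81:.2f} g)")
```

This illustrates why the same speed can feel comfortable on one curve and not on another, and why both protocols were repeated on a second curve with a different radius.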
Award ID(s):
1635663
NSF-PAR ID:
10090402
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Applied Human Factors and Ergonomics (AHFE 2018), "Human Factors in Transportation" Session
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Over the past decade, augmented reality (AR) developers have explored a variety of approaches to allow users to interact with the information displayed on smart glasses and head-mounted displays (HMDs). Current interaction modalities such as mid-air gestures, voice commands, or hand-held controllers provide a limited range of interactions with the virtual content. Additionally, these modalities can be exhausting, uncomfortable, obtrusive, and socially awkward. There is a need for comfortable interaction techniques for smart glasses and HMDs that do not demand visual attention. This paper presents StretchAR, wearable straps that exploit touch and stretch as input modalities to interact with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users' interactions in AR. StretchAR straps can withstand strains up to 190% while remaining sensitive to touch inputs. The straps allow the effective combination of these inputs as a mode of interaction with content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the potential implications of using StretchAR as an input modality when placed on four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can be used as an efficient and convenient input modality for smart glasses with 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch-stretch capabilities of StretchAR. Finally, we provide recommendation guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses.
  2.
    We study student experiences of social VR for remote instruction, with students attending class from home. The study evaluates student experiences when: (1) viewing remote lectures with VR headsets, (2) viewing with desktop displays, (3) presenting with VR headsets, and (4) reflecting on several weeks of VR-based class attendance. Students rated factors such as presence, social presence, simulator sickness, communication methods, avatar and application features, and tradeoffs with other remote approaches. Headset-based viewing and presenting produced higher presence than desktop viewing, but had less-clear impact on overall experience and on most social presence measures. We observed higher attentional allocation scores for headset-based presenting than for both viewing methods. For headset VR, there were strong negative correlations between simulator sickness (primarily reported as general discomfort) and ratings of co-presence, overall experience, and some other factors. This suggests that comfortable users experienced substantial benefits of headset viewing and presenting, but others did not. Based on the type of virtual environment, student ratings, and comments, reported discomfort appears related to physical ergonomic factors or technical problems. Desktop VR appears to be a good alternative for uncomfortable students, and students report that they prefer a mix of headset and desktop viewing. We additionally provide insight from students and a teacher about possible improvements for VR class technology, and we summarize student opinions comparing viewing and presenting in VR to other remote class technologies. 
  3. Stationarity perception refers to the ability to accurately perceive the surrounding visual environment as world-fixed during self-motion. Perception of stationarity depends on mechanisms that evaluate the congruence between retinal/oculomotor signals and head movement signals. In a series of psychophysical experiments, we systematically varied the congruence between retinal/oculomotor and head movement signals to find the range of visual gains that is compatible with perception of a stationary environment. On each trial, human subjects wearing a head-mounted display execute a yaw head movement and report whether the visual gain was perceived to be too slow or too fast. A psychometric fit to the data across trials reveals the visual gain most compatible with stationarity (a measure of accuracy) and the sensitivity to visual gain manipulation (a measure of precision). Across experiments, we varied 1) the spatial frequency of the visual stimulus, 2) the retinal location of the visual stimulus (central vs. peripheral), and 3) fixation behavior (scene-fixed vs. head-fixed). Stationarity perception is most precise and accurate during scene-fixed fixation. Effects of spatial frequency and retinal stimulus location become evident during head-fixed fixation, when retinal image motion is increased. Virtual reality sickness assessed using the Simulator Sickness Questionnaire covaries with perceptual performance. Decreased accuracy is associated with an increase in the nausea subscore, while decreased precision is associated with an increase in the oculomotor and disorientation subscores.
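    The psychometric fit described above can be sketched as fitting a cumulative Gaussian to the per-gain proportion of "too fast" reports, where the curve's midpoint estimates the gain most compatible with stationarity (accuracy) and its spread estimates sensitivity (precision). A minimal sketch with synthetic trial data (the gains and response proportions below are illustrative assumptions, not data from the study):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(gain, pse, slope):
        """P('too fast') modeled as a cumulative Gaussian of visual gain."""
        return norm.cdf(gain, loc=pse, scale=slope)

    # Synthetic data (illustrative): visual gains tested and the proportion
    # of 'too fast' reports observed at each gain.
    gains = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
    p_too_fast = np.array([0.05, 0.20, 0.55, 0.85, 0.95])

    (pse, slope), _ = curve_fit(psychometric, gains, p_too_fast, p0=[1.0, 0.2])
    print(f"gain most compatible with stationarity (accuracy): {pse:.2f}")
    print(f"sensitivity to gain manipulation (precision): {slope:.2f}")
    ```

    A smaller fitted slope means steeper transitions between "too slow" and "too fast" judgments, i.e., higher precision.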
  4. Objective

    This study used a virtual environment to examine how older and younger pedestrians responded to simulated augmented reality (AR) overlays that indicated the crossability of gaps in a continuous stream of traffic.

    Background

    Older adults represent a vulnerable group of pedestrians. AR has the potential to make the task of street-crossing safer and easier for older adults.

    Method

    We used an immersive virtual environment to conduct a study with age group and condition as between-subjects factors. In the control condition, older and younger participants crossed a continuous stream of traffic without simulated AR overlays. In the AR condition, older and younger participants crossed with simulated AR overlays signaling whether gaps between vehicles were safe or unsafe to cross. Participants were subsequently interviewed about their experience.

    Results

    We found that participants were more selective in their crossing decisions and took safer gaps in the AR condition as compared to the control condition. Older adult participants also reported reduced mental and physical demand in the AR condition compared to the control condition.

    Conclusion

    AR overlays that display the crossability of gaps between vehicles have the potential to make street-crossing safer and easier for older adults. Additional research is needed in more complex real-world scenarios to further examine how AR overlays impact pedestrian behavior.

    Application

    With rapid advances in autonomous vehicle and vehicle-to-pedestrian communication technologies, it is critical to study how pedestrians can be better supported. Our research provides key insights for ways to improve pedestrian safety applications using emerging technologies like AR.

     
  5.
    This work presents a novel prototype autonomous vehicle (AV) human-machine interface (HMI) in virtual reality (VR) that utilizes a human-like visual embodiment in the driver's seat of an AV to communicate AV intent to pedestrians in a crosswalk scenario. There is currently a gap in understanding the use of virtual humans in AV HMIs for pedestrian crossing despite the demonstrated efficacy of human-like interfaces in improving human-machine relationships. We conduct a 3x2 within-subjects experiment in VR using our prototype to assess the effects of a virtual human visual embodiment AV HMI on pedestrian crossing behavior and experience. In the experiment participants walk across a virtual crosswalk in front of an AV. How long they took to decide to cross and how long it took for them to reach the other side were collected, in addition to their subjective preferences and feelings of safety. Of 26 participants, 25 preferred the condition with the most anthropomorphic features. An intermediate condition where a human-like virtual driver was present but did not exhibit any behaviors was least preferred and also had a significant effect on time to decide. This work contributes the first empirical work on using human-like visual embodiments for AV HMIs.