Title: Assessing vignetting as a means to reduce VR sickness during amplified head rotations
Redirected and amplified head movements have the potential to provide more natural interaction with virtual environments (VEs) than controller-based input, which causes large discrepancies between visual and vestibular self-motion cues and leads to increased VR sickness. However, such amplified head movements may also exacerbate VR sickness symptoms compared to no amplification. Several general methods have been introduced to reduce VR sickness for controller-based input inside a VE, including a popular vignetting method that gradually reduces the field of view. In this paper, we investigate the use of vignetting to reduce VR sickness when using amplified head rotations instead of controller-based input. We also investigate whether the induced VR sickness is a result of the user’s head acceleration or velocity by introducing two different modes of vignetting, one triggered by acceleration and the other by velocity. Our dependent measures were pre- and post-exposure VR sickness questionnaires as well as estimated discomfort levels assessed each minute of the experiment. Our results compare a baseline condition without vignetting against the two vignetting methods, and generally indicate that the vignetting methods did not succeed in reducing VR sickness for most of the participants and instead led to a significant increase. We discuss the results and potential explanations of our findings.
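To make the two trigger modes concrete, the following is a minimal sketch, not the authors' implementation, of how a per-frame vignette strength in [0, 1] could be derived from sampled head yaw angles; the function names and threshold values are illustrative assumptions.

    import numpy as np

    def vignette_strength(value, threshold, max_value):
        # Map a head-motion magnitude onto a vignette strength in [0, 1]:
        # no restriction below `threshold`, full restriction at `max_value`.
        return float(np.clip((value - threshold) / (max_value - threshold), 0.0, 1.0))

    def velocity_triggered(yaw_deg, dt, threshold=30.0, max_value=120.0):
        # Vignette driven by head angular velocity (deg/s).
        velocity = np.abs(np.gradient(yaw_deg, dt))
        return [vignette_strength(v, threshold, max_value) for v in velocity]

    def acceleration_triggered(yaw_deg, dt, threshold=60.0, max_value=400.0):
        # Vignette driven by head angular acceleration (deg/s^2).
        velocity = np.gradient(yaw_deg, dt)
        acceleration = np.abs(np.gradient(velocity, dt))
        return [vignette_strength(a, threshold, max_value) for a in acceleration]

A renderer would then shrink the visible field of view in proportion to the returned strength each frame.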
Award ID(s): 1564065
NSF-PAR ID: 10105867
Author(s) / Creator(s):
Date Published:
Journal Name: ACM Symposium on Applied Perception 2018
Page Range / eLocation ID: 1 to 8
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This literature review examines the existing research into cybersickness reduction with regard to head-mounted display use. Cybersickness refers to a collection of negative symptoms sometimes experienced as a result of being immersed in a virtual environment, such as nausea, dizziness, or eye strain. These symptoms can prevent individuals from using virtual reality (VR) technologies, so discovering new methods of reducing them is critical. Our objective in this literature review is to provide a better picture of which cybersickness reduction techniques exist, the quantity of research demonstrating their effectiveness, and the virtual scenes in which testing has taken place. This will help direct researchers towards promising avenues and illuminate gaps in the literature. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, we obtained a batch of 1,055 papers through the use of software aids. We selected 88 papers that examine potential cybersickness reduction approaches. Our acceptance criteria required that papers examined malleable conditions that could conceivably be modified for everyday use, examined techniques in conjunction with head-mounted displays, and compared cybersickness levels between two or more user conditions. These papers were sorted into categories based on their general approach to combating cybersickness, and labeled based on the presence of statistically significant results, the use of virtual vehicles, the level of visual realism, and the virtual scene contents used in the evaluation of their effectiveness. In doing this we have created a snapshot of the literature to date so that researchers may better understand which approaches are being researched and the types of virtual experiences used in their evaluation. Keywords: virtual reality; cybersickness; simulator sickness; visually induced motion sickness reduction; systematic review; head-mounted display.
  2. Abstract

    Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the different gaze event times. HDCA was also used to combine EEG, pupil, and dwell time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals were different across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that the hybrid classifier that integrates EEG, pupil, and dwell time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions. Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but nevertheless can be captured and integrated to classify target vs. distractor objects to which the human subject orients.
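    As a rough illustration of the final, feature-combining step, the sketch below trains a plain logistic regression on synthetic per-fixation EEG, pupil, and dwell-time features and reports a cross-validated AUC; it is only a stand-in for HDCA's hierarchical training, and all data values are synthetic placeholders.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic per-fixation features standing in for the real recordings:
        # an EEG discriminant score, a pupil discriminant score, and dwell time (s).
        rng = np.random.default_rng(0)
        n = 200
        X = np.column_stack([
            rng.normal(size=n),            # EEG component score
            rng.normal(size=n),            # pupil component score
            rng.exponential(0.4, size=n),  # dwell time
        ])
        latent = X @ np.array([0.8, 0.5, 1.2]) + rng.normal(scale=0.8, size=n)
        y = (latent > np.median(latent)).astype(int)  # 1 = target, 0 = distractor

        # A single logistic regression stands in for HDCA's combining stage.
        auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
        print("hybrid classifier AUC:", round(auc.mean(), 2))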

     
  3. The goal of this research is to provide much-needed empirical data on how the fidelity of popular hand-gesture-tracking-based pointing metaphors versus commodity controller-based input affects the efficiency and speed-accuracy tradeoff of users’ spatial selection in personal-space interactions in VR. We conducted two experiments in which participants selected spherical targets arranged in a circle in personal space, or near-field within their maximum arm’s reach distance, in VR. Both experiments required participants to select the targets with either a VR controller or with their dominant hand’s index finger, which was tracked with one of two popular contemporary tracking methods. In the first experiment, the targets were arranged in a flat circle in accordance with the ISO 9241-9 Fitts’ law standard, and the simulation selected random combinations of 3 target amplitudes and 3 target widths. Targets were placed centered around the users’ eye level, and the arrangement was placed at the 60%, 75%, or 90% depth plane of the users’ maximum arm’s reach. In experiment 2, the targets varied in depth randomly from one depth plane to another within the same configuration of 13 targets within a trial set, which resembled a button selection task in hierarchical menus at differing depth planes in the near-field. The study was conducted using the HTC Vive head-mounted display, with either a VR controller (HTC Vive), low-fidelity virtual pointing (Leap Motion), or high-fidelity virtual pointing (tracked VR glove) as the input condition. Our results revealed that low-fidelity pointing performed worse than both high-fidelity pointing and the VR controller. Overall, target selection performance was found to be worse in depth planes closer to the maximum arm’s reach, as compared to middle and nearer distances.
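    For reference, the ISO 9241-9 protocol mentioned above quantifies pointing difficulty with Fitts’ index of difficulty; the short sketch below uses the standard Shannon formulation with illustrative numbers rather than the study’s actual amplitudes, widths, or timings.

        import math

        def index_of_difficulty(amplitude, width):
            # Shannon formulation used by ISO 9241-9: ID = log2(A / W + 1), in bits.
            return math.log2(amplitude / width + 1)

        def throughput(amplitude, width, movement_time):
            # Throughput in bits/s for a single amplitude/width condition.
            return index_of_difficulty(amplitude, width) / movement_time

        # Illustrative example: 0.40 m amplitude, 0.05 m target width, 0.9 s selection.
        print(round(index_of_difficulty(0.40, 0.05), 2), "bits")
        print(round(throughput(0.40, 0.05, 0.9), 2), "bits/s")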
  4.
    Third-person is a popular perspective for video games, but virtual reality (VR) seems to be primarily experienced from a first-person point of view (POV). While a first-person POV generally offers the highest presence, a third-person POV allows users to see their avatar, which allows for a better bond, and the higher vantage point generally improves spatial awareness and navigation. Third-person locomotion is generally implemented using a controller or keyboard, with users often sitting down, an approach that is considered to offer low presence and embodiment. We present a novel third-person locomotion method that enables high avatar embodiment by integrating skeletal tracking with head-tilt-based input to enable omnidirectional navigation beyond the confines of the available tracking space. By interpreting movement relative to the avatar, the user always keeps facing the camera, which optimizes skeletal tracking and keeps the required instrumentation minimal (one depth camera). A user study compared the performance, usability, VR sickness incidence, and avatar embodiment of our method to using a controller for a navigation task that involves interacting with objects. Though the controller offered higher performance and usability, our locomotion method offered significantly higher avatar embodiment.
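    A minimal sketch of the general idea behind head-tilt-driven, avatar-relative locomotion follows; it is not the paper’s implementation, and the dead zone, tilt range, and speed constants are assumptions.

        import math

        def head_tilt_to_velocity(pitch_deg, roll_deg, avatar_yaw_deg,
                                  dead_zone=5.0, max_tilt=25.0, max_speed=2.0):
            # Convert forward/backward pitch and sideways roll (degrees) into a
            # ground-plane velocity expressed in world coordinates, relative to
            # the avatar's facing direction, so the user keeps facing the camera.
            def axis(tilt_deg):
                magnitude = max(abs(tilt_deg) - dead_zone, 0.0) / (max_tilt - dead_zone)
                return math.copysign(min(magnitude, 1.0), tilt_deg)

            forward, strafe = axis(pitch_deg), axis(roll_deg)
            yaw = math.radians(avatar_yaw_deg)
            # Rotate the avatar-relative (forward, strafe) input into world space.
            vx = max_speed * (forward * math.sin(yaw) + strafe * math.cos(yaw))
            vz = max_speed * (forward * math.cos(yaw) - strafe * math.sin(yaw))
            return vx, vz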
  5. Virtual reality sickness typically results from visual-vestibular conflict. Because perceived self-motion from optical flow is driven most strongly by motion at the periphery of the retina, reducing the user’s field of view (FOV) during locomotion has proven to be an effective strategy to minimize visual-vestibular conflict and VR sickness. Current FOV restrictor implementations reduce the user’s FOV by rendering a restrictor whose center is fixed at the center of the head-mounted display (HMD), which is effective when the user’s eye gaze is aligned with the head gaze. However, during eccentric eye gaze, users may look at the FOV restrictor itself, exposing them to peripheral optical flow, which could lead to increased VR sickness. To address these limitations, we developed a foveated FOV restrictor and explored the effect of dynamically moving the center of the FOV restrictor according to the user’s eye gaze position. We conducted a user study (n = 22) where each participant used a foveated FOV restrictor and a head-fixed FOV restrictor while navigating a virtual environment. We found no statistically significant difference in VR sickness measures or noticeability between the two restrictors. However, there was a significant difference in eye gaze behavior, as measured by eye gaze dispersion, with the foveated FOV restrictor allowing participants to have a wider visual scan area compared to the head-fixed FOV restrictor, which confined their eye gaze to the center of the FOV.
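    To make the contrast between the two restrictors concrete, here is a small sketch of how the restrictor center could be chosen each frame; the names and smoothing constant are assumptions, not the authors’ code.

        def restrictor_center(eye_gaze_ndc, previous_center=(0.0, 0.0),
                              mode="foveated", smoothing=0.2):
            # Return the FOV restrictor center in normalized device coordinates.
            # "head-fixed": stay at the middle of the HMD image.
            # "foveated":   follow the tracked eye gaze, low-pass filtered to
            #               reduce jitter from small fixational eye movements.
            if mode == "head-fixed":
                return (0.0, 0.0)
            px, py = previous_center
            gx, gy = eye_gaze_ndc
            return (px + smoothing * (gx - px), py + smoothing * (gy - py))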