Title: Spatial Perception in Immersive Visualization: A Study and Findings
Understanding spatial information is fundamental to visual perception in the Metaverse. Beyond the stereoscopic cues naturally present in the Metaverse, the human visual system can draw on auxiliary information, such as shadow casting or motion parallax, to perceive the 3D virtual world. However, the combined use of shadows and motion parallax to improve 3D perception has not been fully studied. In particular, when visualizing volumetric data together with associated skeleton models in VR, how to provide auxiliary visual cues that enhance observers' perception of the structural information remains a key yet underexplored topic. The problem is particularly challenging for the visualization of biomedical research data. In this paper, we focus on immersive analytics in neurobiology, where the structural information includes the relative positions of objects (nuclei / cell bodies) in 3D space and the spatial measurement and connectivity of segments (axons and dendrites) in a model. We present a perceptual experiment designed to understand the effects of shadow casting and motion parallax on the observation of neuron structures, and we report and discuss the participants' feedback and our analysis of the results.
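
As a rough illustration of how the two cues could be crossed in such an experiment, the sketch below enumerates a hypothetical 2 × 2 within-subjects design (shadow casting on/off × motion parallax on/off) and produces a randomized trial order per participant. The condition names, trial counts, and seeding are assumptions for illustration, not the protocol reported in the paper.

```python
import itertools
import random

# Hypothetical within-subjects design crossing the two auxiliary cues
# discussed in the abstract: shadow casting and motion parallax.
SHADOW = [True, False]
PARALLAX = [True, False]
TRIALS_PER_CONDITION = 5  # assumed count, not taken from the paper


def trial_schedule(participant_id: int, base_seed: int = 42) -> list:
    """Return a randomized list of trials for one participant."""
    rng = random.Random(base_seed + participant_id)
    trials = [
        {"shadow": s, "motion_parallax": p, "repetition": r}
        for s, p in itertools.product(SHADOW, PARALLAX)
        for r in range(TRIALS_PER_CONDITION)
    ]
    rng.shuffle(trials)
    return trials


if __name__ == "__main__":
    for trial in trial_schedule(participant_id=1)[:4]:
        print(trial)
```
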
Award ID(s):
2107224
NSF-PAR ID:
10435209
Journal Name:
IEEE Symposium on Mixed and Augmented Reality (ISMAR) Workshop on Metaverse & Its Applications
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Flies and other insects use incoherent motion (parallax) to the front and sides to measure distances and identify obstacles during translation. Although additional depth information could be drawn from below, there is no experimental proof that they use it. The finding that blowflies encode motion disparities in their ventral visual fields suggests this may be an important region for depth information. We used a virtual flight arena to measure fruit fly responses to optic flow. The stimuli appeared below (n = 51) or above the fly (n = 44), at different speeds, with or without parallax cues. Dorsal parallax does not affect responses, and similar motion disparities in rotation have no effect anywhere in the visual field. But responses to strong ventral sideslip (206° s⁻¹) change drastically depending on the presence or absence of parallax. Ventral parallax could help resolve ambiguities in cluttered motion fields, and enhance corrective responses to nearby objects.
  2.
    Image data remains an important tool for post-event building assessment and documentation. After each natural hazard event, teams of engineers make significant efforts to visit the affected regions and collect useful image data. In general, a global positioning system (GPS) can provide useful spatial information for localizing image data. However, it is challenging to collect such information when images are captured in places where GPS signals are weak or interrupted, such as the indoor spaces of buildings. The inability to document the images' locations hinders the analysis, organization, and documentation of these images, as they lack sufficient spatial context. In this work, we develop a methodology to localize images and link them to locations on a structural drawing. A stream of images can readily be gathered along the path taken through a building using a compact camera. These images may be used to compute the relative location of each image in a 3D point cloud model, which is reconstructed using a visual odometry algorithm. The images may also be used to create local 3D textured models of building components of interest using a structure-from-motion algorithm. A parallel set of images collected for building assessment is linked to the image stream using time information. By projecting the point cloud model onto the structural drawing, the images can be overlaid onto the drawing, providing the clear context information necessary to make use of those images. Additionally, components or damage of interest captured in these images can be reconstructed in 3D, enabling detailed assessments with sufficient geospatial context. The technique is demonstrated by emulating post-event building assessment and data collection in a real building.
  3. Spatial information understanding is fundamental to visual perception in the metaverse.
  4.
    A “virtual mirror” is a promising interface for virtual or augmented reality applications in which users benefit from seeing themselves within the environment, such as serious games for rehabilitation exercise or biological education. While there is extensive work analyzing pointing and providing assistance for first-person perspectives, mirrored third-person perspectives have received little attention, limiting the quality of user interactions in current virtual mirror applications. We address this gap with two user studies aimed at understanding pointing motions with a mirror view and assessing visual cues that assist pointing. An initial two-phase preliminary study had users tune and test nine different visual aids. This was followed by in-depth testing of the best four of those visual aids compared with unaided pointing. The results give insight into both aided and unaided pointing with this mirrored third-person view and compare the visual cues. We note a pattern of users consistently pointing far in front of targets when first introduced to the pointing task, although this initial unaided motion improves after practice with visual aids. We found that stereoscopy alone is not sufficient to improve accuracy, supporting the use of the other visual cues we developed. We show that users point differently when targets are behind them rather than in front of them. We conclude by suggesting which visual aids are most promising for 3D pointing in virtual mirror interfaces.
  5. The Visual Turing Test is the ultimate benchmark for evaluating the realism of holographic displays. Previous studies have focused on addressing challenges such as limited étendue and image quality over a large focal volume, but they have not investigated the effect of pupil sampling on the viewing experience in full 3D holograms. In this work, we tackle this problem with a novel hologram generation algorithm motivated by matching the projection operators of incoherent (Light Field) and coherent (Wigner Function) light transport. To this end, we supervise hologram computation using synthesized photographs, which are rendered on the fly using Light Field refocusing from stochastically sampled pupil states during optimization. The proposed method produces holograms with correct parallax and focus cues, which are important for passing the Visual Turing Test. We validate that our approach compares favorably to state-of-the-art CGH algorithms that use Light Field and Focal Stack supervision. Our experiments demonstrate that our algorithm improves the viewing experience when evaluated under a wide variety of pupil states.
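
A minimal numerical sketch of the depth cue discussed in item 1: under pure translation, the retinal angular speed of a point scales inversely with its distance (motion parallax), whereas under pure rotation it is independent of distance. The speeds and distances below are illustrative assumptions, not values from that study.

```python
import math


def angular_speed_translation(speed_m_s: float, distance_m: float) -> float:
    """Retinal angular speed (deg/s) of a point abeam of a translating observer:
    omega = v / d, so nearer points appear to move faster (motion parallax)."""
    return math.degrees(speed_m_s / distance_m)


def angular_speed_rotation(yaw_rate_deg_s: float, distance_m: float) -> float:
    """Under pure rotation every point moves at the yaw rate regardless of
    distance, so rotational flow carries no depth information."""
    return yaw_rate_deg_s


if __name__ == "__main__":
    v = 0.3      # translation speed in m/s (illustrative)
    yaw = 100.0  # rotation rate in deg/s (illustrative)
    for d in (0.05, 0.20, 1.00):  # object distances in metres
        print(f"d = {d:.2f} m | translation: {angular_speed_translation(v, d):7.1f} deg/s"
              f" | rotation: {angular_speed_rotation(yaw, d):6.1f} deg/s")
```
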
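For the image-localization workflow in item 2, the final steps (mapping the camera path from the point-cloud frame onto the 2D structural drawing and attaching time-stamped assessment photos to the nearest pose) can be sketched as follows. The similarity transform, array layouts, and toy trajectory are assumptions for illustration; the visual-odometry and structure-from-motion stages themselves are not reproduced here.

```python
import numpy as np


def project_to_drawing(cam_xyz: np.ndarray, scale: float,
                       theta_deg: float, offset_xy: np.ndarray) -> np.ndarray:
    """Map camera positions from the point-cloud frame onto the 2D drawing
    using an assumed similarity transform (drop height, rotate, scale, shift)."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return scale * cam_xyz[:, :2] @ R.T + offset_xy


def link_photos_to_path(photo_times: np.ndarray, odom_times: np.ndarray,
                        drawing_xy: np.ndarray) -> np.ndarray:
    """Assign each assessment photo the drawing location of the odometry
    pose closest to it in time."""
    idx = np.searchsorted(odom_times, photo_times)
    idx = np.clip(idx, 1, len(odom_times) - 1)
    left_closer = (photo_times - odom_times[idx - 1]) < (odom_times[idx] - photo_times)
    idx = np.where(left_closer, idx - 1, idx)
    return drawing_xy[idx]


if __name__ == "__main__":
    odom_times = np.linspace(0.0, 60.0, 120)              # seconds along the walk
    cam_xyz = np.column_stack([np.linspace(0, 10, 120),   # toy straight-line path
                               np.zeros(120), np.full(120, 1.5)])
    xy = project_to_drawing(cam_xyz, scale=25.0, theta_deg=15.0,
                            offset_xy=np.array([100.0, 300.0]))
    photo_times = np.array([5.2, 18.7, 44.1])
    print(link_photos_to_path(photo_times, odom_times, xy))
```
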
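The pupil-sampling idea in item 5 can be caricatured as: sample an aperture position over the eyebox, apply it as a mask in an assumed Fourier (pupil) plane of the hologram field, and penalize the difference between the resulting intensity and a light-field rendering of the same pupil state. The field model, normalized pupil coordinates, and the `lightfield_render` placeholder below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np


def pupil_filtered_image(field: np.ndarray, pupil_center, pupil_radius: float) -> np.ndarray:
    """Pass the hologram's complex field through a circular aperture placed
    in an assumed Fourier (pupil) plane and return the observed intensity."""
    H, W = field.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(H), np.fft.fftfreq(W), indexing="ij")
    pupil = ((fx - pupil_center[0]) ** 2 + (fy - pupil_center[1]) ** 2) <= pupil_radius ** 2
    filtered = np.fft.ifft2(np.fft.fft2(field) * pupil)
    return np.abs(filtered) ** 2


def pupil_sampled_loss(field: np.ndarray, lightfield_render, n_samples: int = 8,
                       pupil_radius: float = 0.1, rng=None) -> float:
    """Average L2 error between pupil-filtered hologram images and the
    corresponding (placeholder) light-field renderings of the same pupil states."""
    if rng is None:
        rng = np.random.default_rng(0)
    loss = 0.0
    for _ in range(n_samples):
        center = tuple(rng.uniform(-0.25, 0.25, size=2))   # stochastic pupil position
        target = lightfield_render(center, pupil_radius)    # assumed supervision signal
        observed = pupil_filtered_image(field, center, pupil_radius)
        loss += np.mean((observed - target) ** 2)
    return loss / n_samples
```

In an actual optimization loop this loss would be evaluated on a differentiable field representation and minimized with respect to the hologram's phase pattern; here it only illustrates the stochastic pupil-state supervision described in the abstract.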