Title: A mixed reality system combining augmented reality, 3D bio-printed physical environments and inertial measurement unit sensors for task planning
Abstract

Successful surgical operations are characterized by preplanning the routines to be executed during the actual operation. To achieve this, surgeons rely on experience acquired from cadavers, on enabling technologies such as virtual reality (VR), and on years of clinical practice. However, cadavers, which lack dynamism and realism because they contain no blood, can exhibit limited tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impair surgical training and increase the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs augmented reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D printed, collagen-based specimens to enhance task performance, such as planning and execution. The MRCS charts out a path prior to task execution based on a visual, physical, and dynamic representation of the target object's state: surgeon-created virtual imagery, projected onto a 3D printed biospecimen as AR, reacts visually to user input on the specimen's actual physical state. The MRCS can thus respond to the user in real time, displaying new multi-sensory virtual states of an object before the user acts on the actual physical state of that same object, which enables effective task planning. User actions tracked with an integrated 9-degree-of-freedom IMU demonstrate task execution, showing that a user with limited knowledge of the specific anatomy can, under guidance, execute a preplanned task. Beyond surgical planning, this system can be applied generally in areas such as construction, maintenance, and education.
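The abstract describes tracking user actions with a 9-degree-of-freedom IMU. As a rough illustration of how such sensor streams are commonly turned into orientation estimates, the sketch below fuses gyroscope and accelerometer readings with a complementary filter; the function names, parameters, and filter choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def complementary_filter(gyro, accel, dt, prev_angles, alpha=0.98):
    """Fuse gyro and accelerometer readings into roll/pitch estimates.

    Illustrative sketch only -- not the MRCS implementation.
    gyro        : (3,) angular rates in rad/s
    accel       : (3,) accelerations in m/s^2
    dt          : sample period in seconds
    prev_angles : (roll, pitch) from the previous step, in radians
    alpha       : blend factor; closer to 1 trusts the gyro more
    """
    # Short-term estimate: integrate the gyro rates
    roll_g = prev_angles[0] + gyro[0] * dt
    pitch_g = prev_angles[1] + gyro[1] * dt
    # Long-term reference: gravity direction from the accelerometer
    roll_a = np.arctan2(accel[1], accel[2])
    pitch_a = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    # Blend: the gyro suppresses accelerometer noise,
    # the accelerometer cancels gyro drift
    roll = alpha * roll_g + (1 - alpha) * roll_a
    pitch = alpha * pitch_g + (1 - alpha) * pitch_a
    return roll, pitch

# Example: one 10 ms sample with the device nearly level
angles = complementary_filter(np.array([0.01, 0.02, 0.0]),
                              np.array([0.0, 0.1, 9.81]),
                              dt=0.01, prev_angles=(0.0, 0.0))
```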

 
NSF-PAR ID: 10399915
Author(s) / Creator(s): ; ;
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: Virtual Reality
Volume: 27
Issue: 3
ISSN: 1359-4338
Page Range / eLocation ID: pp. 1845-1858
Sponsoring Org: National Science Foundation
More Like this
  1. As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help calibrate perception of action opportunities (affordances) so that action judgments in VR and AR mirror actors' real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one's passability through feedback from collisions. In the current study, we aimed to replicate this calibration through feedback using a different paradigm in VR while also testing whether the calibration transfers to AR. Participants held a pole at 45° and made passability judgments in AR (pretest phase). They then made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants' judgments in VR. Moreover, this calibration transferred to the AR environment. In other words, after experiencing feedback in VR, passability judgments in both VR and AR became closer to an actor's actual ability, which could make training applications in these technologies more effective.

     
  2. Holographic near-eye displays promise unprecedented capabilities for virtual and augmented reality (VR/AR) systems. The image quality achieved by current holographic displays, however, is limited by the wave propagation models used to simulate the physical optics. We propose a neural network-parameterized plane-to-multiplane wave propagation model that closes the gap between physics and simulation. Our model is automatically trained using camera feedback and it outperforms related techniques in 2D plane-to-plane settings by a large margin. Moreover, it is the first network-parameterized model to naturally extend to 3D settings, enabling high-quality 3D computer-generated holography using a novel phase regularization strategy of the complex-valued wave field. The efficacy of our approach is demonstrated through extensive experimental evaluation with both VR and optical see-through AR display prototypes. 
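For context, the classical physics baseline that such learned models improve upon is free-space propagation via the angular spectrum method. Below is a minimal NumPy version of that plane-to-plane baseline, not the paper's network-parameterized model; variable names are assumptions.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method.

    field      : 2D complex array (source plane)
    wavelength : wavelength in the same units as dx and z
    dx         : pixel pitch of the field
    z          : propagation distance
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies, cycles/unit
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)           # shape (ny, nx)
    # Transfer function of free space; evanescent components are zeroed
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a 512x512 plane wave by 5 cm at 532 nm, 8 um pitch
src = np.ones((512, 512), dtype=complex)
dst = angular_spectrum(src, wavelength=532e-9, dx=8e-6, z=0.05)
```

Camera-in-the-loop training, as in the paper, replaces or augments this idealized transfer function with learned components fit to measurements of the physical display.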
  3. Augmented Reality (AR) experiences tightly associate virtual contents with environmental entities. However, the dissimilarity of different environments limits the adaptive AR content behaviors under large-scale deployment. We propose ScalAR, an integrated workflow enabling designers to author semantically adaptive AR experiences in Virtual Reality (VR). First, potential AR consumers collect local scenes with a semantic understanding technique. ScalAR then synthesizes numerous similar scenes. In VR, a designer authors the AR contents’ semantic associations and validates the design while being immersed in the provided scenes. We adopt a decision-tree-based algorithm to fit the designer’s demonstrations as a semantic adaptation model to deploy the authored AR experience in a physical scene. We further showcase two application scenarios authored by ScalAR and conduct a two-session user study where the quantitative results prove the accuracy of the AR content rendering and the qualitative results show the usability of ScalAR. 
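The abstract names a decision-tree-based algorithm that fits the designer's demonstrations as a semantic adaptation model. A minimal sketch of that idea using scikit-learn follows; the feature schema and labels are invented for illustration and are not ScalAR's actual representation.

```python
from sklearn.tree import DecisionTreeClassifier

# Each demonstration: semantic features of a candidate anchor surface.
# Feature names are hypothetical: [is_table, is_wall, height_m, area_m2]
X = [
    [1, 0, 0.75, 1.2],   # designer placed content here -> label 1
    [0, 1, 1.50, 4.0],   # designer placed content here -> label 1
    [1, 0, 0.40, 0.3],   # designer rejected this surface -> label 0
    [0, 0, 0.00, 6.0],   # floor, rejected -> label 0
]
y = [1, 1, 0, 0]

# Fit the demonstrations as a shallow, interpretable adaptation model
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Deploy: score candidate entities detected in a new physical scene
print(model.predict([[1, 0, 0.72, 1.0]]))  # e.g. [1] -> place content here
```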
  4. Unmanned aerial vehicles (UAV) enable detailed historical preservation of large-scale infrastructure and contribute to cultural heritage preservation, improved maintenance, public relations, and development planning. Aerial and terrestrial photo data coupled with high accuracy GPS create hyper-realistic mesh and texture models, high resolution point clouds, orthophotos, and digital elevation models (DEMs) that preserve a snapshot of history. A case study is presented of the development of a hyper-realistic 3D model that spans the complex 1.7 km² area of the Brigham Young University campus in Provo, Utah, USA and includes over 75 significant structures. The model leverages photos obtained during a mandatory and rare campus closure at the height of the COVID-19 pandemic and details a large scale modeling workflow and best practice data acquisition and processing techniques. The model utilizes 80,384 images and high accuracy GPS surveying points to create a 1.65 trillion-pixel textured structure-from-motion (SfM) model with an average ground sampling distance (GSD) near structures of 0.5 cm and a maximum of 4 cm. Thirty-one separate model segments taken from data gathered between April and August 2020 are combined into one cohesive final model with an average absolute error of 3.3 cm and a full model absolute error of <1 cm (relative accuracies from 0.25 cm to 1.03 cm). Optimized and automated UAV techniques complement the data acquisition of the large-scale model, and opportunities are explored to archive as-is building and campus information to enable historical building preservation, facility maintenance, campus planning, public outreach, 3D-printed miniatures, and the possibility of education through virtual reality (VR) and augmented reality (AR) tours.
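The ground sampling distances quoted above follow the standard pinhole relation GSD = sensor width × altitude / (focal length × image width). A quick sketch, with camera parameters that are illustrative assumptions rather than the study's actual hardware:

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             altitude_m, image_width_px):
    """Return GSD in cm/pixel for a nadir photo (pinhole camera relation)."""
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# Hypothetical camera: 13.2 mm sensor, 8.8 mm lens, 5472 px wide, at 20 m
print(round(ground_sampling_distance(13.2, 8.8, 20, 5472), 2))  # ~0.55 cm/px
```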
  5. Annotation in 3D user interfaces such as Augmented Reality (AR) and Virtual Reality (VR) is a challenging and promising area; however, no surveys currently review these contributions. To provide a survey of annotation for Extended Reality (XR) environments, we conducted a structured literature review of papers that used annotation in their AR/VR systems between 2001 and 2021. Our literature review process consisted of several filtering steps that resulted in 103 XR publications with a focus on annotation. We classified these papers based on display technology, input device, annotation type, target object under annotation, collaboration type, modality, and collaborative technology. A survey of annotation in XR is an invaluable resource for researchers and newcomers. Finally, we provide a database of the collected information for each reviewed paper, including applications, display technologies and their annotators, input devices, modalities, annotation types, interaction techniques, collaboration types, and tasks. This database provides rapid access to the collected data and gives users the ability to search or filter the required information. This survey provides a starting point for anyone interested in researching annotation in XR environments.