
Title: A mixed reality system combining augmented reality, 3D bio-printed physical environments and inertial measurement unit sensors for task planning

Successful surgical operations are characterized by preplanning routines to be executed during the actual procedure. To achieve this, surgeons rely on experience acquired from cadaver work, enabling technologies such as virtual reality (VR), and clinical years of practice. However, cadavers lack dynamism and realism because they have no blood flow and can exhibit tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impact surgical training, increasing the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D printed, collagen-based specimens to enhance task performance, including planning and execution. To achieve this, the MRCS charts out a path prior to user task execution based on a visual, physical, and dynamic environment reflecting the state of a target object, utilizing surgeon-created virtual imagery that, when projected onto a 3D printed biospecimen as AR, reacts visually to user input on its actual physical state. This allows a real-time user reaction within the MRCS by displaying new multi-sensory virtual states of an object prior to performing on the actual physical state of that same object, enabling effective task planning. Tracked user actions using an integrated 9-degree-of-freedom IMU demonstrate task execution: a user with limited knowledge of the specific anatomy can, under guidance, execute a preplanned task. In addition to surgical planning, this system can be generally applied in areas such as construction, maintenance, and education.
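The abstract tracks user actions with a 9-degree-of-freedom IMU but does not spell out a fusion pipeline. As a minimal illustration of how such sensors yield a usable orientation estimate, the sketch below fuses a gyroscope rate with a gravity-referenced accelerometer angle via a complementary filter for a single pitch axis; all function names and parameter values are hypothetical, and a full 9-DoF pipeline (e.g., a quaternion-based filter that also uses the magnetometer) would be needed in practice.

```python
import numpy as np

def complementary_filter(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """Fuse gyro and accelerometer readings into a pitch estimate (radians).

    pitch_prev: previous pitch estimate
    gyro_rate:  angular rate about the pitch axis (rad/s)
    accel:      (ax, ay, az) accelerometer reading (m/s^2)
    dt:         sample period (s)
    alpha:      blend factor; higher trusts the gyro more short-term
    """
    ax, ay, az = accel
    # Gravity-referenced pitch from the accelerometer (drift-free but noisy).
    pitch_accel = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    # Gyro integration (smooth but drifts over time).
    pitch_gyro = pitch_prev + gyro_rate * dt
    # Blend: gyro dominates short-term, accelerometer corrects long-term drift.
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel
```

The blend factor trades gyro drift against accelerometer noise: each step keeps 98% of the integrated gyro estimate, so an initial drift error decays geometrically toward the gravity reference.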

Publication Date:
Journal Name:
Virtual Reality
Springer Science + Business Media
Sponsoring Org:
National Science Foundation
More Like this
  1. Holographic near-eye displays promise unprecedented capabilities for virtual and augmented reality (VR/AR) systems. The image quality achieved by current holographic displays, however, is limited by the wave propagation models used to simulate the physical optics. We propose a neural network-parameterized plane-to-multiplane wave propagation model that closes the gap between physics and simulation. Our model is automatically trained using camera feedback and it outperforms related techniques in 2D plane-to-plane settings by a large margin. Moreover, it is the first network-parameterized model to naturally extend to 3D settings, enabling high-quality 3D computer-generated holography using a novel phase regularization strategy of the complex-valued wave field. The efficacy of our approach is demonstrated through extensive experimental evaluation with both VR and optical see-through AR display prototypes.
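The plane-to-plane propagation that this abstract's learned model replaces is conventionally simulated with the angular spectrum method. A minimal NumPy sketch of that idealized physics (not the paper's network-parameterized model, which calibrates against camera feedback) might look like:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2D wave field a distance z (angular spectrum method).

    field:      complex array (n, m), the field in the source plane
    wavelength: wavelength in meters
    dx:         sample pitch in meters (assumed equal in x and y)
    z:          propagation distance in meters
    """
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)          # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=dx)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = k * np.sqrt(np.maximum(arg, 0.0))
    # Transfer function; evanescent components (arg <= 0) are suppressed.
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because propagating components only acquire phase, the operation is energy-preserving for band-limited fields; the paper's contribution is precisely that real optics deviate from this ideal model.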
  2. Augmented Reality (AR) experiences tightly associate virtual contents with environmental entities. However, the dissimilarity of different environments limits the adaptive AR content behaviors under large-scale deployment. We propose ScalAR, an integrated workflow enabling designers to author semantically adaptive AR experiences in Virtual Reality (VR). First, potential AR consumers collect local scenes with a semantic understanding technique. ScalAR then synthesizes numerous similar scenes. In VR, a designer authors the AR contents’ semantic associations and validates the design while being immersed in the provided scenes. We adopt a decision-tree-based algorithm to fit the designer’s demonstrations as a semantic adaptation model to deploy the authored AR experience in a physical scene. We further showcase two application scenarios authored by ScalAR and conduct a two-session user study where the quantitative results prove the accuracy of the AR content rendering and the qualitative results show the usability of ScalAR.
  3. Unmanned aerial vehicles (UAV) enable detailed historical preservation of large-scale infrastructure and contribute to cultural heritage preservation, improved maintenance, public relations, and development planning. Aerial and terrestrial photo data coupled with high accuracy GPS create hyper-realistic mesh and texture models, high resolution point clouds, orthophotos, and digital elevation models (DEMs) that preserve a snapshot of history. A case study is presented of the development of a hyper-realistic 3D model that spans the complex 1.7 km² area of the Brigham Young University campus in Provo, Utah, USA and includes over 75 significant structures. The model leverages photos obtained during the historic COVID-19 pandemic during a mandatory and rare campus closure and details a large scale modeling workflow and best practice data acquisition and processing techniques. The model utilizes 80,384 images and high accuracy GPS surveying points to create a 1.65 trillion-pixel textured structure-from-motion (SfM) model with an average ground sampling distance (GSD) near structures of 0.5 cm and maximum of 4 cm. Thirty-one separate model segments, built from data gathered between April and August 2020, are combined into one cohesive final model with an average absolute error of 3.3 cm and a full model absolute error of <1 cm (relative accuracies from 0.25 cm to 1.03 cm). Optimized and automated UAV techniques complement the data acquisition of the large-scale model, and opportunities are explored to archive as-is building and campus information to enable historical building preservation, facility maintenance, campus planning, public outreach, 3D-printed miniatures, and the possibility of education through virtual reality (VR) and augmented reality (AR) tours.
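The ground sampling distance (GSD) figures quoted above follow from the standard pinhole-camera relation GSD = pixel pitch × altitude / focal length. A small sketch with illustrative camera parameters (not the study's actual sensor or flight data):

```python
def ground_sampling_distance(pixel_size_m, focal_length_m, altitude_m):
    """Ground distance covered by one image pixel (pinhole relation).

    pixel_size_m:   physical sensor pixel pitch in meters
    focal_length_m: lens focal length in meters
    altitude_m:     height above the imaged surface in meters
    """
    return pixel_size_m * altitude_m / focal_length_m
```

For example, a hypothetical 2.4 µm pixel pitch and 8.8 mm lens flown at 60 m yields a GSD of about 1.6 cm; reaching the sub-centimeter GSDs cited above requires flying closer to the structures or using a longer focal length.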
  4. Advisor: Dr. Guillermo Araya (Ed.)
    The present study provides fundamental knowledge on an issue in fluid dynamics that is not well understood: flow separation and its association with heat and contaminant transport. In the separated region, a swirling motion increases the fluid drag force on the object. Very often, this is undesirable because it can seriously reduce the performance of engineered devices such as aircraft and turbines. Furthermore, Computational Fluid Dynamics (CFD) has gained ground due to its relatively low cost, high accuracy, and versatility. The principal aim of this study is to numerically elucidate the details behind momentum and passive scalar transport phenomena during turbulent boundary layer separation resulting from a wall-curvature-driven pressure gradient. With OpenFOAM CFD software, the numerical discretization of the Reynolds-Averaged Navier-Stokes and passive scalar transport equations will be described in two-dimensional domains via the assessment of two popular turbulence models (i.e., the Spalart-Allmaras and the k-ω SST model). The computational domain reproduces a wind tunnel geometry from previously performed experiments by Baskaran et al. (JFM, vol. 182 and 232, "A turbulent flow over a curved hill," Part 1 and Part 2). Only the velocity and pressure distribution were measured there, which will be used for validation purposes in the present study. A second aim in the present work is the scientific visualization of turbulent events and coherent structures via the ParaView toolkit and Unity game engine. Thus, fully immersive visualization approaches will be used via virtual reality (VR) and augmented reality (AR) technologies. A Virtual Wind Tunnel (VWT), developed for the VR approach, emulates the presence in a wind tunnel laboratory and has already employed fluid flow visualization from an existing numerical database with high temporal/spatial resolution, i.e., Direct Numerical Simulation (DNS).
In terms of AR, a FlowVisXR app for smartphones and HoloLens has been developed for portability. It allows the user to see virtual 3D objects (i.e., turbulent coherent structures) invoked into the physical world using the device as the lens.
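The passive scalar transport this abstract refers to reduces, in its simplest steady one-dimensional form, to u·dφ/dx = Γ·d²φ/dx². The sketch below discretizes that equation with first-order upwind convection and central diffusion; it is a minimal finite-difference illustration of the discretization idea, not the study's OpenFOAM/RANS setup, and all parameter values are hypothetical.

```python
import numpy as np

def solve_scalar_transport(n=101, L=1.0, u=1.0, gamma=0.05):
    """Steady 1-D advection-diffusion of a passive scalar phi:

        u * dphi/dx = gamma * d2phi/dx2,   phi(0) = 0, phi(L) = 1

    Discretized with first-order upwind convection (u > 0) and
    central diffusion; the resulting linear system is solved directly.
    """
    dx = L / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = 0.0      # Dirichlet inlet
    A[-1, -1] = 1.0; b[-1] = 1.0   # Dirichlet outlet
    for i in range(1, n - 1):
        A[i, i - 1] = -u / dx - gamma / dx**2
        A[i, i]     =  u / dx + 2 * gamma / dx**2
        A[i, i + 1] = -gamma / dx**2
    return np.linalg.solve(A, b)
```

With a Peclet number uL/Γ = 20, the exact profile is (e^(20x) − 1)/(e^20 − 1); the upwind scheme reproduces it up to a small numerical-diffusion error, which is the kind of discretization trade-off the turbulence-model assessment in the study must also contend with.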
  5. Wearable near-eye displays for virtual and augmented reality (VR/AR) have seen enormous growth in recent years. While researchers are exploiting a plethora of techniques to create life-like three-dimensional (3D) objects, there is a lack of awareness of the role of human perception in guiding the hardware development. An ultimate VR/AR headset must integrate the display, sensors, and processors in a compact enclosure that people can comfortably wear for a long time while allowing a superior immersion experience and user-friendly human–computer interaction. Compared with other 3D displays, the holographic display has unique advantages in providing natural depth cues and correcting eye aberrations. Therefore, it holds great promise to be the enabling technology for next-generation VR/AR devices. In this review, we survey the recent progress in holographic near-eye displays from the human-centric perspective.