

Title: A mixed reality system combining augmented reality, 3D bio-printed physical environments and inertial measurement unit sensors for task planning
Abstract

Successful surgical operations are characterized by preplanning routines that are then executed during the actual operation. To achieve this, surgeons rely on experience acquired from cadavers, from enabling technologies such as virtual reality (VR), and from years of clinical practice. However, cadavers lack dynamism and realism because they have no blood, and their tissue is limited by degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impact surgical training and increase the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D printed, collagen-based specimens to enhance task performance, such as planning and execution. To achieve this, the MRCS charts out a path prior to task execution based on a visual, physical, and dynamic representation of the target object's state: surgeon-created virtual imagery, projected onto a 3D printed biospecimen as AR, reacts visually to user input on the specimen's actual physical state. The MRCS can therefore respond to the user in real time, displaying new multi-sensory virtual states of an object before the user acts on that object's actual physical state, which enables effective task planning. User actions tracked with an integrated 9-degree-of-freedom IMU demonstrate task execution, showing that a user with limited knowledge of the specific anatomy can, under guidance, execute a preplanned task. In addition to surgical planning, the system can be applied more generally in areas such as construction, maintenance, and education.
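To make the IMU-based tracking component more concrete, the sketch below fuses simulated 9-degree-of-freedom readings into a tilt estimate with a simple complementary filter and checks progress against a preplanned sequence of target orientations. This is a minimal illustration only, not the authors' implementation: the filter choice, the simulated sensor stream, the waypoint plan, and the tolerance are all assumptions made for the example.

```python
import math

def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate with an accelerometer-derived angle."""
    return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle

def accel_pitch_roll(ax, ay, az):
    """Tilt angles (radians) estimated from the gravity direction."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Preplanned task: (pitch, roll) orientations the guided user should pass through.
plan = [(0.0, 0.0), (0.2, 0.07), (0.4, 0.14)]
tolerance = 0.05  # radians

pitch = roll = 0.0
step = 0
dt = 0.01  # 100 Hz samples
# Simulated IMU stream: (gyro_pitch_rate, gyro_roll_rate, accel_x, accel_y, accel_z)
stream = [(0.3, 0.1, -0.29, 0.10, 0.95)] * 200

for gx, gy, ax, ay, az in stream:
    a_pitch, a_roll = accel_pitch_roll(ax, ay, az)
    pitch = complementary_filter(pitch, gx, a_pitch, dt)
    roll = complementary_filter(roll, gy, a_roll, dt)
    if step < len(plan):
        target_pitch, target_roll = plan[step]
        if abs(pitch - target_pitch) < tolerance and abs(roll - target_roll) < tolerance:
            print(f"waypoint {step} reached (pitch={pitch:.2f}, roll={roll:.2f})")
            step += 1

print("task complete" if step == len(plan) else f"stopped at waypoint {step}")
```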

 
Award ID(s): 1946456
NSF-PAR ID: 10399915
Author(s) / Creator(s):
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: Virtual Reality
Volume: 27
Issue: 3
ISSN: 1359-4338
Page Range / eLocation ID: p. 1845-1858
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help to calibrate perception for action opportunities (affordances) so that action judgments in VR and AR mirror actors’ real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one’s passability through feedback from collisions. In the current study, we aimed to replicate this calibration through feedback using a different paradigm in VR while also testing whether this calibration transfers to AR. Participants held a pole at 45° and made passability judgments in AR (pretest phase). Then, they made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants’ judgments in VR. Moreover, this calibration transferred to the AR environment. In other words, after experiencing feedback in VR, passability judgments in VR and in AR became closer to an actor’s actual ability, which could make training applications in these technologies more effective.
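As a toy illustration of the calibration idea in this entry (not the study's analysis), the sketch below scales a judged passable doorway width by a perceptual gain that collision feedback nudges toward the actor's true requirement; all numbers and the update rule are invented for the example.

```python
required_width = 0.90   # metres actually needed while holding the pole at 45 degrees
gain = 0.75             # pretest bias: apertures are judged passable too optimistically
learning_rate = 0.3     # how strongly each feedback trial recalibrates the judgment

def judged_minimum_width(g):
    """Doorway width currently judged as just passable."""
    return g * required_width

print(f"pretest judgment:  {judged_minimum_width(gain):.2f} m")
for _ in range(12):  # VR calibration phase: feedback from walking through the doorway
    error = required_width - judged_minimum_width(gain)  # signed miss revealed by feedback
    gain += learning_rate * error / required_width       # move the judgment toward ability
print(f"posttest judgment: {judged_minimum_width(gain):.2f} m (actual {required_width} m)")
```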

     
  2. Augmented Reality (AR) experiences tightly associate virtual contents with environmental entities. However, the dissimilarity of different environments limits the adaptive AR content behaviors under large-scale deployment. We propose ScalAR, an integrated workflow enabling designers to author semantically adaptive AR experiences in Virtual Reality (VR). First, potential AR consumers collect local scenes with a semantic understanding technique. ScalAR then synthesizes numerous similar scenes. In VR, a designer authors the AR contents’ semantic associations and validates the design while being immersed in the provided scenes. We adopt a decision-tree-based algorithm to fit the designer’s demonstrations as a semantic adaptation model to deploy the authored AR experience in a physical scene. We further showcase two application scenarios authored by ScalAR and conduct a two-session user study where the quantitative results prove the accuracy of the AR content rendering and the qualitative results show the usability of ScalAR. 
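The entry above mentions a decision-tree-based algorithm that fits the designer's demonstrations as a semantic adaptation model. The sketch below only illustrates that general idea with scikit-learn; the feature encoding, labels, and example data are invented and are not ScalAR's actual model.

```python
from sklearn.tree import DecisionTreeClassifier

# Each demonstration: semantic features of a candidate surface in a synthesized scene,
# encoded here as [is_table, is_wall, surface_area_m2, distance_to_user_m].
demonstrations = [
    [1, 0, 0.9, 1.2],   # designer placed the virtual content on a nearby table
    [1, 0, 1.4, 0.8],
    [0, 1, 4.0, 2.5],   # designer skipped walls
    [0, 1, 6.0, 3.0],
    [1, 0, 0.5, 3.5],   # skipped a table that was too far away
]
labels = ["place", "place", "skip", "skip", "skip"]

# Fit the designer's demonstrations as a small semantic adaptation model.
model = DecisionTreeClassifier(max_depth=3).fit(demonstrations, labels)

# Deployment: classify surfaces detected in a consumer's physical scene.
new_scene_surfaces = [[1, 0, 1.1, 1.0], [0, 1, 5.0, 2.0]]
print(model.predict(new_scene_surfaces))  # e.g. ['place' 'skip']
```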
  3. Immersive Analytics (IA) and consumer adoption of augmented reality (AR) and virtual reality (VR) head-mounted displays (HMDs) are both rapidly growing. When used in conjunction, stereoscopic IA environments can offer improved user understanding and engagement; however, it is unclear how the choice of stereoscopic display impacts user interactions within an IA environment. This paper presents a pilot study that examines the impact of stereoscopic display choice on object manipulation and environmental navigation using consumer-available AR and VR HMDs. Our observations indicate that the display can impact how users manipulate virtual content and how they navigate the environment.
  4. In this paper, we explore how a familiarly shaped object can serve as a physical proxy to manipulate virtual objects in Augmented Reality (AR) environments. Using the example of a tangible, handheld sphere, we demonstrate how irregularly shaped virtual objects can be selected, transformed, and released. After a brief description of the implementation of the tangible proxy, we present a buttonless interaction technique suited to the characteristics of the sphere. In a user study (N = 30), we compare our approach with three different controller-based methods that increasingly rely on physical buttons. As a use case, we focused on an alignment task that had to be completed in mid-air as well as on a flat surface. Results show that our concept has advantages over two of the controller-based methods regarding task completion time and user ratings. Our findings inform research on integrating tangible interaction into AR experiences. 
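A hypothetical sketch of how such a tangible proxy could drive selection, transformation, and buttonless release is shown below; the overlap threshold, the dwell-based release rule, and the position-only transform are assumptions for illustration, not the paper's technique.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

def dist(a, b):
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5

SELECT_RADIUS = 0.10   # m of overlap between sphere and object needed to grab
STILL_SPEED = 0.02     # m/s below which the sphere counts as "held still"
DWELL_FRAMES = 30      # consecutive still frames that release (~0.5 s at 60 Hz)

object_pose = Pose(0.0, 0.0, 0.5)
grabbed = False
still_frames = 0
prev = None

def update(sphere, dt=1 / 60):
    """Per-frame update with the tangible sphere's tracked pose."""
    global object_pose, grabbed, still_frames, prev
    speed = dist(sphere, prev) / dt if prev else 0.0
    prev = sphere
    if not grabbed:
        if dist(sphere, object_pose) < SELECT_RADIUS:   # selection by physical overlap
            grabbed, still_frames = True, 0
        return
    object_pose = sphere                                # the object follows the proxy
    still_frames = still_frames + 1 if speed < STILL_SPEED else 0
    if still_frames >= DWELL_FRAMES:                    # dwell gesture releases it
        grabbed = False
```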
  5. We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents to link the stationary IoTs and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (human-robot-IoT) with one single AR-SLAM device. Users can perform task authoring in an analogous manner with the Augmented Reality (AR) interface. Then, placing the device onto the mobile robot directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and IoT-oriented tasks, and guides the path-planning execution with its SLAM capability.
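As a rough sketch of the authoring-to-execution handoff described above (not V.Ra's actual format or API), the snippet below represents an authored plan as an ordered list of navigation and IoT steps, serializes it on the authoring device, and replays it through robot-side callbacks; all field names and commands are illustrative.

```python
import json

# An authored task plan: ordered navigation waypoints (poses in the AR-SLAM map frame)
# interleaved with IoT commands. Field names and device IDs are illustrative only.
task_plan = [
    {"type": "navigate", "waypoint": [1.2, 0.0, 3.4]},
    {"type": "iot", "device": "lamp_01", "command": "on"},
    {"type": "navigate", "waypoint": [4.0, 0.0, 1.1]},
    {"type": "iot", "device": "vacuum_02", "command": "start"},
]

# The authoring device serializes the plan; the robot deserializes and executes it.
payload = json.dumps(task_plan)

def execute(plan_json, drive_to, send_iot_command):
    """Replay the authored plan; drive_to and send_iot_command are robot-side callbacks."""
    for step in json.loads(plan_json):
        if step["type"] == "navigate":
            drive_to(step["waypoint"])
        elif step["type"] == "iot":
            send_iot_command(step["device"], step["command"])

# Stand-in callbacks so the sketch runs end to end.
execute(payload, drive_to=print, send_iot_command=lambda device, cmd: print(device, cmd))
```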