Title: Immersive Distributed Design Through Real-Time Capture, Translation, and Rendering of Three-Dimensional Mesh Data
With design teams becoming more distributed, the sharing and interpreting of complex data about design concepts/prototypes and environments have become increasingly challenging. The size and quality of data that can be captured and shared directly affect the ability of receivers of that data to collaborate and provide meaningful feedback. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available red, green, blue, and depth (RGB-D) sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid three-dimensional (3D) reconstruction of physical environments. The authors present a method that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. Providing these features allows distributed design teams to share and interpret complex 3D data in a natural manner. The method reduces the processing requirements of the data capture system while enabling it to be portable. The method also provides an immersive environment in which designers can view and interpret the data remotely. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed method.
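To make the transport step concrete, below is a minimal sketch, not the authors' implementation, of length-prefixed streaming of triangle-mesh frames over a standard TCP connection. The function names and the two-array framing (float32 vertices, uint32 face indices) are assumptions; the mesh arrays would come from an upstream RGB-D reconstruction step such as the Kinect pipeline described above.

```python
# Hedged sketch: length-prefixed TCP streaming of one triangle-mesh frame.
# Assumes vertices (N, 3) float32 and faces (M, 3) uint32 from an upstream
# RGB-D mesh reconstruction step; sensor capture and rendering are out of scope.
import socket
import struct

import numpy as np


def send_mesh(sock: socket.socket, vertices: np.ndarray, faces: np.ndarray) -> None:
    """Send one mesh frame as two length-prefixed binary payloads."""
    for arr, dtype in ((vertices, np.float32), (faces, np.uint32)):
        payload = np.ascontiguousarray(arr, dtype=dtype).tobytes()
        sock.sendall(struct.pack("!I", len(payload)) + payload)


def recv_mesh(sock: socket.socket) -> tuple[np.ndarray, np.ndarray]:
    """Receive one mesh frame and rebuild the (N, 3) vertex and face arrays."""

    def recv_exact(n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-frame")
            buf += chunk
        return buf

    def recv_block(dtype) -> np.ndarray:
        (size,) = struct.unpack("!I", recv_exact(4))
        return np.frombuffer(recv_exact(size), dtype=dtype).reshape(-1, 3)

    return recv_block(np.float32), recv_block(np.uint32)
```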
Award ID(s):
1650527
PAR ID:
10137112
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Journal of Computing and Information Science in Engineering
Volume:
17
Issue:
3
ISSN:
1530-9827
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Extreme environments, such as search-and-rescue missions, bomb disposal, or extraterrestrial exploration, are unsafe for humans. Robots enable humans to explore and interact in these environments through remote presence and teleoperation, and virtual reality provides a medium for creating immersive, easy-to-use teleoperation interfaces. However, current virtual reality interfaces are still very limited in their capabilities. In this work, we aim to advance virtual reality interfaces for robot teleoperation by developing an environment reconstruction methodology capable of recognizing objects in a robot's environment and rendering high-fidelity models inside a virtual reality headset. We compare our proposed environment reconstruction method against traditional point cloud streaming by having operators plan waypoint trajectories to accomplish a pick-and-place task. Overall, our results show that users find our environment reconstruction method more usable and less cognitively demanding than raw point cloud streaming.
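For contrast with the reconstruction-based interface, here is an illustrative sketch, under assumed names, of the kind of reduction raw point cloud streaming usually requires on the sender side: voxel-grid downsampling to keep per-frame bandwidth manageable.

```python
# Illustrative only (not the paper's code): voxel-grid downsampling of a raw
# point cloud before streaming, keeping one centroid per occupied voxel.
import numpy as np


def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Reduce an (N, 3) point cloud to one averaged point per voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()  # guard against NumPy versions that keep dims
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)  # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)     # count points per voxel
    return sums / counts[:, None]
```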
  2. Abstract. Purpose: Specialized robotic and surgical tools are increasing the complexity of operating rooms (ORs), requiring elaborate preparation, especially when techniques or devices are to be used for the first time. Spatial planning can improve efficiency and identify procedural obstacles ahead of time, but real ORs offer little availability to optimize space utilization. Methods for creating reconstructions of physical setups, i.e., digital twins, are needed to enable immersive spatial planning of such complex environments in virtual reality. Methods: We present a neural rendering-based method to create immersive digital twins of complex medical environments and devices from casual video capture that enables spatial planning of surgical scenarios. To evaluate our approach, we recreate two operating rooms and ten objects through neural reconstruction, then conduct a user study with 21 graduate students carrying out planning tasks in the resulting virtual environment. We analyze task load, presence, perceived utility, and exploration and interaction behavior compared to low-visual-complexity versions of the same environments. Results: The neural reconstruction-based environments yielded significantly higher perceived utility and presence, along with higher perceived workload and more exploratory behavior; there was no significant difference in interactivity. Conclusion: We explore the feasibility of using modern reconstruction techniques to create digital twins of complex medical environments and objects. Without requiring expert knowledge or specialized hardware, users can create, explore, and interact with objects in virtual environments. The results indicate benefits such as high perceived utility while remaining technically approachable, suggesting promise for spatial planning and beyond.
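The abstract does not give the statistical procedure; as a sketch only, a paired within-subjects comparison of the two environment versions might look like the following, with hypothetical per-participant presence scores.

```python
# Hedged sketch of a within-subjects comparison; the scores below are
# hypothetical, not data from the study.
from scipy.stats import wilcoxon

neural = [5.1, 4.8, 5.6, 4.9, 5.3]          # neural reconstruction condition
low_complexity = [4.2, 4.5, 4.9, 4.1, 4.6]  # low-visual-complexity condition

stat, p = wilcoxon(neural, low_complexity)  # paired, non-parametric test
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")
```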
  3. Fully immersive virtual reality, with its unique ability to replicate the real world, could potentially aid real-time communication, allowing geographically separated teams to collaborate. To test the viability of using virtual reality for remote collaboration, we designed a system called "WeRSort" in which teams sorted cards in a virtual environment. Participants performed the task in teams of two under one of three conditions: controls-only, generic embodiment, and full embodiment. Objective measures of performance (completion time and percentage match with master cards) showed no significant difference, and subjective measures of presence and system usability likewise showed no statistical significance. However, overall workload obtained from the NASA-TLX showed that fully immersive virtual reality resulted in lower workload than the other two conditions. Qualitative data were collected and analyzed to understand collaboration using the awareness evaluation model.
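The abstract reports overall workload from the NASA-TLX. As a sketch, the unweighted "raw TLX" score simply averages the six subscale ratings; whether the study used the raw or weighted variant is not stated, and the example ratings below are hypothetical.

```python
# Minimal sketch of the unweighted ("raw") NASA-TLX overall workload score.
# The example ratings are hypothetical, on the usual 0-100 scale.
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")


def raw_tlx(ratings: dict[str, float]) -> float:
    """Average the six subscale ratings into a single 0-100 workload score."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)


print(raw_tlx({"mental": 55, "physical": 20, "temporal": 40,
               "performance": 30, "effort": 50, "frustration": 25}))  # ~36.7
```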
  4. A collaborative virtual assembly environment is a vital computer-aided design tool in product design and can also serve as a learning and training tool. It supports complex product design by enabling designers to collaborate and communicate with the other designers involved. This paper proposes a collaborative virtual assembly environment built in two phases, covering immersive and non-immersive use: phase one was developed in Unity 3D using the Virtual Reality Toolkit (VRTK) and SteamVR, while phase two was built using Vizard and Vizible. This work aims to allow scientists and engineers to discuss a concept design in a real-time VR environment so that they can interact with the objects and review their work before it is deployed. The paper proposes the system architecture and describes the design and implementation of the collaborative virtual assembly environment, with the goal of resolving the communication and interaction problems that arise during the concept-design phase.
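To illustrate the kind of state such a session must exchange, here is a hypothetical per-object transform message (id, position, rotation quaternion); the Unity/VRTK and Vizard/Vizible networking layers the paper actually uses are not shown, and all names and fields are assumptions.

```python
# Hypothetical sketch of a per-object state message a collaborative VR
# assembly session might broadcast each frame; names and fields are assumed.
import dataclasses
import json


@dataclasses.dataclass
class ObjectTransform:
    object_id: str
    position: tuple[float, float, float]         # meters, world space
    rotation: tuple[float, float, float, float]  # quaternion (x, y, z, w)

    def to_message(self) -> bytes:
        return json.dumps(dataclasses.asdict(self)).encode()


msg = ObjectTransform("bolt_07", (0.4, 1.1, -0.3), (0.0, 0.0, 0.0, 1.0)).to_message()
print(msg)  # b'{"object_id": "bolt_07", "position": [0.4, 1.1, -0.3], ...}'
```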
  5. A major challenge in monocular 3D object detection is the limited diversity and quantity of objects in real datasets. While augmenting real scenes with virtual objects holds promise for improving both the diversity and quantity of objects, it remains elusive due to the lack of an effective 3D object insertion method for complex real captured scenes. In this work, we study augmenting complex real indoor scenes with virtual objects for monocular 3D object detection. The main challenge is to automatically identify plausible physical properties for virtual assets (e.g., locations, appearances, sizes) in cluttered real scenes. To address this challenge, we propose a physically plausible indoor 3D object insertion approach that automatically copies virtual objects and pastes them into real scenes. The resulting objects have 3D bounding boxes with plausible physical locations and appearances. In particular, our method first identifies physically feasible locations and poses for the inserted objects to prevent collisions with the existing room layout. Subsequently, it estimates spatially-varying illumination for the insertion location, enabling the immersive blending of the virtual objects into the original scene with plausible appearances and cast shadows. We show that our augmentation method significantly improves existing monocular 3D object detection models and achieves state-of-the-art performance. For the first time, we demonstrate that physically plausible 3D object insertion, serving as a generative data augmentation technique, can lead to significant improvements for discriminative downstream tasks such as monocular 3D object detection.
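As a simplified sketch of the placement step this abstract describes, the following rejection-samples a floor position for an object's axis-aligned bounding box and discards poses that collide with existing layout boxes; the paper's pose estimation, illumination estimation, and shadow rendering are not modeled, and the AABB simplification and function names are assumptions.

```python
# Simplified sketch of collision-free object placement via rejection sampling.
# Boxes are ((min_x, min_y, min_z), (max_x, max_y, max_z)) tuples; the actual
# method also estimates poses and spatially-varying illumination.
import random


def overlaps(a, b) -> bool:
    """Axis-aligned bounding-box overlap test."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))


def sample_placement(size, room_extent, existing, tries=100):
    """Return a collision-free, floor-resting AABB for the object, or None."""
    w, h, d = size
    for _ in range(tries):
        x = random.uniform(0.0, room_extent[0] - w)
        z = random.uniform(0.0, room_extent[2] - d)
        box = ((x, 0.0, z), (x + w, h, z + d))  # object rests on the floor
        if not any(overlaps(box, other) for other in existing):
            return box
    return None  # no feasible pose found within the sampling budget
```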