Virtual laboratories have been used in STEM undergraduate curricula for over twenty years. A virtual laboratory is an interactive computer simulation that mimics real-world laboratory experiences in silico. Virtual labs are a cost-effective pedagogical option for academic institutions that lack adequate funding for physical infrastructure and instrumentation, and they are an excellent proxy for lab activities that threaten individual safety or public health. Further, during the COVID-19 pandemic, virtual labs were the primary pedagogical strategy for laboratory instruction. STEM faculty have developed numerous techniques for incorporating virtual labs into classroom and laboratory activities, and new technologies such as artificial intelligence will expand virtual lab usability and effectiveness. Educational research demonstrates positive student outcomes and other benefits from virtual lab engagement. Continued mixed-methods research and the production of essential virtual lab-based evaluation materials, such as discipline-specific rubrics, are needed to further advance the application of this vital technology. Moreover, from a software development perspective, many more virtual laboratories are needed in technology, engineering, mathematics, and specialized scientific fields.
This content will become publicly available on November 18, 2025
Developing scalable hands-on virtual and mixed-reality science labs
Recent innovations in virtual and mixed-reality (VR/MR) technologies have enabled innovative hands-on training applications in high-risk/high-value fields such as medicine, flight, and worker safety. Here, we present a detailed description of a novel VR/MR tactile user interaction/interface (TUI) hardware and software development framework that enables the rapid, cost-effective, no-code development, optimization, and distribution of fully authentic hands-on VR/MR laboratory training experiences in the physical and life sciences. We applied our framework to the development and optimization of an introductory pipette calibration activity that is often carried out in real chemistry and biochemistry labs. Our approach provides users with nuanced real-time feedback on both their psychomotor skills during data acquisition and their attention to detail when conducting data analysis procedures. The cost-effectiveness of our approach relative to traditional face-to-face science labs improves access to quality hands-on science lab experiences. Importantly, the no-code nature of this Hands-On Virtual-Reality (HOVR) Lab platform enables faculty to iteratively optimize VR/MR experiences to meet their students' targeted needs without costly software development cycles. Our platform also accommodates TUIs using either standard virtual-reality controllers (VR TUI mode) or fully functional hand-held physical lab tools (MR TUI mode). In the latter case, physical lab tools are strategically retrofitted with optical tracking markers to enable tactile, experimental, and analytical authenticity in scientific experimentation. Preliminary user study data highlight the strengths and weaknesses of our generalized approach with regard to student affective and cognitive learning outcomes.
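Pipette calibration of the kind described is conventionally analyzed gravimetrically: dispensed water is weighed, converted to volume by density, and compared against accuracy and precision tolerances. The sketch below illustrates that style of data-analysis check; the density constant, tolerance values, and function names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a gravimetric pipette-calibration analysis of the
# kind such an activity could grade; constants and tolerances are
# illustrative, not taken from the HOVR Lab paper.
from statistics import mean, stdev

WATER_DENSITY_G_PER_UL = 0.000998  # approx. g/uL for water near 21 C

def gravimetric_volumes(masses_g):
    """Convert dispensed-water masses (g) to volumes (uL)."""
    return [m / WATER_DENSITY_G_PER_UL for m in masses_g]

def calibration_report(masses_g, nominal_ul, sys_tol_ul, rand_tol_ul):
    """Return systematic error (accuracy), random error (precision),
    and pass/fail flags against the supplied tolerances."""
    vols = gravimetric_volumes(masses_g)
    systematic = mean(vols) - nominal_ul   # mean offset from nominal volume
    random_err = stdev(vols)               # trial-to-trial repeatability
    return {
        "mean_volume_ul": mean(vols),
        "systematic_error_ul": systematic,
        "random_error_ul": random_err,
        "accuracy_ok": abs(systematic) <= sys_tol_ul,
        "precision_ok": random_err <= rand_tol_ul,
    }
```

A grading layer like this is what lets the platform give separate feedback on psychomotor skill (which drives the random error) and analytical care (how the report is computed and interpreted).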
- Award ID(s): 1918045
- PAR ID: 10557067
- Publisher / Repository: Springer Nature
- Date Published:
- Journal Name: Virtual Reality
- Volume: 28
- Issue: 4
- ISSN: 1434-9957
- Subject(s) / Keyword(s): No-code; Mixed-reality; Virtual-reality; STEM education; Science labs; Optical tracking; Multidisciplinary uses of XR
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Fluid power systems can be expensive and difficult to access, making it challenging to provide hands-on training. This work discusses the incorporation of Mixed Reality (MR) technology in fluid power applications to provide a virtual training environment that simulates the behavior of fluid power systems, allowing users to receive immediate feedback on the system's performance. Mixed reality is a digital technology that integrates a virtual environment with the real world by utilizing real-world sensor data and computer models. This technology allows running simulations that examine the complexity of highly coupled systems, producing new digital environments where physical and digital elements can interact in real time. With all these features, MR technology can be a practical training tool for running virtual simulations that mimic real-life industry settings. It can provide users with a virtual training environment, thus preparing the next generation of fluid power engineers and specialists. Throughout this work, we present the development and capabilities of a digitized virtual copy of a hydraulic excavator's arm in an MR environment as a proof of concept. The MR arm module is developed and deployed using Microsoft's Mixed Reality Toolkit (MRTK) for Unity through the HoloLens 2 MR headset. The MR development involves generating virtual copies of the mechanical and hydraulic subsystems, conducting the virtual assembly, and creating a user interface in the MR environment to visualize and interact with the model. The developed MR module enables visualizing the excavator's internal structure, conducting the virtual assembly, and running virtual simulations, all of which assist in training future fluid power operators. It is an effective training tool that helps train junior engineers and technicians, cutting down on cost and time.
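A virtual excavator arm of this kind must couple hydraulic cylinder stroke to joint rotation. One standard model, sketched below under the assumption of a cylinder spanning two rigid pivot arms (the link lengths here are illustrative, not from the paper), is the law of cosines applied to the pivot triangle:

```python
# Illustrative rigid-body coupling for an MR excavator-arm module (not the
# paper's implementation): a hydraulic cylinder of length L spans two pivot
# arms a and b, so the joint angle follows from the law of cosines.
import math

def joint_angle_rad(a, b, cylinder_len):
    """Angle between the two pivot arms for a given cylinder length."""
    cos_theta = (a**2 + b**2 - cylinder_len**2) / (2 * a * b)
    # Clamp against floating-point drift at full extension/retraction.
    return math.acos(max(-1.0, min(1.0, cos_theta)))

def cylinder_len_for_angle(a, b, theta_rad):
    """Inverse mapping: cylinder length that holds a given joint angle."""
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(theta_rad))
```

Driving the virtual arm's joint angle from simulated cylinder extension each frame is what lets the MR module animate the assembled mechanism in response to hydraulic inputs.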
-
Abstract Recent immersive mixed reality (MR) and virtual reality (VR) displays enable users to use their hands to interact with both veridical and virtual environments simultaneously. Therefore, it becomes important to understand the performance of human hand-reaching movement in MR. Studies have shown that different virtual environment visualization modalities can affect point-to-point reaching performance using a stylus, but it is not yet known if these effects translate to direct human-hand interactions in mixed reality. This paper focuses on evaluating human point-to-point motor performance in MR and VR for both finger-pointing and cup-placement tasks. Six performance measures relevant to haptic interface design were measured for both tasks under several different visualization conditions (“MR with indicator,” “MR without indicator,” and “VR”) to determine what factors contribute to hand-reaching performance. A key finding was evidence of a trade-off between reaching “motion confidence” measures (indicated by throughput, number of corrective movements, and peak velocity) and “accuracy” measures (indicated by end-point error and initial movement error). Specifically, we observed that participants tended to be more confident in the “MR without Indicator” condition for finger-pointing tasks. These results contribute critical knowledge to inform the design of VR/MR interfaces based on the application's user performance requirements.
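Throughput and end-point error in point-to-point reaching studies are commonly computed with the effective-throughput formulation of Fitts' law (ISO 9241-9 style). The sketch below shows that standard computation; the paper's exact definitions of its six measures may differ, so treat this as a generic illustration.

```python
# Sketch of the standard effective-throughput computation used in
# point-to-point reaching studies; this illustrates the generic ISO 9241-9
# formulation, not necessarily the paper's exact definitions.
import math
from statistics import mean, stdev

def effective_throughput(distance, endpoint_errors, movement_times):
    """Fitts throughput (bits/s) from nominal target distance (m), signed
    end-point errors along the movement axis (m), and movement times (s)."""
    effective_width = 4.133 * stdev(endpoint_errors)      # We = 4.133 * SD
    effective_distance = distance + mean(endpoint_errors)  # De = mean reach
    ide = math.log2(effective_distance / effective_width + 1)  # bits
    return ide / mean(movement_times)
```

Under this formulation the trade-off reported above is visible directly: tighter end-point scatter shrinks the effective width and raises throughput, while faster but sloppier reaches trade index of difficulty for movement time.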
-
Though virtual reality (VR) has advanced to a certain level of maturity in recent years, the general public, especially the population of the blind and visually impaired (BVI), still cannot enjoy the benefits it provides. Current VR accessibility applications have been developed either on expensive head-mounted displays or with extra accessories and mechanisms, which are either not accessible or inconvenient for BVI individuals. In this paper, we present a mobile VR app that enables BVI users to access a virtual environment on an iPhone in order to build their skills of perception and recognition of the virtual environment and the virtual objects in it. The app uses the iPhone on a selfie stick to simulate a long cane in VR, and applies Augmented Reality (AR) techniques to track the iPhone's real-time poses in an empty space of the real world, which are then synchronized to the long cane in the VR environment. Due to the use of mixed reality (the integration of VR and AR), we call it the Mixed Reality Cane (MR Cane), which provides BVI users auditory and vibrotactile feedback whenever the virtual cane comes in contact with objects in VR. Thus, the MR Cane allows BVI individuals to interact with the virtual objects and identify approximate sizes and locations of the objects in the virtual environment. We performed preliminary user studies with blind-folded participants to investigate the effectiveness of the proposed mobile approach, and the results indicate that the MR Cane could be effective in helping BVI individuals understand the interaction with virtual objects and explore 3D virtual environments. The MR Cane concept can be extended to new applications in navigation, training, and entertainment for BVI individuals without significant additional effort.
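The core per-frame loop such a system implies is: project the tracked phone pose along the stick to a virtual cane tip, test the tip against virtual objects, and fire auditory/vibrotactile feedback on contact. The sketch below illustrates that loop in simplified form; the geometry, names, and the sphere-vs-box collision test are illustrative assumptions, not the MR Cane's actual code.

```python
# Hypothetical sketch (names and geometry are illustrative, not the MR
# Cane's implementation): project the tracked phone pose to a virtual cane
# tip, then trigger audio/vibration when the tip touches a virtual object.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box standing in for a virtual object."""
    lo: tuple
    hi: tuple

def cane_tip(phone_pos, forward_unit, cane_length):
    """Extend the phone pose along the stick axis to the virtual tip."""
    return tuple(p + cane_length * f for p, f in zip(phone_pos, forward_unit))

def tip_hits(tip, box, radius=0.02):
    """Sphere-vs-AABB test: does the cane tip touch the object?"""
    d2 = 0.0
    for t, lo, hi in zip(tip, box.lo, box.hi):
        c = min(max(t, lo), hi)   # closest point on the box along this axis
        d2 += (t - c) ** 2
    return d2 <= radius ** 2

def feedback(tip, objects):
    """Decide which feedback channels to trigger this frame."""
    hit = any(tip_hits(tip, b) for b in objects)
    return {"audio": hit, "vibration": hit}
```

Running this check against the AR-tracked pose every frame is what turns the phone-on-a-stick into a cane: contact events, not vision, convey object size and location.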
-
Photorealistic avatars have become essential for immersive applications in virtual reality (VR) and augmented reality (AR), enabling lifelike interactions in areas such as training simulations, telemedicine, and virtual collaboration. These avatars bridge the gap between the physical and digital worlds, improving the user experience through realistic human representation. However, existing avatar creation techniques face significant challenges, including high costs, long creation times, and limited utility in virtual applications. Manual methods, such as MetaHuman, require extensive time and expertise, while automatic approaches, such as NeRF-based pipelines, often lack efficiency and detailed facial expression fidelity, and cannot be rendered at a speed sufficient for real-time applications. Drawing on several cutting-edge techniques, we introduce an end-to-end 3D Gaussian Splatting (3DGS) avatar creation pipeline that leverages monocular video input to create a scalable and efficient photorealistic avatar directly compatible with the Unity game engine. Our pipeline incorporates a novel Gaussian splatting technique with customized preprocessing that enables the use of "in the wild" monocular video capture, detailed facial expression reconstruction, and embedding within a fully rigged avatar model. Additionally, we present a Unity-integrated Gaussian Splatting Avatar Editor, offering a user-friendly environment for VR/AR application development. Experimental results validate the effectiveness of our preprocessing pipeline in standardizing custom data for 3DGS training and demonstrate the versatility of Gaussian avatars in Unity, highlighting the scalability and practicality of our approach.
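The real-time rendering speed that 3DGS offers over NeRF-based pipelines comes largely from rasterizing depth-sorted Gaussians with simple front-to-back alpha compositing rather than ray-marching a network. The sketch below shows that compositing rule for a single pixel in one color channel; it is a minimal illustration of the general 3DGS technique, not code from this pipeline.

```python
# Minimal single-pixel illustration of the front-to-back alpha compositing
# at the heart of Gaussian-splatting renderers (general technique, not this
# paper's pipeline): each depth-sorted splat contributes its color weighted
# by its alpha and by the transmittance left over from nearer splats.
def composite(splats):
    """splats: list of (color, alpha) pairs sorted near-to-far.
    Returns (accumulated color, remaining transmittance)."""
    color, transmittance = 0.0, 1.0
    for c, a in splats:
        color += transmittance * a * c   # weight by what light remains
        transmittance *= (1.0 - a)       # nearer splats occlude farther ones
    return color, transmittance
```

Because this accumulation is a short, order-dependent loop per pixel, it maps directly onto GPU rasterization, which is why Gaussian avatars can be rendered inside a game engine like Unity at interactive rates.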
