Title: MuVR: A Multiuser Virtual Reality Framework for Unity
Due to the rapidly evolving nature of the Virtual Reality field, many frameworks for multiuser interaction have become outdated, with few (if any) designed to support mixed virtual and non-virtual interactions. We have developed a framework that lays an extensible and forward-looking foundation for mixed interactions based upon a novel method of ensuring that inputs, visuals, and networking can all communicate without needing to understand the others' internals. We tested this framework in the development of several applications and showed that it can easily be adapted to support application requirements it was not originally designed for.
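The abstract's central idea is decoupling: input, visual, and networking modules exchange data without knowing each other's internals. The sketch below is not MuVR's actual code (which is Unity/C#); it is a minimal publish/subscribe illustration in Python of that general decoupling pattern, with hypothetical topic and module names.

```python
# Illustrative sketch (not MuVR's implementation): a minimal publish/subscribe
# bus showing how input, visual, and networking modules can exchange pose
# updates without referencing each other directly.
from collections import defaultdict
from typing import Callable

class EventBus:
    """Routes named events to subscribers; modules never import each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []

# A "visuals" module and a "networking" module both listen for pose updates.
bus.subscribe("pose", lambda p: received.append(("render", p)))
bus.subscribe("pose", lambda p: received.append(("replicate", p)))

# An "input" module publishes a head pose without knowing who consumes it.
bus.publish("pose", {"x": 0.0, "y": 1.6, "z": 0.0})
```

Because producers and consumers share only topic names and payload shapes, any one module can be replaced (e.g. swapping the network layer) without touching the others.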
Award ID(s):
2148788
PAR ID:
10398473
Author(s) / Creator(s):
Date Published:
Journal Name:
EPiC Series in Computing
Volume:
88
ISSN:
2398-7340
Page Range / eLocation ID:
61 to 50
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) has gained considerable attention in HRI research in recent years. However, the HRI community lacks shared terminology and a framework for characterizing aspects of mixed reality interfaces, which poses serious problems for future research. It is therefore important to have a common set of terms and concepts that can be used to precisely describe and organize the diverse array of work being done within the field. In this article, we present a novel taxonomic framework for different types of VAM-HRI interfaces, composed of four main categories of virtual design elements (VDEs). We present and justify our taxonomy and explain how its elements have developed over the past 30 years, as well as the directions VAM-HRI is headed in the coming decade.
  2. As social VR applications grow in popularity, blind and low vision users encounter continued accessibility barriers. Yet social VR, which enables multiple people to engage in the same virtual space, presents a unique opportunity to allow other people to support a user's access needs. To explore this opportunity, we designed a framework based on physical sighted guidance that enables a guide to support a blind or low vision user with navigation and visual interpretation. A user can virtually hold on to their guide and move with them, while the guide describes the environment. We studied the use of our framework with 16 blind and low vision participants and found that they had a wide range of preferences. For example, participants wanted to use their guide to support social interactions and to establish a human connection with a human-appearing guide. We also highlight opportunities for novel guidance abilities in VR, such as dynamically altering an inaccessible environment. Through this work, we open a novel design space for a versatile approach to making VR fully accessible.
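The "virtually hold on to their guide and move with them" interaction can be pictured as a per-frame position constraint. The snippet below is a hypothetical sketch (not the authors' implementation): it keeps the held user at a fixed grip offset from the guide, with made-up coordinates for illustration.

```python
# Hypothetical sketch of "holding on" to a sighted guide in VR:
# each frame, the held user's position tracks the guide's position
# plus a fixed grip offset (e.g. half a metre behind).

def follow_guide(guide_pos, offset):
    """Return the held user's new position: guide position plus grip offset."""
    return tuple(g + o for g, o in zip(guide_pos, offset))

# Guide moves to z = 2.0 m; the user, holding on 0.5 m behind, moves with them.
user_pos = follow_guide((0.0, 0.0, 2.0), (0.0, 0.0, -0.5))
```

A real system would smooth this update and respect comfort limits (e.g. capping acceleration to reduce motion sickness), but the constraint itself is this simple.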
  3. Abstract Preparing preservice teachers (PSTs) to be able to notice, interpret, respond to and orchestrate student ideas—the core practices of responsive teaching—is a key goal for contemporary science and mathematics teacher education. This mixed‐methods study, employing a virtual reality (VR)‐supported simulation integrated with artificial intelligence (AI)‐powered virtual students, explored the frequent patterns of PSTs' talk moves as they attempted to orchestrate a responsive discussion, as well as the affordances and challenges of leveraging AI‐supported virtual simulation to enhance PSTs' responsive teaching skills. Sequential analysis of the talk moves of both PSTs (n = 24) and virtual students indicated that although PSTs did employ responsive talk moves, they encountered difficulties in transitioning from the authoritative, teacher‐centred teaching approach to a responsive way of teaching. The qualitative analysis, with triangulated dialogue transcripts, observational field notes and semi‐structured interviews, revealed participants' engagement in (1) orchestrating discussion by leveraging the design features of AI‐supported simulation, (2) iterative rehearsals through naturalistic and contextualized interactions and (3) exploring realism and boundaries in AI‐powered virtual students. The study findings provide insights into the potential of leveraging AI‐supported virtual simulation to improve PSTs' responsive teaching skills. The study also underscores the need for PSTs to engage in well‐designed pedagogical practices with adaptive and in situ support.

Practitioner notes

What is already known about this topic:
- Developing the teaching capacity of responsive teaching is an important goal for preservice teacher (PST) education. PSTs need systematic opportunities to build fluency in this approach.
- Virtual simulations can provide PSTs with opportunities to practice interactive teaching and have been shown to improve their teaching skills.
- Artificial intelligence (AI)‐powered virtual students can be integrated into virtual simulations to enable interactive and authentic practice of teaching.

What this paper adds:
- AI‐supported simulation has the potential to support PSTs' responsive teaching skills.
- While PSTs enact responsive teaching talk moves, they struggle to enact those talk moves in challenging teaching scenarios due to limited epistemic and pedagogical resources.
- AI‐supported simulation affords iterative and contextualized opportunities for PSTs to practice responsive teaching talk moves; it challenges teachers to analyse student discourse and respond in real time.

Implications for practice and/or policy:
- PSTs should build a teaching repertoire with both basic and advanced responsive talk moves.
- The learning module should adapt to PSTs' prior experience and provide PSTs with in situ learning support to navigate challenging teaching scenarios.
- Integrating interaction features and AI‐based virtual students into the simulation can facilitate PSTs' active participation.
  4. Abstract Recent immersive mixed reality (MR) and virtual reality (VR) displays enable users to use their hands to interact with both veridical and virtual environments simultaneously. Therefore, it becomes important to understand the performance of human hand-reaching movement in MR. Studies have shown that different virtual environment visualization modalities can affect point-to-point reaching performance using a stylus, but it is not yet known if these effects translate to direct human-hand interactions in mixed reality. This paper focuses on evaluating human point-to-point motor performance in MR and VR for both finger-pointing and cup-placement tasks. Six performance measures relevant to haptic interface design were measured for both tasks under several different visualization conditions (“MR with indicator,” “MR without indicator,” and “VR”) to determine what factors contribute to hand-reaching performance. A key finding was evidence of a trade-off between reaching “motion confidence” measures (indicated by throughput, number of corrective movements, and peak velocity) and “accuracy” measures (indicated by end-point error and initial movement error). Specifically, we observed that participants tended to be more confident in the “MR without Indicator” condition for finger-pointing tasks. These results contribute critical knowledge to inform the design of VR/MR interfaces based on the application's user performance requirements. 
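The abstract names throughput as one of its confidence measures. Its exact definition isn't given here; a common Fitts'-law formulation of point-to-point throughput (index of difficulty divided by movement time) can be computed as below, with made-up example numbers.

```python
# Common Fitts'-law throughput (bits/s), shown for illustration; the paper's
# exact metric definitions are not reproduced in this abstract.
import math

def fitts_throughput(distance: float, width: float, movement_time: float) -> float:
    """Throughput = index of difficulty / movement time.

    Uses the Shannon formulation ID = log2(D/W + 1), with distance D and
    target width W in the same units and movement_time in seconds.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time

# Example: a 0.30 m reach to a 0.03 m-wide target completed in 0.8 s.
tp = fitts_throughput(0.30, 0.03, 0.8)
```

Higher throughput with unchanged end-point error would indicate the kind of "motion confidence" gain the abstract reports for the "MR without indicator" condition.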
  5. Abstract Fluid power systems can be expensive and difficult to access, making it challenging to provide hands-on training. This work discusses the incorporation of Mixed Reality (MR) technology in fluid power applications to provide a virtual training environment that simulates the behavior of fluid power systems, allowing users to receive immediate feedback on a system's performance. Mixed reality is a digital technology that integrates a virtual environment with the real world by utilizing real-world sensor data and computer models. This technology allows running simulations that examine the complexity of highly coupled systems, producing new digital environments where physical and digital elements interact in real time. With all these features, MR technology can be a practical training tool for running virtual simulations that mimic real-life industry settings, providing users with a virtual training environment and thus preparing the next generation of fluid power engineers and specialists. Throughout this work, we present the development and capabilities of a digitized virtual copy of a hydraulic excavator's arm in an MR environment as a proof of concept. The MR arm module is developed and deployed to the HoloLens 2 MR headset using Microsoft's Mixed Reality Toolkit (MRTK) for Unity. The MR development involves generating virtual copies of the mechanical and hydraulic subsystems, conducting the virtual assembly, and creating a user interface in the MR environment to visualize and interact with the model. The developed MR module enables visualizing the excavator's internal structure, conducting the virtual assembly, and running virtual simulations, all of which assist in training future fluid power operators. It is an effective training tool that helps train junior engineers and technicians, cutting down on cost and time.