
Title: Give Me a Hand: Exploring Bidirectional Mutual Embodied Tangible Interaction in Virtual Reality
Virtual reality (VR) systems have been increasingly used in recent years in various domains, such as education and training. Presence, which can be described as "the sense of being there," is one of the most important user-experience aspects in VR. Several components may affect the level of presence, such as interaction, visual fidelity, and auditory cues. In recent years, significant effort has been put into increasing the sense of presence in VR. This study focuses on improving user experience in VR by increasing presence through higher interaction fidelity and enhanced illusions. Interaction in real life includes mutual and bidirectional encounters between two or more individuals through shared tangible objects. However, the majority of VR interaction to date has been unidirectional. This research aims to bridge this gap by enabling bidirectional mutual tangible embodied interactions between human users and virtual characters in world-fixed VR through real-virtual shared objects that extend from the virtual world into the real world. I hypothesize that the proposed novel interaction will shrink the boundary between the real and virtual worlds (through virtual characters that affect the physical world), increase the seamlessness of the VR system (enhance the illusion) and the fidelity of interaction, and increase the levels of presence, social presence, enjoyment, and engagement. This paper presents the motivation, design, and development details of the proposed novel world-fixed VR system, along with future directions.
Authors:
Award ID(s):
1850245
Publication Date:
NSF-PAR ID:
10219166
Journal Name:
IEEE International Conference on Consumer Electronics
Sponsoring Org:
National Science Foundation
More Like this
  1. While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings to push forward research towards more realistic physicality in future VR/AR.
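The decoding pipeline described above (forearm EMG features fed to a neural network that outputs per-finger forces) can be sketched as follows. This is an illustrative outline only: the channel count, RMS features, layer sizes, and randomly initialized weights are assumptions standing in for the authors' trained model, not their actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 8   # forearm EMG electrodes (assumed count)
N_FINGERS = 5    # one force estimate per finger

# Random weights stand in for a trained model.
W1 = rng.normal(size=(N_CHANNELS, 16))
b1 = np.zeros(16)
W2 = rng.normal(size=(16, N_FINGERS))
b2 = np.zeros(N_FINGERS)

def rms_features(emg_window: np.ndarray) -> np.ndarray:
    """Root-mean-square amplitude per channel over a short window."""
    return np.sqrt(np.mean(emg_window ** 2, axis=0))

def decode_forces(emg_window: np.ndarray) -> np.ndarray:
    """Map a (samples x channels) EMG window to per-finger forces."""
    x = rms_features(emg_window)
    h = np.maximum(0.0, x @ W1 + b1)     # ReLU hidden layer
    return np.maximum(0.0, h @ W2 + b2)  # forces are non-negative

# One 200-sample window of simulated EMG.
window = rng.normal(scale=0.1, size=(200, N_CHANNELS))
forces = decode_forces(window)
print(forces.shape)  # (5,)
```

In a real system the window would stream from the sensors and the weights would be learned from calibration data; the point here is only the shape of the mapping from raw EMG to finger-wise force.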
  2. The popular concepts of Virtual Reality (VR) and Augmented Reality (AR) arose from our ability to interact with objects and environments that appear to be real, but are not. One of the most powerful aspects of these paradigms is the ability of virtual entities to embody a richness of behavior and appearance that we perceive as compatible with reality, and yet unconstrained by reality. The freedom to be or do almost anything helps to reinforce the notion that such virtual entities are inherently distinct from the real world—as if they were magical. This independent magical status is reinforced by the typical need for the use of "magic glasses" (head-worn displays) and "magic wands" (spatial interaction devices) that are ceremoniously bestowed on a chosen few. For those individuals, the experience is inherently egocentric in nature—the sights and sounds effectively emanate from the magic glasses, not the real world, and unlike the magic we are accustomed to from cinema, the virtual entities are unable to affect the real world. This separation of real and virtual is also inherent in our related conceptual frameworks, such as Milgram's Virtuality Continuum, where the real and virtual are explicitly distinguished and mixed. While these frameworks are indeed conceptual, we often feel the need to position our systems and research somewhere in the continuum, further reinforcing the notion that real and virtual are distinct. The very structures of our professional societies, our research communities, our journals, and our conferences tend to solidify the evolutionary separation of the virtual from the real. However, independent forces are emerging that could reshape our notions of what is real and virtual, and transform our sense of what it means to interact with technology. First, even within the VR/AR communities, as the appearance and behavioral realism of virtual entities improves, virtual experiences will become more real. Second, as domains such as artificial intelligence, robotics, and the Internet of Things (IoT) mature and permeate throughout our lives, experiences with real things will become more virtual. The convergence of these various domains has the potential to transform the egocentric magical nature of VR/AR into more pervasive allocentric magical experiences and interfaces that interact with and can affect the real world. This transformation will blur traditional technological boundaries such that experiences will no longer be distinguished as real or virtual, and our sense for what is natural will evolve to include what we once remember as cinematic magic.
  3. Chen, J.Y.C. (Ed.)
    In recent years there has been a sharp increase in active shooter events, but there has been no introduction of new technology or tactics capable of increasing preparedness and training for active shooter events. This has raised a major concern about the lack of tools that would allow robust predictions of realistic human movements and the lack of understanding about the interaction in designated simulation environments. It is impractical to carry out live experiments where thousands of people are evacuated from buildings designed for every possible emergency condition. There has been progress in understanding human movement, human motion synthesis, crowd dynamics, indoor environments, and their relationships with active shooter events, but challenges remain. This paper presents a virtual reality (VR) experimental setup for conducting virtual evacuation drills in response to extreme events and demonstrates the behavior of agents during an active shooter environment. The behavior of agents is implemented using behavior trees in the Unity gaming engine. The VR experimental setup can simulate human behavior during an active shooter event in a campus setting. A presence questionnaire (PQ) was used in the user study to evaluate the effectiveness and engagement of our active shooter environment. The results show that the majority of users agreed that the sense of presence was increased when using the emergency response training environment for a building evacuation environment.
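A behavior tree, as used for the evacuation agents above, composes simple condition and action nodes with Selector (first-success) and Sequence (first-failure) nodes. The minimal sketch below is in Python rather than Unity C#, and the node set and evacuation logic are illustrative assumptions, not the authors' implementation.

```python
SUCCESS, FAILURE = "success", "failure"

class Selector:
    """Ticks children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, agent):
        for child in self.children:
            if child.tick(agent) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Ticks children in order; fails on the first child that fails."""
    def __init__(self, *children): self.children = children
    def tick(self, agent):
        for child in self.children:
            if child.tick(agent) == FAILURE:
                return FAILURE
        return SUCCESS

class Condition:
    def __init__(self, pred): self.pred = pred
    def tick(self, agent): return SUCCESS if self.pred(agent) else FAILURE

class Action:
    def __init__(self, fn): self.fn = fn
    def tick(self, agent): self.fn(agent); return SUCCESS

# Toy evacuation policy: hide if the shooter is near, otherwise evacuate.
tree = Selector(
    Sequence(Condition(lambda a: a["shooter_near"]),
             Action(lambda a: a.update(state="hiding"))),
    Action(lambda a: a.update(state="evacuating")),
)

agent = {"shooter_near": True, "state": "idle"}
tree.tick(agent)  # ticked once per frame in a game loop
print(agent["state"])  # hiding
```

In an engine such as Unity, each agent's tree would be ticked every frame, with conditions reading the simulated world (line of sight to the shooter, distance to exits) and actions driving navigation.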
  4. Recent developments in the commercialization of virtual reality open up many opportunities for enhancing human interaction with three-dimensional objects and visualizations. Spherical visualizations allow for convenient exploration of certain types of data. Our tangible sphere, exactly aligned with the sphere visualizations shown in VR, implements a very natural way of interaction and utilizes senses and skills trained in the real world. In a lab study, we investigate the effects of the perception of actually holding a virtual spherical visualization in hands. As use cases, we focus on surface visualizations that benefit from or require a rounded shape. We compared the usage of two differently sized acrylic glass spheres to a related interaction technique that utilizes VR controllers as proxies. On the one hand, our work is motivated by the ability to create in VR a tangible, lightweight, handheld spherical display that can hardly be realized in reality. On the other hand, gaining insights about the impact of a fully tangible embodiment of a virtual object on task performance, comprehension of patterns, and user behavior is important in its own right. After a description of the implementation, we discuss the advantages and disadvantages of our approach, taking into account different handheld spherical displays utilizing outside and inside projection.
  5. Detailed hand motions play an important role in face-to-face communication to emphasize points, describe objects, clarify concepts, or replace words altogether. While shared virtual reality (VR) spaces are becoming more popular, these spaces do not, in most cases, capture and display accurate hand motions. In this paper, we investigate the consequences of such errors in hand and finger motions on comprehension, character perception, social presence, and user comfort. We conduct three perceptual experiments where participants guess words and movie titles based on motion captured movements. We introduce errors and alterations to the hand movements and apply techniques to synthesize or correct hand motions. We collect data from more than 1000 Amazon Mechanical Turk participants in two large experiments, and conduct a third experiment in VR. As results might differ depending on the virtual character used, we investigate all effects on two virtual characters of different levels of realism. We furthermore investigate the effects of clip length in our experiments. Amongst other results, we show that the absence of finger motion significantly reduces comprehension and negatively affects people's perception of a virtual character and their social presence. Adding some hand motions, even random ones, does attenuate some of these effects when it comes to the perception of the virtual character or social presence, but it does not necessarily improve comprehension. Slightly inaccurate or erroneous hand motions are sufficient to achieve the same level of comprehension as with accurate hand motions. They might however still affect the viewers' impression of a character. Finally, jittering hand motions should be avoided as they significantly decrease user comfort.