Title: A Tangible Spherical Proxy for Object Manipulation in Augmented Reality
In this paper, we explore how a familiarly shaped object can serve as a physical proxy to manipulate virtual objects in Augmented Reality (AR) environments. Using the example of a tangible, handheld sphere, we demonstrate how irregularly shaped virtual objects can be selected, transformed, and released. After a brief description of the implementation of the tangible proxy, we present a buttonless interaction technique suited to the characteristics of the sphere. In a user study (N = 30), we compare our approach with three different controller-based methods that increasingly rely on physical buttons. As a use case, we focus on an alignment task that must be completed in mid-air as well as on a flat surface. Results show that our concept has advantages over two of the controller-based methods regarding task completion time and user ratings. Our findings inform research on integrating tangible interaction into AR experiences.
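The select–transform–release cycle the abstract describes can be sketched in code. The following is a minimal, hypothetical illustration only: the class names, the proximity-based "buttonless" grab rule, and the rigid-follow mapping are assumptions made for illustration, not the paper's actual technique.

```python
# Hypothetical sketch: a tracked tangible sphere acts as a proxy for a
# virtual object. Grab rule (proximity within grab_radius) and rigid
# pose-following are illustrative assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple   # (x, y, z) in metres
    rotation: tuple   # quaternion (w, x, y, z)

def distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

@dataclass
class SphereProxy:
    grab_radius: float = 0.10              # grab when within 10 cm (assumed)
    grabbed: object = None
    grab_offset: tuple = (0.0, 0.0, 0.0)

    def update(self, sphere: Pose, virtual_objects):
        if self.grabbed is None:
            # Buttonless selection: grab the first object inside grab_radius.
            for obj in virtual_objects:
                if distance(sphere.position, obj["pose"].position) < self.grab_radius:
                    self.grabbed = obj
                    self.grab_offset = tuple(
                        o - s for o, s in zip(obj["pose"].position, sphere.position))
                    break
        else:
            # While grabbed, the object rigidly follows the sphere's pose.
            self.grabbed["pose"] = Pose(
                position=tuple(s + d for s, d in zip(sphere.position, self.grab_offset)),
                rotation=sphere.rotation)

    def release(self):
        # Releasing leaves the object at its last transformed pose.
        self.grabbed = None
```

Preserving the grab offset (rather than snapping the object to the sphere's centre) keeps irregularly shaped objects from jumping at selection time, which matters for the alignment task described above.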
Award ID(s):
1748392
NSF-PAR ID:
10146524
Journal Name:
IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2020
Page Range / eLocation ID:
221-229
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent developments in the commercialization of virtual reality open up many opportunities for enhancing human interaction with three-dimensional objects and visualizations. Spherical visualizations allow for convenient exploration of certain types of data. Our tangible sphere, exactly aligned with the sphere visualizations shown in VR, implements a very natural way of interaction and utilizes senses and skills trained in the real world. In a lab study, we investigate the effects of the perception of actually holding a virtual spherical visualization in one's hands. As use cases, we focus on surface visualizations that benefit from or require a rounded shape. We compared the usage of two differently sized acrylic glass spheres to a related interaction technique that uses VR controllers as proxies. On the one hand, our work is motivated by the ability to create in VR a tangible, lightweight, handheld spherical display that would be difficult to realize physically. On the other hand, gaining insights into the impact of a fully tangible embodiment of a virtual object on task performance, comprehension of patterns, and user behavior is important in its own right. After a description of the implementation, we discuss the advantages and disadvantages of our approach, taking into account different handheld spherical displays that use outside and inside projection.
  2. The emerging possibilities of data analysis and exploration in virtual reality raise the question of how users can best be supported during such interactions. Spherical visualizations allow for convenient exploration of certain types of data. Our tangible sphere, exactly aligned with the sphere visualizations shown in VR, implements a very natural way of interaction and utilizes senses and skills trained in the real world. This work is motivated by the prospect of creating in VR a low-cost, tangible, robust, handheld spherical display that would be difficult or impossible to implement as a physical display. Our concept makes it possible to gain insights into the impact of a fully tangible embodiment of a virtual object on task performance, comprehension of patterns, and user behavior. After a description of the implementation, we discuss the advantages and disadvantages of our approach, taking into account different handheld spherical displays that use outside and inside projection.
  3. While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings to push forward research towards more realistic physicality in future VR/AR. 
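The decoding pipeline in the abstract above (surface EMG signals to finger-wise forces) can be sketched at a high level. This is a hypothetical stand-in: the RMS features and the linear readout with a ReLU are illustrative assumptions; the work described uses a trained neural-network model.

```python
# Hypothetical sketch of an EMG-to-force decoding pipeline:
# per-channel RMS features from a short signal window, then a learned
# readout to per-finger forces. The feature choice and linear readout
# are illustrative stand-ins for the paper's neural-network model.
import math

def rms_features(window_by_channel):
    """One RMS amplitude per EMG channel for a short time window."""
    return [math.sqrt(sum(x * x for x in ch) / len(ch))
            for ch in window_by_channel]

def decode_forces(features, weights, bias):
    """Readout: forces[f] = max(0, sum_c W[f][c] * features[c] + b[f]).
    The ReLU keeps decoded forces non-negative; a real decoder would be
    a trained multi-layer network."""
    return [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
            for row, b in zip(weights, bias)]
```

In practice the weights would be fit on paired EMG/force-plate recordings, and per-user calibration would adapt them to new forearms, as the abstract's "little calibration" claim suggests.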
  4.
    Virtual reality (VR) systems have been increasingly used in recent years in various domains, such as education and training. Presence, which can be described as 'the sense of being there', is one of the most important user experience aspects in VR. Several components may affect the level of presence, such as interaction, visual fidelity, and auditory cues. In recent years, significant effort has been put into increasing the sense of presence in VR. This study focuses on improving user experience in VR by increasing presence through increased interaction fidelity and enhanced illusions. Interaction in real life includes mutual and bidirectional encounters between two or more individuals through shared tangible objects. However, the majority of VR interaction to date has been unidirectional. This research aims to bridge this gap by enabling bidirectional mutual tangible embodied interactions between human users and virtual characters in world-fixed VR through real-virtual shared objects that extend from the virtual world into the real world. I hypothesize that the proposed novel interaction will shrink the boundary between the real and virtual worlds (through virtual characters that affect the physical world), increase the seamlessness of the VR system (enhance the illusion) and the fidelity of interaction, and increase the level of presence and social presence, enjoyment, and engagement. This paper includes the motivation, design, and development details of the proposed novel world-fixed VR system along with future directions.
  5. An accurate understanding of omnidirectional environment lighting is crucial for high-quality virtual object rendering in mobile augmented reality (AR). In particular, to support reflective rendering, existing methods have leveraged deep learning models to estimate lighting or have used physical light probes to capture it, typically represented in the form of an environment map. However, these methods often fail to provide visually coherent details or require additional setups. For example, the commercial framework ARKit uses a convolutional neural network that can generate realistic environment maps; however, the corresponding reflective rendering might not match the physical environment. In this work, we present the design and implementation of a lighting reconstruction framework called LITAR that enables realistic and visually coherent rendering. LITAR addresses several challenges of providing lighting information for mobile AR. First, to address the spatial variance problem, LITAR uses two-field lighting reconstruction, dividing the task into spatial-variance-aware near-field reconstruction and direction-aware far-field reconstruction. The resulting environment map allows reflective rendering with correct color tones. Second, LITAR uses two noise-tolerant data capturing policies to ensure data quality, namely guided bootstrapped movement and motion-based automatic capturing. Third, to handle the mismatch between mobile computation capability and the high computation requirements of lighting reconstruction, LITAR employs two novel real-time environment map rendering techniques called multi-resolution projection and anchor extrapolation. These two techniques effectively remove the need for time-consuming mesh reconstruction while maintaining visual quality. Lastly, LITAR provides several knobs that help mobile AR application developers make quality-performance trade-offs in lighting reconstruction.
We evaluated the performance of LITAR using a small-scale testbed experiment and a controlled simulation. Our testbed-based evaluation shows that LITAR achieves more visually coherent rendering effects than ARKit. Our design of multi-resolution projection significantly reduces the time of point cloud projection from about 3 seconds to 14.6 milliseconds. Our simulation shows that LITAR, on average, achieves up to 44.1% higher PSNR than Xihe, a recent related system, on two complex objects with physically-based materials.
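The two-field idea above (a near-field reconstruction for directions covered by nearby geometry, a far-field map for the rest) can be illustrated with a tiny composition step. This is a hypothetical sketch under assumed data layouts: the per-pixel validity mask and the simple "near-field wins where valid" rule are illustrative assumptions, not LITAR's actual algorithm.

```python
# Hypothetical sketch: composing a final environment map from a
# near-field reconstruction (valid only where nearby geometry was
# captured) and a far-field map filling the remaining directions.
# Data layout (2-D lists of RGB tuples in equirectangular form) and
# the "near wins where valid" rule are illustrative assumptions.

def compose_environment_map(near, near_valid, far):
    """near, far: 2-D lists of RGB tuples, same dimensions.
    near_valid: 2-D list of bools marking pixels the near-field covers."""
    return [[near[r][c] if near_valid[r][c] else far[r][c]
             for c in range(len(near[0]))]
            for r in range(len(near))]
```

A production system would blend rather than hard-switch at the field boundary and would render at multiple resolutions, as the multi-resolution projection technique in the abstract suggests.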