

Title: Force-Aware Interface via Electromyography for Natural VR/AR Interaction
While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings to push forward research towards more realistic physicality in future VR/AR.
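The decoding pipeline the abstract describes — surface EMG signals in, finger-wise force estimates out — can be sketched as a small regression network. The per-channel RMS feature, the layer sizes, and the class name below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def rms_features(emg_window):
    """Root-mean-square amplitude per EMG channel over a short window.
    emg_window: (samples, channels) array of raw sEMG readings."""
    return np.sqrt(np.mean(emg_window ** 2, axis=0))

class TinyForceDecoder:
    """Minimal two-layer regressor: 8 EMG channel features -> 5 finger forces.
    Weights here are random; a real system would train them on paired
    EMG/force-sensor recordings."""
    def __init__(self, n_in=8, n_hidden=16, n_out=5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        return h @ self.W2 + self.b2                # per-finger force estimates
```

Per-user calibration, as the abstract suggests, would amount to briefly fine-tuning such a model on a few labeled samples from the new user.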
Award ID(s):
2027652 1729815 2232817 2225861
NSF-PAR ID:
10384309
Journal Name:
ACM Transactions on Graphics
Volume:
41
Issue:
6
ISSN:
0730-0301
Page Range / eLocation ID:
1 to 18
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Augmented/virtual reality (AR/VR) technologies can be deployed in a household environment for applications such as checking the weather or traffic reports, watching a summary of the news, or attending classes. Since AR/VR applications are highly delay-sensitive, delivering these types of reports at maximum quality can be very challenging. In this paper, we consider that users go through a series of AR/VR experience units that can be delivered at different experience quality levels. In order to maximize the quality of the experience while minimizing the cost of delivering it, we aim to predict the users’ behavior in the home and the experiences they are interested in at specific moments in time. We describe a deep-learning-based technique to predict the users’ requests from AR/VR devices and optimize the local caching of experience units. We evaluate the performance of the proposed technique on two real-world datasets and compare our results with other baselines. Our results show that predicting users’ requests can improve the quality of experience and decrease the cost of delivery.
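The abstract's idea of caching experience units by predicted demand can be illustrated with a toy eviction policy. The `PredictiveCache` class below is hypothetical: the predicted probabilities stand in for the output of the paper's deep-learning predictor:

```python
class PredictiveCache:
    """Toy cache: keep the experience units with the highest
    predicted request probability, evicting the least likely one."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # unit_id -> (predicted_prob, payload)

    def update(self, unit_id, predicted_prob, payload):
        self.store[unit_id] = (predicted_prob, payload)
        if len(self.store) > self.capacity:
            # evict the unit the model thinks is least likely to be requested
            victim = min(self.store, key=lambda u: self.store[u][0])
            del self.store[victim]

    def get(self, unit_id):
        entry = self.store.get(unit_id)
        return entry[1] if entry else None  # None signals a cache miss
```

A real system would additionally weigh unit size and delivery cost, but the core loop — predict, rank, evict — is the same.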
  2. The popular concepts of Virtual Reality (VR) and Augmented Reality (AR) arose from our ability to interact with objects and environments that appear to be real, but are not. One of the most powerful aspects of these paradigms is the ability of virtual entities to embody a richness of behavior and appearance that we perceive as compatible with reality, and yet unconstrained by reality. The freedom to be or do almost anything helps to reinforce the notion that such virtual entities are inherently distinct from the real world—as if they were magical. This independent magical status is reinforced by the typical need for the use of “magic glasses” (head-worn displays) and “magic wands” (spatial interaction devices) that are ceremoniously bestowed on a chosen few. For those individuals, the experience is inherently egocentric in nature—the sights and sounds effectively emanate from the magic glasses, not the real world, and unlike the magic we are accustomed to from cinema, the virtual entities are unable to affect the real world. This separation of real and virtual is also inherent in our related conceptual frameworks, such as Milgram’s Virtuality Continuum, where the real and virtual are explicitly distinguished and mixed. While these frameworks are indeed conceptual, we often feel the need to position our systems and research somewhere in the continuum, further reinforcing the notion that real and virtual are distinct. The very structures of our professional societies, our research communities, our journals, and our conferences tend to solidify the evolutionary separation of the virtual from the real. However, independent forces are emerging that could reshape our notions of what is real and virtual, and transform our sense of what it means to interact with technology. First, even within the VR/AR communities, as the appearance and behavioral realism of virtual entities improves, virtual experiences will become more real. 
Second, as domains such as artificial intelligence, robotics, and the Internet of Things (IoT) mature and permeate throughout our lives, experiences with real things will become more virtual. The convergence of these various domains has the potential to transform the egocentric magical nature of VR/AR into more pervasive allocentric magical experiences and interfaces that interact with and can affect the real world. This transformation will blur traditional technological boundaries such that experiences will no longer be distinguished as real or virtual, and our sense for what is natural will evolve to include what we once remember as cinematic magic. 
  3. Abstract

    Successful surgical operations are characterized by preplanning routines to be executed during actual surgical operations. To achieve this, surgeons rely on the experience acquired from the use of cadavers, enabling technologies like virtual reality (VR), and clinical years of practice. However, cadavers, having no dynamism and realism as they lack blood, can exhibit limited tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impact surgical training, increasing the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D-printed, collagen-based specimens that can enhance task performance like planning and execution. To achieve this, the MRCS charts out a path prior to a user task execution based on a visual, physical, and dynamic environment on the state of a target object by utilizing surgeon-created virtual imagery that, when projected onto a 3D-printed biospecimen as AR, reacts visually to user input on its actual physical state. This allows a real-time user reaction of the MRCS by displaying new multi-sensory virtual states of an object prior to performing on the actual physical state of that same object, enabling effective task planning. Tracked user actions using an integrated 9-degree-of-freedom IMU demonstrate task execution. This demonstrates that a user, with limited knowledge of specific anatomy, can, under guidance, execute a preplanned task. In addition to surgical planning, this system can be generally applied in areas such as construction, maintenance, and education.
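The 9-degree-of-freedom IMU tracking mentioned above typically rests on some form of sensor fusion. As one minimal, generic example (a standard technique, not the MRCS implementation), a complementary filter blends fast-but-drifting gyroscope integration with slow-but-stable accelerometer tilt:

```python
def complementary_tilt(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One complementary-filter step for a single tilt axis.
    prev_angle: last fused estimate (degrees)
    gyro_rate:  angular rate from the gyroscope (degrees/s)
    accel_angle: tilt inferred from the accelerometer's gravity vector (degrees)
    alpha: trust in the gyro path; (1 - alpha) corrects long-term drift."""
    return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Run once per sample, this keeps the orientation estimate responsive while the accelerometer term slowly pulls out gyro drift; the magnetometer channel of a 9-DoF unit would correct heading the same way.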

     
  4. As augmented and virtual reality (AR/VR) technology matures, a method is desired to represent real-world persons visually and aurally in a virtual scene with high fidelity to craft an immersive and realistic user experience. Current technologies leverage camera and depth sensors to render visual representations of subjects through avatars, and microphone arrays are employed to localize and separate high-quality subject audio through beamforming. However, challenges remain in both realms. In the visual domain, avatars can only map key features (e.g., pose, expression) to a predetermined model, rendering them incapable of capturing the subjects’ full details. Alternatively, high-resolution point clouds can be utilized to represent human subjects. However, such three-dimensional data is computationally expensive to process. In the realm of audio, sound source separation requires prior knowledge of the subjects’ locations. However, it may take unacceptably long for sound source localization algorithms to provide this knowledge, which can still be error-prone, especially with moving objects. These challenges make it difficult for AR systems to produce real-time, high-fidelity representations of human subjects for applications such as AR/VR conferencing that mandate negligible system latency. We present Acuity, a real-time system capable of creating high-fidelity representations of human subjects in a virtual scene both visually and aurally. Acuity isolates subjects from high-resolution input point clouds. It reduces the processing overhead by performing background subtraction at a coarse resolution, then applying the detected bounding boxes to fine-grained point clouds. Meanwhile, Acuity leverages an audiovisual sensor fusion approach to expedite sound source separation. The estimated object location in the visual domain guides the acoustic pipeline to isolate the subjects’ voices without running sound source localization. 
Our results demonstrate that Acuity can isolate multiple subjects’ high-quality point clouds with a maximum latency of 70 ms and average throughput of over 25 fps, while separating audio in less than 30 ms. We provide the source code of Acuity at: https://github.com/nesl/Acuity. 
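Acuity's coarse-resolution background subtraction followed by fine-grained cropping can be sketched with voxel sets. The voxel size and function names below are illustrative assumptions, not taken from the released code:

```python
import numpy as np

def coarse_foreground_voxels(points, background, voxel=0.2):
    """Voxelize both clouds at a coarse resolution and return the voxels
    occupied in the live scene but absent from the background scan."""
    occ = set(map(tuple, np.floor(points / voxel).astype(int)))
    bg = set(map(tuple, np.floor(background / voxel).astype(int)))
    return occ - bg

def crop_fine_points(points, fg_voxels, voxel=0.2):
    """Keep only full-resolution points that fall inside detected
    foreground voxels, avoiding per-point work on the background."""
    keys = map(tuple, np.floor(points / voxel).astype(int))
    mask = np.fromiter((k in fg_voxels for k in keys),
                       dtype=bool, count=len(points))
    return points[mask]
```

The saving comes from doing set membership on a handful of coarse voxels rather than nearest-neighbor tests on every high-resolution point.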
  5. This poster presents the use of Augmented Reality (AR) and Virtual Reality (VR) to tackle four of the “14 Grand Challenges for Engineering in the 21st Century” identified by the National Academy of Engineering. AR and VR are technologies of the present and the future. AR creates a composite view by adding digital content to a real-world view, often by using the camera of a smartphone, while VR creates an immersive view in which the user is cut off from the real world. The 14 challenges identify areas of science and technology that are achievable and sustainable to assist people and the planet to prosper. The four challenges tackled using AR/VR applications in this poster are: Enhance virtual reality, Advance personalized learning, Provide access to clean water, and Make solar energy affordable. The solar system VR application is aimed at tackling two of the engineering challenges: (1) Enhance virtual reality and (2) Advance personalized learning. The VR application assists the user in visualizing and understanding our solar system by using a VR headset. It includes an immersive 360-degree view of our solar system in which the user can use controllers to interact with information related to celestial bodies and teleport to different points in space for a closer look at the planets and the Sun. The user has six degrees of freedom. The AR application for water tackles the engineering challenge “Provide access to clean water”. The AR water application shows information on drinking water accessibility and the eco-friendly usage of bottles over plastic cups within the department buildings at Auburn University. The user of the application has an augmented view of drinking water information on a smartphone. Every time the user points the smartphone camera towards a building, the application renders a composite view with drinking water information associated with the building.
The Sun path visualization AR application tackles the engineering challenge “Make solar energy affordable”. The application helps the user visualize the sun path at a selected time and location. The sun path is augmented in the camera view of the device when the user points the camera towards the sky. The application provides information on sun altitude and azimuth, as well as sunrise and sunset data for a selected day. The information provided by the application can aid the user in effective solar panel placement. Using AR and VR technology to tackle these challenges enhances the user experience. The information from these applications is better curated and easily visualized, and thus readily understandable by the end user. Therefore, the usage of AR and VR technology to tackle these types of engineering challenges looks promising.
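The sun altitude such an application reports follows the standard spherical-astronomy relation sin(alt) = sin(lat)·sin(decl) + cos(lat)·cos(decl)·cos(h). A minimal version (ignoring atmospheric refraction, and taking solar declination and hour angle directly as inputs rather than deriving them from the date and time) looks like:

```python
import math

def solar_altitude(lat_deg, decl_deg, hour_angle_deg):
    """Solar elevation angle in degrees.
    lat_deg:        observer latitude
    decl_deg:       solar declination for the day (+/- 23.44 max)
    hour_angle_deg: 0 at solar noon, negative before, positive after."""
    lat, decl, h = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(h))
    return math.degrees(math.asin(sin_alt))
```

At solar noon on an equinox (declination 0, hour angle 0) the altitude reduces to 90° minus the latitude, a quick sanity check for any sun-path overlay.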