While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real time with 3.3% mean error and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings pushing research towards more realistic physicality in future VR/AR.
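As a rough sketch of what such an EMG-to-force decoder could look like, the snippet below maps windows of multichannel forearm EMG to per-finger force estimates. This is a minimal illustration only, assuming PyTorch; the channel count, window length, and layer sizes are invented for the example and are not the paper's actual architecture.

```python
# Minimal sketch of an EMG-to-force regressor. The 8 channels, 200-sample
# windows, and layer sizes are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class EMGForceDecoder(nn.Module):
    def __init__(self, n_channels=8, n_fingers=5):
        super().__init__()
        # 1-D convolutions extract short-time muscle-activation features.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time: one vector per window
        )
        # A small head regresses one normalized force value per finger.
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_fingers)
        )

    def forward(self, emg):  # emg: (batch, channels, samples)
        return self.head(self.features(emg))

model = EMGForceDecoder()
window = torch.randn(1, 8, 200)  # one 200-sample EMG window
forces = model(window)           # (1, 5) per-finger force estimates
```

In a real-time pipeline, sliding windows of filtered EMG would be streamed through such a model; calibration for a new user could then amount to briefly fine-tuning the head layers rather than retraining from scratch.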
A Systematic Review of the Convergence of Augmented Reality, Intelligent Virtual Agents, and the Internet of Things
In a seminal article on augmented reality (AR) [7], Ron Azuma defines AR as a variation of virtual reality (VR), which completely immerses a user inside a synthetic environment. Azuma says “In contrast, AR allows the user to see the real world, with virtual objects superimposed upon or composited with the real world” [7] (emphasis added). Typically, a user wears a tracked stereoscopic head-mounted display (HMD) or holds a smartphone, showing the real world through optical or video means, with superimposed graphics that provide the appearance of virtual content that is related to and registered with the real world. While AR has been around since the 1960s [72], it is experiencing a renaissance of development and consumer interest. With exciting products from Microsoft (HoloLens), Metavision (Meta 2), and others; Apple’s AR Developer’s Kit (ARKit); and well-funded startups like Magic Leap [54], the future looks even brighter: AR technologies can be expected to be absorbed into our daily lives and to strongly influence our society in the foreseeable future.
- Award ID(s): 1800961
- PAR ID: 10105846
- Date Published:
- Journal Name: Transactions on Computational Science and Computational Intelligence
- ISSN: 2569-7080
- Page Range / eLocation ID: 37
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
In Augmented Reality (AR), virtual content enhances user experience by providing additional information. However, improperly positioned or designed virtual content can be detrimental to task performance, as it can impair users' ability to accurately interpret real-world information. In this paper we examine two types of task-detrimental virtual content: obstruction attacks, in which virtual content prevents users from seeing real-world objects, and information manipulation attacks, in which virtual content interferes with users' ability to accurately interpret real-world information. We provide a mathematical framework to characterize these attacks and create a custom open-source dataset for attack evaluation. To address these attacks, we introduce ViDDAR (Vision language model-based Task-Detrimental content Detector for Augmented Reality), a comprehensive full-reference system that leverages Vision Language Models (VLMs) and advanced deep learning techniques to monitor and evaluate virtual content in AR environments, employing a user-edge-cloud architecture to balance performance with low latency. To the best of our knowledge, ViDDAR is the first system to employ VLMs for detecting task-detrimental content in AR settings. Our evaluation results demonstrate that ViDDAR effectively understands complex scenes and detects task-detrimental content, achieving up to 92.15% obstruction detection accuracy with a detection latency of 533 ms, and an 82.46% information manipulation content detection accuracy with a latency of 9.62 s.
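The full-reference idea, comparing the unaugmented camera frame against the AR-composited frame, can be sketched as follows. This is not ViDDAR's actual implementation: `query_vlm` is a hypothetical stand-in for whatever VLM endpoint is available, and the prompts and decision logic are illustrative assumptions.

```python
# Hedged sketch of a full-reference obstruction check in the spirit of ViDDAR.
# `query_vlm` is a hypothetical helper standing in for any vision-language
# model endpoint; the prompts are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Frame:
    raw: bytes         # camera frame without virtual content (the reference)
    composited: bytes  # the same frame with AR content rendered on top

def query_vlm(image: bytes, prompt: str) -> str:
    """Placeholder: send an image plus a prompt to a VLM, return its answer."""
    raise NotImplementedError("wire up a real VLM endpoint here")

def detect_obstruction(frame: Frame) -> bool:
    # Full-reference idea: list salient real objects in the raw frame, then
    # ask whether any of them are hidden once virtual content is overlaid.
    objects = query_vlm(frame.raw, "List the salient real-world objects.")
    answer = query_vlm(
        frame.composited,
        f"Are any of these objects hidden by overlaid graphics? {objects} "
        "Answer yes or no.",
    )
    return answer.strip().lower().startswith("yes")
```

In a user-edge-cloud split of the kind the abstract describes, a check like this would typically run on an edge or cloud node, with the headset streaming frame pairs and receiving alerts back.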
The popular concepts of Virtual Reality (VR) and Augmented Reality (AR) arose from our ability to interact with objects and environments that appear to be real, but are not. One of the most powerful aspects of these paradigms is the ability of virtual entities to embody a richness of behavior and appearance that we perceive as compatible with reality, and yet unconstrained by reality. The freedom to be or do almost anything helps to reinforce the notion that such virtual entities are inherently distinct from the real world—as if they were magical. This independent magical status is reinforced by the typical need for the use of “magic glasses” (head-worn displays) and “magic wands” (spatial interaction devices) that are ceremoniously bestowed on a chosen few. For those individuals, the experience is inherently egocentric in nature—the sights and sounds effectively emanate from the magic glasses, not the real world, and unlike the magic we are accustomed to from cinema, the virtual entities are unable to affect the real world. This separation of real and virtual is also inherent in our related conceptual frameworks, such as Milgram’s Virtuality Continuum, where the real and virtual are explicitly distinguished and mixed. While these frameworks are indeed conceptual, we often feel the need to position our systems and research somewhere in the continuum, further reinforcing the notion that real and virtual are distinct. The very structures of our professional societies, our research communities, our journals, and our conferences tend to solidify the evolutionary separation of the virtual from the real. However, independent forces are emerging that could reshape our notions of what is real and virtual, and transform our sense of what it means to interact with technology. First, even within the VR/AR communities, as the appearance and behavioral realism of virtual entities improves, virtual experiences will become more real. Second, as domains such as artificial intelligence, robotics, and the Internet of Things (IoT) mature and permeate throughout our lives, experiences with real things will become more virtual. The convergence of these various domains has the potential to transform the egocentric magical nature of VR/AR into more pervasive allocentric magical experiences and interfaces that interact with and can affect the real world. This transformation will blur traditional technological boundaries such that experiences will no longer be distinguished as real or virtual, and our sense for what is natural will evolve to include what we once remember as cinematic magic.
Augmented Reality (AR) or Mixed Reality (MR) enables innovative interactions by overlaying virtual imagery over the physical world. For roboticists, this creates new opportunities to apply proven non-verbal interaction patterns, like gesture, to physically-limited robots. However, a wealth of HRI research has demonstrated that there are real benefits to physical embodiment (compared, e.g., to virtual robots displayed on screens). This suggests that AR augmentation of virtual robot parts could lead to similar challenges. In this work, we present the design of an experiment to objectively and subjectively compare the use of AR and physical arms for deictic gesture, in AR and physical task environments. Our future results will inform robot designers choosing between the use of physical and virtual arms, and provide new nuanced understanding of the use of mixed-reality technologies in HRI contexts.
This poster presents the use of Augmented Reality (AR) and Virtual Reality (VR) to tackle 4 of the “14 Grand Challenges for Engineering in the 21st Century” identified by the National Academy of Engineering. AR and VR are the technologies of the present and the future. AR creates a composite view by adding digital content to a real-world view, often by using the camera of a smartphone, while VR creates an immersive view that cuts the user off from the real world. The 14 challenges identify achievable and sustainable areas of science and technology that help people and the planet prosper. The 4 challenges tackled using AR/VR applications in this poster are: Enhance virtual reality, Advance personalized learning, Provide access to clean water, and Make solar energy affordable. The solar system VR application is aimed at tackling two of the engineering challenges: (1) Enhance virtual reality and (2) Advance personalized learning. The VR application assists the user in visualizing and understanding our solar system by using a VR headset. It includes an immersive 360-degree view of our solar system where the user can use controllers to interact with information related to celestial bodies and to teleport to different points in space to have a closer look at the planets and the Sun. The user has six degrees of freedom. The AR application for water tackles the engineering challenge: “Provide access to clean water”. The AR water application shows information on drinking water accessibility and the eco-friendly usage of bottles over plastic cups within the department buildings at Auburn University. The user of the application has an augmented view of drinking water information on a smartphone. Every time the user points the smartphone camera towards a building, the application renders a composite view with drinking water information associated with the building. The Sun path visualization AR application tackles the engineering challenge: “Make solar energy affordable”. The application helps the user visualize the sun path at a selected time and location. The sun path is augmented in the camera view of the device when the user points the camera towards the sky. The application provides information on sun altitude and azimuth. Also, it provides the user with sunrise and sunset data for a selected day. The information provided by the application can aid the user with effective solar panel placement. Using AR and VR technology to tackle these challenges enhances the user experience. The information from these applications is better curated and easily visualized, thus readily understandable by the end user. Therefore, usage of AR and VR technology to tackle these types of engineering challenges looks promising.
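As a rough illustration of the sun-position calculation an app like the one above relies on, here is a minimal Python sketch. It uses the standard low-precision formulas (Cooper's declination and the spherical-astronomy altitude/azimuth relations), assumes local solar time, and ignores the equation of time and atmospheric refraction; the function name and example coordinates are illustrative, not from the poster.

```python
# Minimal low-precision sun-position sketch: altitude and azimuth (degrees)
# from day of year, local solar hour, and latitude. Simplifying assumptions:
# local solar time, no equation-of-time or refraction corrections.
import math

def sun_position(day_of_year: int, solar_hour: float, latitude_deg: float):
    """Return (altitude_deg, azimuth_deg), azimuth clockwise from north."""
    lat = math.radians(latitude_deg)
    # Solar declination (Cooper's formula), in radians.
    decl = math.radians(
        23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    )
    # Hour angle: 15 degrees per hour from solar noon.
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    # Altitude from the standard spherical-astronomy relation.
    altitude = math.asin(
        math.sin(lat) * math.sin(decl)
        + math.cos(lat) * math.cos(decl) * math.cos(hour_angle)
    )
    # Azimuth, clamped to guard against floating-point drift outside [-1, 1].
    cos_az = (
        (math.sin(decl) - math.sin(altitude) * math.sin(lat))
        / (math.cos(altitude) * math.cos(lat))
    )
    azimuth = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:  # afternoon: sun is west of due south
        azimuth = 360.0 - azimuth
    return math.degrees(altitude), azimuth

# Example: midsummer noon at roughly Auburn, AL's latitude (~32.6 N)
# yields an altitude near 81 degrees with the sun due south.
print(sun_position(172, 12.0, 32.6))
```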