Augmented Reality (AR) technologies present an exciting new medium for human-robot interactions, enabling new opportunities for both implicit and explicit human-robot communication. For example, these technologies enable physically limited robots to execute non-verbal interaction patterns such as deictic gestures despite lacking the physical morphology necessary to do so. However, a wealth of HRI research has demonstrated real benefits to physical embodiment (compared to, e.g., virtual robots on screens), suggesting that AR augmentation of virtual robot parts could face challenges. In this work, we present empirical evidence comparing the use of virtual (AR) and physical arms to perform deictic gestures that identify virtual or physical referents. Our subjective and objective results demonstrate the success of mixed reality deictic gestures in overcoming these potential limitations, and their successful use regardless of differences in physicality between gesture and referent. These results help to motivate the further deployment of mixed reality robotic systems and provide nuanced insight into the role of mixed reality technologies in HRI contexts.
Investigating the Effects of Avatarization and Interaction Techniques on Near-field Mixed Reality Interactions with Physical Components
Mixed reality (MR) interactions feature users interacting with a combination of virtual and physical components. Inspired by research investigating aspects associated with near-field interactions in augmented and virtual reality (AR & VR), we investigated how avatarization, the physicality of the interacting components, and the interaction technique used to manipulate a virtual object affected performance and perceptions of user experience in a mixed reality peg-transfer task adapted from the Fundamentals of Laparoscopic Surgery, wherein users had to transfer a virtual ring from one peg to another over a number of trials. We employed a 3 (Physicality of Pegs) × 3 (Augmented Avatar Representation) × 2 (Interaction Technique) multi-factorial design, manipulating the physicality of the pegs as a between-subjects factor, and the type of augmented self-avatar representation and the type of interaction technique used for object manipulation as within-subjects factors. Results indicated that users were significantly more accurate when the pegs were virtual rather than physical because of the increased salience of the task-relevant visual information. From an avatar perspective, providing users with a reach-envelope-extending representation, though useful, was found to worsen performance, while co-located avatarization significantly improved performance. The choice of interaction technique for manipulating objects depends on whether accuracy or efficiency is the priority. Finally, the relationship between the avatar representation and the interaction technique dictates how usable mixed reality interactions are deemed to be.
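The condition structure of such a mixed factorial design is straightforward to sketch: one between-subjects factor crossed with two within-subjects factors whose six cells each participant completes. The sketch below is illustrative only; the level names and the assignment/shuffling scheme are assumptions, not the paper's actual procedure.

```python
from itertools import product
from random import Random

# Factor levels mirroring the 3 x 3 x 2 design described in the abstract;
# the exact level names here are illustrative assumptions, not the paper's labels.
PEG_PHYSICALITY = ["physical", "virtual", "mixed"]   # between-subjects
AVATAR = ["none", "co-located", "reach-extending"]   # within-subjects
TECHNIQUE = ["direct", "indirect"]                   # within-subjects

def trial_plan(participant_id: int, seed: int = 42) -> list[tuple[str, str, str]]:
    """Assign one between-subjects peg condition per participant and
    return a shuffled list of the 3 x 2 within-subjects conditions."""
    pegs = PEG_PHYSICALITY[participant_id % len(PEG_PHYSICALITY)]
    within = list(product(AVATAR, TECHNIQUE))          # 6 within-subject cells
    Random(seed + participant_id).shuffle(within)      # simple per-participant ordering
    return [(pegs, avatar, technique) for avatar, technique in within]

if __name__ == "__main__":
    for condition in trial_plan(participant_id=7):
        print(condition)
```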
- Award ID(s):
- 2007435
- PAR ID:
- 10527839
- Publisher / Repository:
- IEEE
- Date Published:
- Journal Name:
- IEEE Transactions on Visualization and Computer Graphics
- Volume:
- 30
- Issue:
- 5
- ISSN:
- 1077-2626
- Page Range / eLocation ID:
- 2756 to 2766
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings to push forward research towards more realistic physicality in future VR/AR.
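As a rough illustration of the kind of decoder the abstract above describes, here is a minimal sketch mapping a window of multi-channel forearm EMG to per-finger force estimates. The channel count, window length, and convolutional architecture are assumptions for illustration; the paper's actual model, features, and training procedure are not specified here.

```python
import torch
import torch.nn as nn

# Illustrative constants; the paper's sensor setup is assumed, not given here.
N_CHANNELS = 8      # forearm EMG electrodes (assumed)
WINDOW = 200        # samples per decoding window (assumed)
N_FINGERS = 5       # finger-wise force outputs

class ForceDecoder(nn.Module):
    """Minimal EMG-to-force regressor: conv layers summarize muscle-activation
    patterns over time, then a linear head outputs one force value per finger."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, N_FINGERS)

    def forward(self, emg: torch.Tensor) -> torch.Tensor:
        # emg: (batch, channels, time) -> (batch, fingers)
        return self.head(self.features(emg).squeeze(-1))

model = ForceDecoder()
window = torch.randn(1, N_CHANNELS, WINDOW)   # one window of raw EMG
print(model(window).shape)                    # torch.Size([1, 5])
```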
-
In this paper, we explore how a familiarly shaped object can serve as a physical proxy to manipulate virtual objects in Augmented Reality (AR) environments. Using the example of a tangible, handheld sphere, we demonstrate how irregularly shaped virtual objects can be selected, transformed, and released. After a brief description of the implementation of the tangible proxy, we present a buttonless interaction technique suited to the characteristics of the sphere. In a user study (N = 30), we compare our approach with three different controller-based methods that increasingly rely on physical buttons. As a use case, we focused on an alignment task that had to be completed in mid-air as well as on a flat surface. Results show that our concept has advantages over two of the controller-based methods regarding task completion time and user ratings. Our findings inform research on integrating tangible interaction into AR experiences.
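The select-transform-release pattern described above can be sketched as a small pose-following routine: at grab time the object's pose is captured relative to the tracked sphere, and while the grab persists the object rigidly follows the proxy. The grab trigger and the pose representation below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class ProxyManipulator:
    """Virtual object follows a tracked physical proxy while 'grabbed'.
    Poses are 4x4 homogeneous transforms in a shared world frame."""
    def __init__(self):
        self.offset = None  # sphere-to-object transform captured on grab

    def grab(self, sphere_pose: np.ndarray, object_pose: np.ndarray) -> None:
        # Record the object's pose expressed in the sphere's frame.
        self.offset = np.linalg.inv(sphere_pose) @ object_pose

    def update(self, sphere_pose: np.ndarray, object_pose: np.ndarray) -> np.ndarray:
        # While held, the object rigidly follows the sphere; otherwise it stays put.
        return sphere_pose @ self.offset if self.offset is not None else object_pose

    def release(self) -> None:
        self.offset = None

# Example: after grabbing with identity poses, the object tracks the sphere exactly.
I = np.eye(4)
m = ProxyManipulator()
m.grab(I, I)
moved = I.copy(); moved[:3, 3] = [0.1, 0.0, 0.2]   # sphere translated
print(m.update(moved, I)[:3, 3])                   # -> [0.1 0.  0.2]
```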
-
Interacting in stereoscopic head-mounted displays can be difficult. There are not yet clear standards for how interactions in these environments should be performed. In virtual reality there are a number of well-designed interaction techniques; however, augmented reality interaction techniques still need to be improved before they can be easily used. This dissertation covers work done towards understanding how users navigate and interact with virtual environments that are displayed in stereoscopic head-mounted displays. With this understanding, existing techniques from virtual reality devices ...
-
Immersive Analytics (IA) and consumer adoption of augmented reality (AR) and virtual reality (VR) head-mounted displays (HMDs) are both rapidly growing. When used in conjunction, stereoscopic IA environments can offer improved user understanding and engagement; however, it is unclear how the choice of stereoscopic display impacts user interactions within an IA environment. This paper presents a pilot study that examines the impact of stereoscopic display choice on object manipulation and environmental navigation using consumer-available AR and VR HMDs. Our observations indicate that the display can impact how users manipulate virtual content and how they navigate the environment.