Mixed reality (MR) interactions feature users interacting with a combination of virtual and physical components. Inspired by research on near-field interactions in augmented and virtual reality (AR and VR), we investigated how avatarization, the physicality of the interacting components, and the interaction technique used to manipulate a virtual object affected performance and perceptions of user experience in a mixed reality Fundamentals of Laparoscopic Surgery peg-transfer task, wherein users transferred a virtual ring from one peg to another over a number of trials. We employed a 3 (physicality of pegs) × 3 (augmented avatar representation) × 2 (interaction technique) multi-factorial design, manipulating the physicality of the pegs as a between-subjects factor, and the type of augmented self-avatar representation and the type of interaction technique used for object manipulation as within-subjects factors. Results indicated that users were significantly more accurate when the pegs were virtual rather than physical, owing to the increased salience of the task-relevant visual information. From an avatar perspective, providing users with a reach-envelope-extending representation, though rated useful, worsened performance, while co-located avatarization significantly improved it. The choice of interaction technique for manipulating objects depends on whether accuracy or efficiency is the priority. Finally, the relationship between the avatar representation and the interaction technique dictates how usable mixed reality interactions are deemed to be.
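As an illustration of the study layout (the factor-level names below are hypothetical, not taken from the paper), the 3 × 3 × 2 design yields 18 condition cells, of which each participant experiences the 3 × 2 = 6 within-subjects combinations under one between-subjects peg-physicality level:

```python
from itertools import product

# Hypothetical level names for each factor; the abstract specifies only
# the factor counts and which factors are between- vs. within-subjects.
peg_physicality = ["physical", "virtual", "mixed"]   # between-subjects (3 levels)
avatar = ["none", "co-located", "reach-extending"]   # within-subjects (3 levels)
technique = ["direct", "indirect"]                   # within-subjects (2 levels)

# Full crossing of all three factors: 3 * 3 * 2 = 18 design cells.
conditions = list(product(peg_physicality, avatar, technique))
print(len(conditions))  # 18

# Each participant sees one peg-physicality level crossed with all
# within-subjects combinations: 3 * 2 = 6 conditions per participant.
per_participant = list(product(avatar, technique))
print(len(per_participant))  # 6
```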
The Mixed Reality Passthrough Window: Rethinking the Laptop Videoconferencing Experience
The growth in remote and hybrid work has resulted in an increased demand for collaborative, videoconferencing experiences that offer a more seamless and immersive transition between virtual and physical environments. The Mixed Reality Passthrough Window (MRPW) addresses this demand by introducing a new paradigm for the integration of augmented/mixed reality into laptop design. The design is characterized by two screens, situated back to back, with two mounted cameras, facing in opposite directions. This creates the effect of looking through a window, upon which virtual content can be augmented and overlaid. This configuration allows local users sitting around the laptop to more easily interact with remote users, who appear on both sides of the Mixed Reality Passthrough Window, giving the sense that all users are sharing the same space in the round. Additionally, these features create affordances for the outward facing screen to serve as a site for presentations (e.g. slide decks) and other sharable content.
- Award ID(s):
- 2124312
- PAR ID:
- 10481915
- Publisher / Repository:
- AHFE Open Access Proceedings Human
- Date Published:
- Journal Name:
- Accelerating Open Access Science in Human Factors Engineering and Human-Centered Computing
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Augmented reality (AR) is a technology that integrates 3D virtual objects into the physical world in real time, while virtual reality (VR) immerses users in an interactive 3D virtual environment. The rapid development of these technologies has reshaped how people interact with the physical world. This presentation outlines the results from two unique AR projects and one Web-based VR coastal engineering project, motivating the next stage in the development of the augmented reality package for coastal students, engineers, and planners.
In Augmented Reality (AR), virtual content enhances user experience by providing additional information. However, improperly positioned or designed virtual content can be detrimental to task performance, as it can impair users' ability to accurately interpret real-world information. In this paper, we examine two types of task-detrimental virtual content: obstruction attacks, in which virtual content prevents users from seeing real-world objects, and information manipulation attacks, in which virtual content interferes with users' ability to accurately interpret real-world information. We provide a mathematical framework to characterize these attacks and create a custom open-source dataset for attack evaluation. To address these attacks, we introduce ViDDAR (Vision language model-based Task-Detrimental content Detector for Augmented Reality), a comprehensive full-reference system that leverages Vision Language Models (VLMs) and advanced deep learning techniques to monitor and evaluate virtual content in AR environments, employing a user-edge-cloud architecture to balance performance with low latency. To the best of our knowledge, ViDDAR is the first system to employ VLMs for detecting task-detrimental content in AR settings. Our evaluation results demonstrate that ViDDAR effectively understands complex scenes and detects task-detrimental content, achieving up to 92.15% obstruction detection accuracy with a detection latency of 533 ms, and an 82.46% information manipulation content detection accuracy with a latency of 9.62 s.
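The abstract does not give ViDDAR's internals, but the obstruction-attack notion lends itself to a full-reference formulation: compare the AR-composited view against the unmodified real-world view and measure how much of a real object the virtual layer covers. A minimal sketch of that idea (the function, masks, and threshold are assumptions for illustration, not the paper's method):

```python
import numpy as np

def obstruction_ratio(object_mask: np.ndarray, overlay_mask: np.ndarray) -> float:
    """Fraction of a real-world object's pixels covered by virtual content.

    object_mask:  boolean mask of a detected real-world object.
    overlay_mask: boolean mask of the rendered virtual (AR) layer.
    """
    object_pixels = object_mask.sum()
    if object_pixels == 0:
        return 0.0
    return float((object_mask & overlay_mask).sum() / object_pixels)

# Toy example: a 4x4 real object half-covered by a virtual overlay.
obj = np.zeros((8, 8), dtype=bool); obj[2:6, 2:6] = True
ovl = np.zeros((8, 8), dtype=bool); ovl[2:6, 2:4] = True
ratio = obstruction_ratio(obj, ovl)
print(ratio)  # 0.5 -> an obstruction attack would be flagged above a chosen threshold
```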
Lighting understanding plays an important role in virtual object composition, including mobile augmented reality (AR) applications. Prior work often targets recovering lighting from the physical environment to support photorealistic AR rendering. Because the common workflow is to use a back-facing camera to capture the physical world for overlaying virtual objects, we refer to this usage pattern as back-facing AR. However, existing methods often fall short in supporting emerging front-facing mobile AR applications, e.g., virtual try-on, where a user leverages a front-facing camera to explore the effect of various products (e.g., glasses or hats) of different styles. This lack of support can be attributed to the unique challenges of obtaining 360° HDR environment maps, an ideal format of lighting representation, from the front-facing camera and existing techniques. In this paper, we propose to leverage dual-camera streaming to generate a high-quality environment map by combining multi-view lighting reconstruction and parametric directional lighting estimation. Our preliminary results show improved rendering quality using a dual-camera setup for front-facing AR compared to a commercial solution.
Goal: address the disconnect between science, design, and technology at the high school level. Objectives: 1. integrate art/design into STEM education (STEAM), 2. foster plant science knowledge, 3. apply augmented and virtual reality (AVR) technologies, and 4. inspire interest in and provide skills for future STEAM careers. Collaborative teams of self-identified science, technophile, and art students receive training in 3D modeling. With support from scientists, the students create models of research plants, practice science communication skills during public/scientific events, and make connections to real-life situations using AVR devices. We use a mixed-methods assessment approach. Results from the first year of this project indicate that students are more aware of the role of art/design in science and vice versa. Students acknowledge the benefits of productive failure when facing challenges creating 3D models and are more interested in STEAM career paths.
