3D-Model-Based Augmented Reality for Enhancing Physical Architectural Models
In the presentation of architectural projects, physical models remain a powerful and widely used representation for building design and construction. Augmented Reality (AR), meanwhile, offers a wide range of possibilities for visualizing and interacting with 3D physical models, enhancing the modeling process. To combine the benefits of both, we present a novel medium for architectural representation: a marker-less, AR-powered physical architectural model with dynamic digital features. With AR enhancement, the physical capabilities of a model can be extended without sacrificing its tangibility. We developed a framework to investigate the potential uses of a 3D-model-based AR registration method and its augmentation of physical architectural models. To explore and demonstrate the integration of physical and virtual models in AR, the framework supports interaction with both: a user can manipulate the physical model parts or control the visibility and dynamics of the virtual parts in AR. The framework consists of a LEGO model and an AR application for a hand-held device, developed for this work. The application uses a marker-less, 3D-model-based AR registration method: a LEGO model serves as the physical 3D model in the registration process, and machine learning training with Vuforia enables the application to recognize the LEGO model from any point of view and register the virtual models in AR. The application also provides a user interface that allows interaction with the virtual parts augmented on the physical ones. The working application was tested for its registration and its physical and virtual interactions. Overall, the combination of AR with physical models through 3D-model-based AR registration allows for many advantages, which are discussed in the paper.
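The abstract describes a UI that maps user controls to the visibility and dynamics of virtual parts registered onto the physical model. A minimal, platform-agnostic sketch of that interaction state is below; the actual application is a Unity/Vuforia app, and every class, method, and part name here is a hypothetical illustration, not taken from the paper.

```typescript
// Hypothetical sketch of the virtual-part interaction state described in the
// abstract: each virtual part registered on the physical model can be shown,
// hidden, or animated ("dynamic") from the AR user interface.
// None of these identifiers come from the actual Unity/Vuforia application.

type PartState = {
  visible: boolean;   // shown/hidden via the AR user interface
  animating: boolean; // "dynamic" virtual parts, e.g. an exploding-roof study
};

class VirtualModelController {
  private parts = new Map<string, PartState>();

  registerPart(name: string): void {
    // New parts start hidden until the user reveals them in the UI.
    this.parts.set(name, { visible: false, animating: false });
  }

  toggleVisibility(name: string): boolean {
    const state = this.parts.get(name);
    if (!state) throw new Error(`unknown part: ${name}`);
    state.visible = !state.visible;
    return state.visible;
  }

  setAnimating(name: string, on: boolean): void {
    const state = this.parts.get(name);
    if (!state) throw new Error(`unknown part: ${name}`);
    state.animating = on;
  }

  visibleParts(): string[] {
    return [...this.parts.entries()]
      .filter(([, s]) => s.visible)
      .map(([name]) => name);
  }
}

const controller = new VirtualModelController();
controller.registerPart("roof");
controller.registerPart("interior-walls");
controller.toggleVisibility("roof");
console.log(controller.visibleParts()); // ["roof"]
```

In the real system this state would be driven by on-screen buttons and applied to virtual geometry anchored by the 3D-model-based registration; the sketch only captures the trigger-to-state mapping.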
- Award ID(s): 2119549
- PAR ID: 10428963
- Editor(s): Pak, B
- Date Published:
- Journal Name: Proceedings of the 40th Conference on Education and Research in Computer Aided Architectural Design in Europe (eCAADe 2022)
- Volume: 2
- Page Range / eLocation ID: 495–504
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation