Optical see-through augmented reality (OST-AR) is a developing technology with exciting applications in medicine, industry, education, and entertainment. OST-AR mixes the virtual and the real using an optical combiner that blends images and graphics with the real-world environment. Such an overlay of visual information is simultaneously futuristic and familiar: like the sci-fi navigation and communication interfaces in movies, but also much like banal reflections in glass windows. OST-AR's transparent displays cause background bleed-through, which distorts color and contrast, yet virtual content usually remains easy to understand. Perceptual scission, the cognitive separation of layers, is an important mechanism, influenced by transparency, depth, parallax, and more, that helps us see what is real and what is virtual. In examples ranging from Pepper's Ghost, veiling luminance, mixed material modes, and window shopping to today's OST-AR systems, transparency and scission produce results at once surprising and ordinary. Ongoing psychophysical research is addressing the perceived characteristics of color, material, and images in OST-AR, testing and harnessing the perceptual effects of transparency and scission. The results help both to understand the visual mechanisms and to improve tomorrow's AR systems.
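The bleed-through described above can be approximated with a simple additive-light model: an optical combiner cannot subtract background light, only add virtual emission to an attenuated view of the scene. The sketch below is illustrative only; the linear-RGB assumption and the transmittance constant are mine, not taken from the abstract.

```python
# Minimal additive-light sketch of background bleed-through on an optical
# combiner. Operates in linear RGB; transmittance is an assumed constant.
def seen_color(virtual_rgb, background_rgb, transmittance=0.7):
    """Light reaching the eye: virtual emission plus transmitted background."""
    return tuple(min(1.0, v + transmittance * b)
                 for v, b in zip(virtual_rgb, background_rgb))

# A mid-grey virtual patch over a bright background washes out,
# and "black" virtual pixels still glow with transmitted light:
patch = seen_color((0.4, 0.4, 0.4), (0.8, 0.8, 0.8))
black = seen_color((0.0, 0.0, 0.0), (0.8, 0.8, 0.8))
```

This is one intuition behind the distorted color and contrast the abstract mentions: dark virtual content is especially hard to render because the display can only add light on top of the real background.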
A Global Correction Framework for Camera Registration in Video See-Through Augmented Reality Systems
Abstract Augmented reality (AR) enhances the user's perception of the real environment by superimposing computer-generated virtual images. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, the virtual environment must be precisely aligned with the physical environment so that users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. In some robotics applications using AR, we observed misalignment in the visual representation within the designated workspace, which can impair the accuracy of the robot's operations during the task. Building on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple, efficient calibration procedure to reduce misalignment in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, the sources and propagation of errors must be identified. We outline the linear transformation and projection of each point from virtual world space to virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the head-mounted display (HMD) to the camera, and experiments validate the improvement achieved through calibration.
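The transformation chain the abstract outlines — world space, through the tracked HMD pose and the calibrated HMD-to-camera offset matrix, then a perspective projection to screen coordinates — can be sketched as below. All matrix names and the example numbers are illustrative assumptions, not the paper's notation or calibration values.

```python
import numpy as np

def make_projection(fx, fy, cx, cy):
    """Pinhole intrinsic matrix K for the virtual camera."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project_point(p_world, T_world_to_hmd, T_hmd_to_cam, K):
    """Map a 3-D point in virtual world space to screen coordinates.

    T_world_to_hmd : 4x4 pose of the HMD (from tracking)
    T_hmd_to_cam   : 4x4 offset matrix HMD -> camera (from calibration)
    """
    p = np.append(p_world, 1.0)                 # homogeneous coordinates
    p_cam = T_hmd_to_cam @ T_world_to_hmd @ p   # world -> HMD -> camera
    uvw = K @ p_cam[:3]                         # perspective projection
    return uvw[:2] / uvw[2]                     # normalize to pixel coords

K = make_projection(800.0, 800.0, 640.0, 360.0)
T_world_to_hmd = np.eye(4)        # identity pose, for the demo only
T_hmd_to_cam = np.eye(4)
T_hmd_to_cam[0, 3] = 0.05         # assumed 5 cm lateral camera offset

uv = project_point(np.array([0.0, 0.0, 2.0]), T_world_to_hmd, T_hmd_to_cam, K)
```

With these assumed numbers, ignoring the 5 cm offset would shift the projected point by 20 pixels at 2 m depth — a toy illustration of how an uncalibrated offset matrix propagates into on-screen misalignment.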
- PAR ID:
- 10473784
- Publisher / Repository:
- American Society of Mechanical Engineers (ASME)
- Date Published:
- Journal Name:
- Journal of Computing and Information Science in Engineering
- Volume:
- 24
- Issue:
- 3
- ISSN:
- 1530-9827
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Mobile Augmented Reality (AR) provides immersive experiences by aligning virtual content (holograms) with a view of the real world. When a user places a hologram, it is usually expected that, like a real object, it remains in the same place. However, positional errors frequently occur due to inaccurate environment mapping and device localization, determined to a large extent by the properties of natural visual features in the scene. In this demonstration we present SceneIt, the first visual environment rating system for mobile AR based on predictions of hologram positional error magnitude. SceneIt allows users to determine whether virtual content placed in their environment will drift noticeably out of position, without requiring them to place that content. It shows that the severity of positional error for a given visual environment is predictable, and that this prediction can be computed with sufficiently high accuracy and low latency to be useful in mobile AR applications.
- Green, Phil (Ed.): Head-mounted virtual reality (VR) and augmented reality (AR) systems deliver colour imagery directly to a user's eyes, presenting position-aware, real-time computer graphics to create the illusion of interacting with a virtual world. In some respects, colour in AR and VR can be modelled and controlled much like colour in other display technologies. However, it is complicated by the optics required for near-eye display and, in the case of AR, by the merging of real-world and virtual visual stimuli. Methods have been developed to provide predictable colour in VR, and ongoing research has exposed details of the visual perception of real and virtual in AR. Yet more work is required to make colour appearance predictable and AR and VR display systems more robust.
- Recognition of human behavior plays an important role in context-aware applications. However, it is still a challenge for end users to build personalized applications that accurately recognize their own activities. We therefore present CAPturAR, an in-situ programming tool that helps users rapidly author context-aware applications by referring to their previous activities. We customize an AR head-mounted device with multiple camera systems that allow for non-intrusive capture of the user's daily activities. During authoring, we reconstruct the captured data in AR with an animated avatar and use virtual icons to represent the surrounding environment. With our visual programming interface, users create human-centered rules for the applications and experience them instantly in AR. We further demonstrate four use cases enabled by CAPturAR. We also verify the effectiveness of the AR-HMD and the authoring workflow with a system evaluation using our prototype, and conduct a remote user study in an AR simulator to evaluate usability.
- In Augmented Reality (AR), virtual content enhances the user experience by providing additional information. However, improperly positioned or designed virtual content can be detrimental to task performance, as it can impair users' ability to accurately interpret real-world information. In this paper, we examine two types of task-detrimental virtual content: obstruction attacks, in which virtual content prevents users from seeing real-world objects, and information manipulation attacks, in which virtual content interferes with users' ability to accurately interpret real-world information. We provide a mathematical framework to characterize these attacks and create a custom open-source dataset for attack evaluation. To address these attacks, we introduce ViDDAR (Vision language model-based Task-Detrimental content Detector for Augmented Reality), a comprehensive full-reference system that leverages Vision Language Models (VLMs) and advanced deep learning techniques to monitor and evaluate virtual content in AR environments, employing a user-edge-cloud architecture to balance performance with low latency. To the best of our knowledge, ViDDAR is the first system to employ VLMs for detecting task-detrimental content in AR settings. Our evaluation shows that ViDDAR effectively understands complex scenes and detects task-detrimental content, achieving up to 92.15% obstruction detection accuracy with a detection latency of 533 ms, and 82.46% information manipulation detection accuracy with a latency of 9.62 s.
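As a toy illustration of what an obstruction attack means geometrically, one can measure how much of a real object's screen-space bounding box is covered by virtual content. This coverage metric and the box representation are assumptions made here for illustration; ViDDAR itself relies on vision-language models rather than this simple geometry.

```python
def obstruction_coverage(real_box, virt_box):
    """Fraction of a real object's box (x1, y1, x2, y2) hidden by a virtual box."""
    # Intersection rectangle of the two axis-aligned boxes
    ix1, iy1 = max(real_box[0], virt_box[0]), max(real_box[1], virt_box[1])
    ix2, iy2 = min(real_box[2], virt_box[2]), min(real_box[3], virt_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = (real_box[2] - real_box[0]) * (real_box[3] - real_box[1])
    return inter / area if area > 0 else 0.0

# A hologram overlapping the top-right quarter of a real object's box:
ratio = obstruction_coverage((0.0, 0.0, 10.0, 10.0), (5.0, 5.0, 15.0, 15.0))
```

A detector could flag content whose coverage of a safety-critical object exceeds some threshold; a full-reference system like the one described would instead compare the scene with and without the virtual layer.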