
This content will become publicly available on October 1, 2022

Title: Measuring the Perceived Three-Dimensional Location of Virtual Objects in Optical See-Through Augmented Reality
For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of virtual objects is presented, where participants verbally report a virtual object’s location relative to both a vertical and a horizontal grid. The method is tested with a small (1.95 × 1.95 × 1.95 cm) virtual object at distances of 50 to 80 cm, viewed through a Microsoft HoloLens 1st-generation AR display. Two experiments examine two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors, including a rightward bias and underestimated depth, might be due to systematic errors restricted to a particular display. Turning in a circle did not disrupt HoloLens tracking, and testing with a second display did not suggest systematic errors restricted to a particular display. Instead, the experiments are consistent with the hypothesis that, when looking downwards at a horizontal plane, HoloLens 1st-generation displays exhibit a systematic rightward perceptual bias. Precision analysis suggests that the method could measure the perceived location of a virtual object with a precision of less than 1 mm.
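The precision analysis mentioned above can be illustrated with a minimal sketch: given repeated (x, y, z) readings of one target reported against the grids, compute the mean perceived location and the standard error of the mean on each axis. This is only a hedged stand-in for the paper's actual analysis; the function name and the sample readings below are hypothetical.

```python
import statistics

def perceived_location_precision(reports):
    """Given repeated (x, y, z) grid readings in cm for one target,
    return the mean perceived location and the per-axis standard error
    of the mean (a simple stand-in for a precision analysis)."""
    xs, ys, zs = zip(*reports)
    mean = tuple(statistics.fmean(axis) for axis in (xs, ys, zs))
    sem = tuple(statistics.stdev(axis) / len(axis) ** 0.5
                for axis in (xs, ys, zs))
    return mean, sem

# Hypothetical readings for a target nominally at (0, 0, 50) cm,
# showing a small rightward (+x) bias and underestimated depth (z < 50).
reports = [(0.4, 0.1, 48.9), (0.5, -0.1, 49.2),
           (0.3, 0.0, 49.0), (0.6, 0.1, 49.1)]
mean, sem = perceived_location_precision(reports)
```

With enough repeated reports, the standard error shrinks with the square root of the sample size, which is how a verbal-report method can resolve perceived locations at the millimeter scale.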
Authors:
Award ID(s):
1937565
Publication Date:
NSF-PAR ID:
10316623
Journal Name:
IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
Sponsoring Org:
National Science Foundation
More Like this
  1. For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of a small virtual object is presented, where participants verbally report the virtual object's location relative to both a horizontal and a vertical grid. The method is tested with a Microsoft HoloLens AR display, and examines two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors found with a HoloLens display might be due to systematic errors that are restricted to that particular display. Turning in a circle did not disrupt HoloLens tracking, and a second HoloLens did not suggest systematic errors restricted to a specific display. The proposed method could measure the perceived location of a virtual object to a precision of ~1 mm.
  2. Mobile augmented reality (AR) has the potential to enable immersive, natural interactions between humans and cyber-physical systems. In particular markerless AR, by not relying on fiducial markers or predefined images, provides great convenience and flexibility for users. However, unwanted virtual object movement frequently occurs in markerless smartphone AR due to inaccurate scene understanding, and resulting errors in device pose tracking. We examine the factors which may affect virtual object stability, design experiments to measure it, and conduct systematic quantitative characterizations across six different user actions and five different smartphone configurations. Our study demonstrates noticeable instances of spatial instability in virtual objects in all but the simplest settings (with position errors of greater than 10 cm even on the best-performing smartphones), and underscores the need for further enhancements to pose tracking algorithms for smartphone-based markerless AR.
  3. Agaian, Sos S. ; DelMarco, Stephen P. ; Asari, Vijayan K. (Ed.)
    High-accuracy localization and user position tracking is critical to improving the quality of augmented reality environments. The biggest challenge facing developers is localizing the user based on visible surroundings. Current solutions rely on the Global Positioning System (GPS) for tracking and orientation. However, GPS receivers have an accuracy of about 10 to 30 meters, which is not accurate enough for augmented reality, which needs precision measured in millimeters or smaller. This paper describes the development and demonstration of a head-worn augmented reality (AR) based vision-aid indoor navigation system, which localizes the user without relying on a GPS signal. Commercially available augmented reality headsets allow individuals to capture the field of vision using the front-facing camera in real time. Utilizing captured image features as navigation-related landmarks allows localizing the user in the absence of a GPS signal. The proposed method involves three steps: detailed front-scene camera data is collected and generated for landmark recognition; the individual's current position is detected and located using feature matching; and arrows are displayed to indicate areas that require more data collection if needed. Computer simulations indicate that the proposed augmented reality-based vision-aid indoor navigation system can provide precise simultaneous localization and mapping in a GPS-denied environment. Keywords: Augmented-reality, navigation, GPS, HoloLens, vision, positioning system, localization
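    The feature-matching step of such a landmark-based localizer can be sketched in miniature: match each query descriptor against a database of descriptors grouped by mapped location, and vote for the location with the most nearest-neighbor matches. A real system would use ORB or SIFT descriptors via a library such as OpenCV; the toy int-encoded binary descriptors, Hamming-distance matcher, and location names below are purely hypothetical.

    ```python
    def hamming(a, b):
        """Hamming distance between two int-encoded binary descriptors."""
        return bin(a ^ b).count("1")

    def localize(query_descs, landmark_db):
        """Vote for the mapped location whose stored descriptors are the
        nearest neighbors of the most query descriptors (a toy stand-in
        for ORB/BFMatcher-style feature matching)."""
        votes = {name: 0 for name in landmark_db}
        for q in query_descs:
            best_name, best_dist = None, float("inf")
            for name, descs in landmark_db.items():
                for d in descs:
                    dist = hamming(q, d)
                    if dist < best_dist:
                        best_name, best_dist = name, dist
            votes[best_name] += 1
        return max(votes, key=votes.get)

    # Hypothetical 8-bit descriptors for two mapped locations.
    db = {"hallway": [0b10110010, 0b10110011],
          "atrium": [0b01001101, 0b01001100]}
    location = localize([0b10110110, 0b10100010], db)
    ```

    Both query descriptors here differ from a "hallway" descriptor by only one bit, so the vote selects "hallway". A production matcher would also apply a ratio test or distance threshold to reject ambiguous matches.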
  4. Augmented reality (AR) embeds virtual content into a real environment. There are many factors that can affect the perceived physicality and co-presence of virtual entities, including the hardware capabilities, the fidelity of the virtual behaviors, and sensory feedback associated with the interactions. In this paper, we present a study investigating participants’ perceptions and behaviors during a time-limited search task in close proximity with virtual entities in AR. In particular, we analyze the effects of (i) visual conflicts in the periphery of an optical see-through head-mounted display, a Microsoft HoloLens, (ii) overall lighting in the physical environment, and (iii) multimodal feedback based on vibrotactile transducers mounted on a physical platform. Our results show significant benefits of vibrotactile feedback and reduced peripheral lighting for spatial and social presence, and engagement. We discuss implications of these effects for AR applications.
  5. Objective Evaluate and model the advantage of a situation awareness (SA) supported by an augmented reality (AR) display for the ground-based joint terminal attack controller (JTAC), in judging and describing the spatial relations between objects in a hostile zone. Background The accurate world-referenced description of relative locations of surface objects, when viewed from an oblique slant angle (aircraft, observation post) is hindered by (1) the compression of the visual scene, amplified at a lower slant angle, and (2) the need for mental rotation, when viewed from a non-northerly orientation. Approach Participants viewed a virtual reality (VR)-simulated four-object scene from either of two slant angles, at each of four compass orientations, either unaided, or aided by an AR head-mounted display (AR-HMD), depicting the scene from a top-down (avoiding compression) and north-up (avoiding mental rotation) perspective. They described the geographical layout of four objects within the display. Results Compared with the control condition, the condition supported by the north-up SA display shortened the description time, particularly on non-northerly orientations (9 s, 30% benefit), and improved the accuracy of description, particularly for the more compressed scene (lower slant angle), as fit by a simple computational model. Conclusion The SA display provides large, significant benefits to this critical phase of ground-air communications in managing an attack—as predicted by the task analysis of the JTAC. Application Results impact the design of the AR-HMD to support combat ground-air communications and illustrate the magnitude by which basic cognitive principles “scale up” to realistically simulated real-world tasks such as search and rescue.