Abstract— Navigation, the ability to move from one place to another, is a critical skill for any individual or group, and navigating safely across unknown environments is a key factor in the success of a mission. While a substantial body of land-navigation applications exists, they rely primarily on GPS-enabled technologies, and there is limited research on Augmented Reality (AR) as a tool for navigation in unknown environments. This research proposes an AR system that provides three-dimensional (3D) navigational insights in unfamiliar environments. This is accomplished by generating 3D terrestrial maps from Synthetic Aperture Radar (SAR) data, Google Earth imagery, and sparse knowledge of the region's GPS coordinates. The 3D terrestrial maps are then converted into navigation meshes so that path-finding algorithms can operate on them. The proposed method can be used to create an iteratively refined 3D landscape knowledge base that assists personnel in navigating novel environments or supports mission planning for other operations; it can also help identify and assess the best strategic vantage points in the landscape.
Keywords— navigation, three-dimensional, image processing, mesh, augmented reality, mixed reality, SAR, GPS
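The conversion to navigation meshes exists precisely so that standard search algorithms can run over the terrain. Below is a minimal Python sketch of that last step under simplifying assumptions: the "mesh" is a grid of heightmap cells, traversability is a slope threshold, and the planner is plain A*. None of these details come from the paper itself; they only illustrate the role a navigation mesh plays for path-finding.

```python
import heapq
import math
from itertools import count

def astar_over_heightmap(height, start, goal, max_step=1.0):
    """height: 2D list/array of elevations; start, goal: (row, col) cells.
    max_step: assumed maximum traversable elevation change between neighbors."""
    rows, cols = len(height), len(height[0])
    tie = count()  # Tie-breaker so the heap never has to compare cells/parents.

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # A cell counts as walkable only if the slope is gentle enough.
                if abs(height[nr][nc] - height[r][c]) <= max_step:
                    yield (nr, nc)

    def h(a, b):  # Admissible straight-line heuristic in grid units.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    frontier = [(h(start, goal), next(tie), 0.0, start, None)]
    parents, best_g = {}, {start: 0.0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in parents:
            continue  # Already expanded via a cheaper route.
        parents[cur] = parent
        if cur == goal:
            path = []  # Walk parent links back to the start.
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for nxt in neighbors(cur):
            # Move cost combines horizontal distance and elevation change.
            ng = g + 1.0 + abs(height[nxt[0]][nxt[1]] - height[cur[0]][cur[1]])
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt, goal), next(tie), ng, nxt, cur))
    return None  # No traversable route exists.
```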
Adaptive filtering of physical-virtual artifacts for synthetic animatronics
Spatial Augmented Reality (SAR), e.g., based on monoscopic imagery projected onto physical three-dimensional (3D) surfaces, can be particularly well-suited for ad hoc group or multi-user augmented reality experiences, since it does not encumber users with head-worn or carried devices. However, conveying a notion of realistic 3D shapes and movements on SAR surfaces using monoscopic imagery is a difficult challenge. While previous work focused on physical actuation of such surfaces to achieve geometrically dynamic content, we introduce a different concept, which we call “Synthetic Animatronics,” i.e., conveying geometric movement or deformation purely through manipulation of the imagery shown on a static display surface. We present a model for the distribution of the viewpoint-dependent distortion that occurs when there are discrepancies between the physical display surface and the virtual object being represented, and we describe a real-time implementation of a method for adaptively filtering the imagery based on an approximation of the expected potential error. Finally, we describe an existing physical SAR setup well-suited for synthetic animatronics and a corresponding Unity-based SAR simulator that allows flexible exploration and validation of the technique and its various parameters.
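As a rough illustration of the adaptive-filtering idea, the sketch below estimates a per-pixel expected distortion from the discrepancy between the display surface's depth and the virtual object's depth, scaled by an assumed spread of viewer positions, and then low-pass filters the imagery more aggressively where that expected error is large. The error model, the parameter values, and the blend-of-preblurred-copies scheme are simplifications for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_error_filter(image, surface_depth, virtual_depth,
                          viewpoint_spread=0.1, gain=4.0, max_sigma=8.0):
    """image: HxW grayscale array; depth maps in the same HxW layout (meters).
    viewpoint_spread (radians) and gain are illustrative tuning assumptions."""
    # Expected on-surface displacement grows with the depth discrepancy and
    # with how far viewers may stray from the calibrated viewpoint.
    expected_error = np.abs(surface_depth - virtual_depth) * viewpoint_spread
    sigma = np.clip(gain * expected_error, 0.0, max_sigma)

    # Approximate a spatially varying blur by blending pre-blurred copies:
    # pixels with larger expected error pick a more strongly blurred level.
    sigmas = (0.0, 2.0, 4.0, 8.0)
    stack = np.stack([gaussian_filter(image.astype(float), s) for s in sigmas])
    idx = (sigma / max_sigma) * (len(sigmas) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(sigmas) - 1)
    frac = idx - lo
    r, c = np.indices(image.shape)
    return (1.0 - frac) * stack[lo, r, c] + frac * stack[hi, r, c]
```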
- PAR ID: 10105837
- Date Published:
- Journal Name: International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Projected augmented reality, also called projection mapping or video mapping, is a form of augmented reality that uses projected light to directly augment 3D surfaces, as opposed to using pass-through screens or headsets. The value of projected AR is its ability to add a layer of digital content directly onto physical objects or environments in a way that can be instantaneously viewed by multiple people, unencumbered by a screen or additional setup.
- Augmented reality (AR) integrates 3D virtual objects into the physical world in real time, while virtual reality (VR) immerses users in an interactive 3D virtual environment. The fast development of AR and VR technologies has reshaped how people interact with the physical world. This presentation outlines the results from two unique AR projects and one Web-based VR project in coastal engineering, motivating the next stage in the development of the augmented reality package for coastal students, engineers, and planners.
- Eye tracking has already made its way into current commercial wearable display devices and is becoming increasingly important for virtual and augmented reality applications. However, existing model-based eye tracking solutions are not capable of very accurate gaze angle measurements and may not be sufficient to solve challenging display problems such as pupil steering or eyebox expansion. In this paper, we argue that accurate detection and localization of the pupil in 3D space is a necessary intermediate step in model-based eye tracking. Existing methods and datasets either ignore evaluating the accuracy of 3D pupil localization or evaluate it only on synthetic data. To this end, we capture the first 3D pupil-gaze measurement dataset using a high-precision setup with head stabilization and release it as the first benchmark dataset to evaluate both 3D pupil localization and gaze tracking methods. Furthermore, we utilize an advanced eye model to replace the commonly used oversimplified eye model. Leveraging the eye model, we propose a novel 3D pupil localization method with a deep learning-based corneal refraction correction. We demonstrate that our method outperforms the state-of-the-art works by reducing the 3D pupil localization error by 47.5% and the gaze estimation error by 18.7%. Our dataset and codes can be found here: link. (A geometric sketch of the corneal-refraction idea follows this list.)
- In this paper, we explore how a familiarly shaped object can serve as a physical proxy to manipulate virtual objects in Augmented Reality (AR) environments. Using the example of a tangible, handheld sphere, we demonstrate how irregularly shaped virtual objects can be selected, transformed, and released. After a brief description of the implementation of the tangible proxy, we present a buttonless interaction technique suited to the characteristics of the sphere. In a user study (N = 30), we compare our approach with three controller-based methods that increasingly rely on physical buttons. As a use case, we focused on an alignment task that had to be completed in mid-air as well as on a flat surface. Results show that our concept has advantages over two of the controller-based methods regarding task completion time and user ratings. Our findings inform research on integrating tangible interaction into AR experiences. (An illustrative sketch of a buttonless sphere interaction follows this list.)
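For the eye-tracking paper above, the following sketch shows the geometric core of a corneal refraction correction: the camera ray that images the pupil is intersected with a spherical cornea model and bent by Snell's law, and the true pupil center is then sought along the refracted ray. The spherical cornea, its radius, and the refractive index are textbook approximations, not the paper's calibrated eye model or its learned correction.

```python
import numpy as np

def refract(d, n, n1=1.0, n2=1.376):
    """Snell refraction of unit direction d at outward surface normal n
    (n1 -> n2). 1.376 is the commonly cited refractive index of the cornea."""
    cos_i = -np.dot(d, n)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # Total internal reflection; no transmitted ray.
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def ray_sphere(origin, d, center, radius):
    """Nearest forward intersection of ray origin + t*d with a sphere."""
    oc = origin - center
    b = np.dot(oc, d)
    disc = b**2 - (np.dot(oc, oc) - radius**2)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return origin + t * d if t > 0.0 else None

def corrected_pupil_ray(cam_origin, pupil_dir, cornea_center, cornea_radius=7.8e-3):
    """Trace the camera ray that images the apparent pupil, bend it at the
    corneal surface, and return (entry point, refracted direction); the true
    pupil center is then searched for along this refracted ray."""
    hit = ray_sphere(cam_origin, pupil_dir, cornea_center, cornea_radius)
    if hit is None:
        return None
    normal = (hit - cornea_center) / np.linalg.norm(hit - cornea_center)
    d_refr = refract(pupil_dir, normal)
    return (hit, d_refr) if d_refr is not None else None
```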
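And for the tangible-sphere paper, this sketch shows one plausible buttonless scheme: proximity-based selection, rigid attachment of the virtual object to the sphere during manipulation, and release by dwell (holding the sphere still). All thresholds, and the dwell gesture itself, are assumptions for illustration and may differ from the interaction technique evaluated in the study.

```python
import time
import numpy as np

class SphereProxy:
    """Drives a grabbed virtual object from a tracked handheld sphere."""

    def __init__(self, select_radius=0.05, dwell_seconds=1.0, dwell_eps=0.002):
        self.select_radius = select_radius  # grab distance in meters (assumed)
        self.dwell_seconds = dwell_seconds  # hold-still duration that releases
        self.dwell_eps = dwell_eps          # motion tolerance while dwelling
        self.grabbed = None
        self._last_pos = None
        self._still_since = None

    def update(self, sphere_pos, sphere_rot, objects):
        """sphere_pos: (3,) position; sphere_rot: 3x3 rotation matrix.
        objects: dict name -> {'pos': (3,) array, 'rot': 3x3 array}."""
        sphere_pos = np.asarray(sphere_pos, dtype=float)
        if self.grabbed is None:
            # Selection: grab the first object within reach of the sphere.
            for name, obj in objects.items():
                if np.linalg.norm(obj["pos"] - sphere_pos) < self.select_radius:
                    self.grabbed = name
                    # Remember the object's pose in the sphere's local frame.
                    self._offset = sphere_rot.T @ (obj["pos"] - sphere_pos)
                    self._rel_rot = sphere_rot.T @ obj["rot"]
                    break
        else:
            # Manipulation: the object rigidly follows the sphere's pose.
            obj = objects[self.grabbed]
            obj["pos"] = sphere_pos + sphere_rot @ self._offset
            obj["rot"] = sphere_rot @ self._rel_rot
            # Release: holding the sphere still for dwell_seconds lets go.
            still = (self._last_pos is not None and
                     np.linalg.norm(sphere_pos - self._last_pos) < self.dwell_eps)
            if still:
                self._still_since = self._still_since or time.monotonic()
                if time.monotonic() - self._still_since > self.dwell_seconds:
                    self.grabbed = None
                    self._still_since = None
            else:
                self._still_since = None
        self._last_pos = sphere_pos
```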