Title: Augmented reality-based vision-aid indoor navigation system in GPS denied environment
High-accuracy localization and user position tracking are critical to improving the quality of augmented reality environments. The biggest challenge facing developers is localizing the user based on the visible surroundings. Current solutions rely on the Global Positioning System (GPS) for tracking and orientation. However, GPS receivers have an accuracy of about 10 to 30 meters, which is not accurate enough for augmented reality, which needs precision measured in millimeters or smaller. This paper describes the development and demonstration of a head-worn augmented reality (AR) based vision-aid indoor navigation system that localizes the user without relying on a GPS signal. A commercially available augmented reality headset allows individuals to capture the field of vision using the front-facing camera in real time. Utilizing captured image features as navigation-related landmarks allows localizing the user in the absence of a GPS signal. The proposed method involves three steps: detailed front-scene camera data are collected and processed for landmark recognition; the individual's current position is detected and located using feature matching; and arrows are displayed to indicate areas that require additional data collection if needed. Computer simulations indicate that the proposed augmented reality-based vision-aid indoor navigation system can provide precise simultaneous localization and mapping in a GPS-denied environment.
Keywords: Augmented reality, navigation, GPS, HoloLens, vision, positioning system, localization
Award ID(s):
1942053
PAR ID:
10309920
Author(s) / Creator(s):
Editor(s):
Agaian, Sos S.; DelMarco, Stephen P.; Asari, Vijayan K.
Date Published:
Journal Name:
Mobile Multimedia/Image Processing, Security, and Applications
Volume:
10993
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
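
As an illustration of the feature-matching localization step described in the abstract above, the following is a minimal sketch, not the paper's implementation: it matches ORB descriptors from a query camera frame against a pre-collected database of landmark images with known positions. The landmark_db layout, the descriptor distance threshold, and the file names are assumptions for illustration only.

```python
# Minimal sketch: localize a query frame by matching ORB features against
# pre-collected landmark images with known positions (hypothetical data layout).
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(image):
    """Detect ORB keypoints and compute binary descriptors for one frame."""
    keypoints, descriptors = orb.detectAndCompute(image, None)
    return keypoints, descriptors

def best_landmark(query_image, landmark_db, max_distance=40):
    """Return the landmark entry whose descriptors best match the query frame.

    landmark_db: list of dicts with keys 'descriptors' and 'position' (x, y, z).
    """
    _, query_desc = describe(query_image)
    best_entry, best_score = None, -1
    for entry in landmark_db:
        matches = matcher.match(query_desc, entry["descriptors"])
        # Count only reasonably close matches as a crude similarity score.
        score = sum(1 for m in matches if m.distance < max_distance)
        if score > best_score:
            best_entry, best_score = entry, score
    return best_entry, best_score

# Usage (hypothetical files): the user's position is approximated by the
# position tagged on the best-matching landmark image.
# frame = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)
# landmark, score = best_landmark(frame, landmark_db)
# print(landmark["position"], score)
```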
More Like this
  1. Dennison, Mark S.; Krum, David M.; Sanders-Reed, John; Arthur, Jarvis (Ed.)
    This paper presents research on the use of penetrating radar combined with 3-D computer vision for real-time augmented reality enabled target sensing. Small-scale radar systems face the issue that positioning systems are inaccurate, non-portable, or challenged by poor GPS signals. The addition of modern computer vision to current cutting-edge penetrating radar technology expands the common 2-D imaging plane to 6 degrees of freedom. Because the radar scan itself is a vector whose length corresponds to depth from the transmitting and receiving antennae, these technologies used in conjunction can generate an accurate 3-D model of the internal structure of any material that radar can penetrate. The same computer vision device that localizes the radar data can also be used as the basis for an augmented reality system. Augmented reality radar technology has applications in threat detection (human through-wall, IED, landmine) as well as civil applications (wall and door structure, buried item detection). For this project, the goal is to create a data registration pipeline and display the radar scan data visually in a 3-D environment using localization from a computer vision tracking device. Processed radar traces are overlaid in real time on an augmented reality screen where the user can view the radar signal intensity to identify and classify targets.
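
The data registration idea in the abstract above, a radar trace treated as a vector of depth samples placed into 3-D space using a pose from a vision tracking device, can be sketched as follows. This is a hedged illustration, not the authors' pipeline; the boresight convention and function names are assumptions.

```python
# Minimal sketch (not the authors' pipeline): place radar depth samples into a
# world frame using a 6-DoF pose from a vision tracking device. The +z
# boresight convention in the sensor frame is an assumption.
import numpy as np

def register_trace(trace, depths, rotation, translation):
    """Map a 1-D radar trace to 3-D world points along the antenna boresight.

    trace:       intensity per depth sample, shape (N,)
    depths:      depth of each sample in meters, shape (N,)
    rotation:    3x3 rotation matrix of the radar in the world frame
    translation: (3,) position of the radar in the world frame
    Returns (points, intensities) where points has shape (N, 3).
    """
    boresight = rotation @ np.array([0.0, 0.0, 1.0])    # scan direction in world frame
    points = translation + np.outer(depths, boresight)  # one 3-D point per depth sample
    return points, trace

# Usage with a made-up pose and a 512-sample trace:
# pts, vals = register_trace(np.random.rand(512), np.linspace(0, 2.0, 512),
#                            np.eye(3), np.array([1.0, 0.5, 1.2]))
```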
  2. A wide variety of mobile phone emergency response applications exist for both indoor and outdoor environments. However, outdoor applications mostly provide accident and navigation information to users, while indoor applications are limited by the unavailability of GPS positioning and problems with WiFi access. This paper describes the proposed mobile augmented reality system (MARS) that allows both outdoor and indoor users to retrieve and manage information for emergency response and navigation that is spatially registered with the real world. The proposed MARS utilizes feature extraction for location sensing in indoor environments, since GPS and WiFi systems might not work during emergencies. This paper describes the implementation of this MARS deployed on tablets and smartphones for building evacuation purposes. The MARS delivers critical evacuation information to smartphone users in indoor environments and navigation information in outdoor environments. A limited user study was conducted to test the effectiveness of the proposed MARS using the mobile phone usability questionnaire (MPUQ) framework. The results show that AR features were well integrated into the MARS and that it helps users identify the nearest exit in the building during an emergency evacuation.
  3. Accurate indoor positioning has attracted a lot of attention for a variety of indoor location-based applications, with the rapid development of mobile devices and their onboard sensors. A hybrid indoor localization method is proposed based on a single off-the-shelf smartphone, which takes advantage of its various onboard sensors, including the camera, gyroscope, and accelerometer. The proposed approach integrates three components: visual-inertial odometry (VIO), point-based area mapping, and plane-based area mapping. A simplified RANSAC strategy is employed in plane matching for the sake of processing time. Since Apple's augmented reality platform ARKit provides many powerful high-level APIs for world tracking, plane detection, and 3D modeling, a practical smartphone app for indoor localization is developed on an iPhone that can run ARKit. Experimental results demonstrate that our plane-based method can achieve an accuracy of about 0.3 meter using a much more lightweight model, while achieving more accurate results than the point-based model that directly uses ARKit's area mapping. The size of the plane-based model is less than 2 KB for a closed-loop corridor area of about 45 m x 15 m, compared to about 10 MB for the point-based model.
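
The simplified RANSAC plane-matching idea mentioned in the abstract above can be illustrated with a short plane-fitting sketch. The iteration count and inlier threshold below are assumptions, not the paper's parameters.

```python
# Minimal sketch of a simplified RANSAC plane fit over a 3-D point set;
# thresholds and iteration budget are illustrative assumptions.
import numpy as np

def ransac_plane(points, iterations=50, threshold=0.02, rng=None):
    """Fit a plane (unit normal n, offset d with n.p + d = 0) to points of shape (N, 3)."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_plane = 0, None
    for _ in range(iterations):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        # Count points whose distance to the candidate plane is below the threshold.
        distances = np.abs(points @ normal + d)
        inliers = np.count_nonzero(distances < threshold)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```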
  4. Indoor navigation is necessary for users to explore large unfamiliar indoor environments such as airports, shopping malls, and hospital complexes, and it relies on the capability of continuously tracking a user's location. A typical indoor navigation system is built on top of a suitable Indoor Positioning System (IPS) and requires the user to periodically submit location queries to learn their whereabouts and thereby receive up-to-date navigation information. Received signal strength (RSS)-based IPSes are considered among the most classical IPSes; they locate a user by comparing the user's RSS measurement with fingerprints collected at different locations in advance. Despite their significant advantages, existing RSS-IPSes suffer from two key challenges, the ambiguity of RSS fingerprints and device diversity, which may greatly reduce positioning accuracy. In this paper, we introduce the design and evaluation of CITS, a novel RSS-based continuous indoor tracking system that can effectively cope with fingerprint ambiguity and device diversity via differential RSS fingerprint matching. Detailed experimental studies confirm the significant advantages of CITS over prior RSS-based solutions.
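
Differential RSS fingerprint matching, as referenced in the abstract above, can be sketched as follows: pairwise differences between access-point readings cancel a constant per-device offset, which is one standard way to address device diversity. This is not a reproduction of the CITS algorithm; the nearest-neighbor matching and data layout are assumptions.

```python
# Minimal sketch of differential RSS matching: comparing pairwise differences of
# access-point RSS values cancels a constant per-device offset.
import numpy as np

def differential(rss):
    """Pairwise RSS differences for one measurement vector over a fixed AP order."""
    rss = np.asarray(rss, dtype=float)
    i, j = np.triu_indices(len(rss), k=1)
    return rss[i] - rss[j]            # a constant device offset cancels in each difference

def locate(measurement, fingerprints):
    """Nearest-neighbor search over a dict {location: rss_vector} in differential space."""
    query = differential(measurement)
    return min(fingerprints.items(),
               key=lambda item: np.linalg.norm(query - differential(item[1])))[0]

# Usage with made-up fingerprints from 4 access points:
# db = {"roomA": [-40, -62, -71, -55], "roomB": [-58, -48, -66, -70]}
# print(locate([-45, -67, -76, -60], db))  # a -5 dB device offset does not change the result
```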
  5. Blind & visually impaired (BVI) individuals and those with Autism Spectrum Disorder (ASD) each face unique challenges in navigating unfamiliar indoor environments. In this paper, we propose an indoor positioning and navigation system that guides a user from point A to point B indoors with high accuracy while augmenting their situational awareness. This system has three major components: location recognition (a hybrid indoor localization app that uses Bluetooth Low Energy beacons and Google Tango to provide high accuracy), object recognition (a body-mounted camera to provide the user momentary situational awareness of objects and people), and semantic recognition (map-based annotations to alert the user of static environmental characteristics). This system also features personalized interfaces built upon the unique experiences that both BVI and ASD individuals have in indoor wayfinding and tailors its multimodal feedback to their needs. Here, the technical approach and implementation of this system are discussed, and the results of human subject tests with both BVI and ASD individuals are presented. In addition, we discuss and show the system’s user-centric interface and present points for future work and expansion. 
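
For the Bluetooth Low Energy beacon component mentioned in the abstract above, a standard way to turn beacon RSSI into a rough distance is the log-distance path-loss model, sketched below. The transmit power at 1 m and the path-loss exponent are assumptions, and the paper's actual fusion with Google Tango is not shown.

```python
# Minimal sketch: estimate distance to a BLE beacon from RSSI using the
# log-distance path-loss model. tx_power_dbm (RSSI at 1 m) and the path-loss
# exponent are assumed values, not the authors' calibration.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Distance in meters from the model RSSI = TX - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Usage: a reading of -75 dBm with the defaults corresponds to roughly 6.3 m.
# print(rssi_to_distance(-75))
```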