Title: Augmented reality-based vision-aid indoor navigation system in GPS denied environment
High-accuracy localization and user position tracking are critical to improving the quality of augmented reality environments. The biggest challenge facing developers is localizing the user based on visible surroundings. Current solutions rely on the Global Positioning System (GPS) for tracking and orientation. However, GPS receivers have an accuracy of about 10 to 30 meters, which is not accurate enough for augmented reality applications that need precision measured in millimeters or finer. This paper describes the development and demonstration of a head-worn augmented reality (AR) based vision-aid indoor navigation system that localizes the user without relying on a GPS signal. A commercially available augmented reality headset allows individuals to capture their field of vision using the front-facing camera in real time. Utilizing captured image features as navigation-related landmarks allows the system to localize the user in the absence of a GPS signal. The proposed method involves three steps: detailed front-scene camera data is collected and processed for landmark recognition; the individual's current position is detected and located using feature matching; and arrows are displayed to indicate areas that require more data collection, if needed. Computer simulations indicate that the proposed augmented reality-based vision-aid indoor navigation system can provide precise simultaneous localization and mapping in a GPS-denied environment.
Keywords: Augmented reality, navigation, GPS, HoloLens, vision, positioning system, localization
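As a rough illustration of the feature-matching step above, the following Python sketch matches ORB descriptors from a headset camera frame against a pre-collected landmark database using OpenCV. The names landmark_db and match_landmark, and all thresholds, are illustrative assumptions; the paper's actual feature pipeline and headset API are not reproduced here.

# Hypothetical sketch of landmark recognition via ORB feature matching.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_landmark(frame_gray, landmark_db):
    """Return the (landmark_id, position) whose stored descriptors best
    match the current frame, or None if no landmark matches well enough.

    landmark_db: list of (landmark_id, position_xyz, descriptors) tuples
    built during the offline data-collection step (an assumed format).
    """
    _, frame_desc = orb.detectAndCompute(frame_gray, None)
    if frame_desc is None:
        return None
    best, best_score = None, 0
    for landmark_id, position, db_desc in landmark_db:
        matches = matcher.match(frame_desc, db_desc)
        # Keep only strong matches (small Hamming distance).
        good = [m for m in matches if m.distance < 40]
        if len(good) > best_score:
            best, best_score = (landmark_id, position), len(good)
    # Require a minimum number of good matches before trusting the result.
    return best if best_score >= 20 else None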
Award ID(s):
1942053
NSF-PAR ID:
10309920
Author(s) / Creator(s):
; ; ; ;
Editor(s):
Agaian, Sos S.; DelMarco, Stephen P.; Asari, Vijayan K.
Date Published:
Journal Name:
Mobile Multimedia/Image Processing, Security, and Applications
Volume:
10993
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Dennison, Mark S.; Krum, David M.; Sanders-Reed, John; Arthur, Jarvis (Ed.)
    This paper presents research on the use of penetrating radar combined with 3-D computer vision for real-time augmented reality enabled target sensing. Small-scale radar systems face the issue that positioning systems are inaccurate, non-portable, or challenged by poor GPS signals. The addition of modern computer vision to current cutting-edge penetrating radar technology expands the common 2-D imaging plane to 6 degrees of freedom. Because the radar scan itself is a vector whose length corresponds to depth from the transmitting and receiving antennae, these technologies used in conjunction can generate an accurate 3-D model of the internal structure of any material that radar can penetrate. The same computer vision device that localizes the radar data can also serve as the basis for an augmented reality system. Augmented reality radar technology has applications in threat detection (human through-wall, IED, landmine) as well as civil applications (wall and door structure, buried item detection). For this project, the goal is to create a data registration pipeline and display the radar scan data visually in a 3-D environment using localization from a computer vision tracking device. Processed radar traces are overlaid in real time on an augmented reality screen where the user can view the radar signal intensity to identify and classify targets.
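    As an illustrative reading of the depth-vector observation above (not the authors' pipeline), the Python sketch below places each radar range bin in world coordinates using a 6DOF pose from the vision tracker; the pose convention and the +z boresight axis are assumptions.

    # Hypothetical registration of a 1-D radar trace into 3-D world coordinates.
    import numpy as np

    def register_trace(amplitudes, bin_spacing_m, R, t):
        """amplitudes: (N,) intensity per range bin; bin_spacing_m: depth per bin;
        R (3x3), t (3,): assumed world-from-antenna rotation and translation.
        Returns an (N, 4) array of [x, y, z, intensity] points."""
        depths = np.arange(len(amplitudes)) * bin_spacing_m
        # Each bin lies along the antenna boresight, taken here as local +z.
        zeros = np.zeros_like(depths)
        local = np.column_stack([zeros, zeros, depths])
        world = local @ R.T + t
        return np.column_stack([world, amplitudes])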
  2. Accurate indoor positioning has attracted considerable attention for a variety of indoor location-based applications, with the rapid development of mobile devices and their onboard sensors. A hybrid indoor localization method is proposed based on a single off-the-shelf smartphone, which takes advantage of its various onboard sensors, including the camera, gyroscope, and accelerometer. The proposed approach integrates three components: visual-inertial odometry (VIO), point-based area mapping, and plane-based area mapping. A simplified RANSAC strategy is employed in plane matching for the sake of processing time. Since Apple's augmented reality platform ARKit provides many powerful high-level APIs for world tracking, plane detection, and 3D modeling, a practical smartphone app for indoor localization is developed on an iPhone that can run ARKit. Experimental results demonstrate that our plane-based method can achieve an accuracy of about 0.3 meters: although it relies on a much more lightweight model, it achieves more accurate results than the point-based model that directly uses ARKit's area mapping. The plane-based model is less than 2 KB for a closed-loop corridor area of about 45 m × 15 m, compared to about 10 MB for the point-based model.
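    For illustration only, a simplified RANSAC plane fit of the kind mentioned above might look like the Python sketch below; the iteration count and inlier threshold are assumed values, and the paper's actual plane-matching strategy is not reproduced.

    # Hypothetical simplified RANSAC plane fit over 3-D feature points.
    import numpy as np

    def fit_plane_ransac(points, iters=50, inlier_dist=0.05, seed=0):
        """points: (N, 3) array. Returns (unit normal n, offset d) with
        n . p + d = 0, or None if every sampled triple was degenerate."""
        rng = np.random.default_rng(seed)
        best_plane, best_inliers = None, 0
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue  # sampled points are nearly collinear
            n /= norm
            d = -np.dot(n, p0)
            inliers = int(np.sum(np.abs(points @ n + d) < inlier_dist))
            if inliers > best_inliers:
                best_plane, best_inliers = (n, d), inliers
        return best_plane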
  3. Indoor navigation is necessary for users to explore large unfamiliar indoor environments such as airports, shopping malls, and hospital complexes, and it relies on the capability of continuously tracking a user's location. A typical indoor navigation system is built on top of a suitable Indoor Positioning System (IPS) and requires the user to periodically submit location queries to learn their whereabouts and thereby receive up-to-date navigation information. Received signal strength (RSS)-based IPSes are considered among the most classical IPSes; they locate a user by comparing the user's RSS measurement with fingerprints collected at different locations in advance. Despite their significant advantages, existing RSS-IPSes suffer from two key challenges, the ambiguity of RSS fingerprints and device diversity, which may greatly reduce positioning accuracy. In this paper, we introduce the design and evaluation of CITS, a novel RSS-based continuous indoor tracking system that can effectively cope with fingerprint ambiguity and device diversity via differential RSS fingerprint matching. Detailed experimental studies confirm the significant advantages of CITS over prior RSS-based solutions.
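    The differential-fingerprint idea lends itself to a short illustration: pairwise RSS differences cancel any device-specific additive gain, so fingerprints taken by different phones become comparable. The Python sketch below uses a generic nearest-neighbor rule as a stand-in, not the actual CITS matching algorithm.

    # Hypothetical differential RSS fingerprint matching.
    import numpy as np

    def differential(rss):
        """Map an RSS vector (one dBm value per access point) to its pairwise
        differences rss[i] - rss[j], i < j, cancelling a per-device offset."""
        rss = np.asarray(rss, dtype=float)
        i, j = np.triu_indices(len(rss), k=1)
        return rss[i] - rss[j]

    def locate(measurement, fingerprints):
        """fingerprints: list of (location, rss_vector) collected offline.
        Returns the location whose differential fingerprint is nearest."""
        q = differential(measurement)
        return min(fingerprints,
                   key=lambda f: np.linalg.norm(differential(f[1]) - q))[0]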
  4. The Georgia Tech Miniature Autonomous Blimp (GT-MAB) needs localization algorithms to navigate to waypoints in an indoor environment without leveraging an external motion capture system. Indoor aerial robots often require a motion capture system for localization or employ simultaneous localization and mapping (SLAM) algorithms for navigation. The proposed strategy for GT-MAB localization can be accomplished using lightweight sensors on a weight-constrained platform like the GT-MAB. We train an end-to-end convolutional neural network (CNN) that predicts the horizontal position and heading of the GT-MAB from video collected by an onboard monocular RGB camera, while the height of the GT-MAB is estimated from measurements by a time-of-flight (ToF) single-beam laser sensor. The monocular camera and the single-beam laser sensor are sufficient for the localization algorithm to localize the GT-MAB in real time, achieving average 3D positioning errors of less than 20 cm and average heading errors of less than 3 degrees. With the accuracy of our proposed localization method, we are able to use simple proportional-integral-derivative controllers to control the GT-MAB for waypoint navigation. Experimental results on waypoint following are provided, demonstrating the use of a CNN as the primary localization method for estimating the pose of an indoor robot and successfully enabling navigation to specified waypoints.
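    A minimal sketch of that split, assuming a toy network rather than the GT-MAB's actual architecture: a small CNN regresses x, y, and heading (encoded as cos/sin to avoid angle wraparound) from a monocular frame, while the ToF range supplies height directly.

    # Hypothetical CNN pose regressor plus ToF height, in PyTorch.
    import torch
    import torch.nn as nn

    class PoseNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            # Outputs: x, y, and heading as (cos, sin) to avoid wraparound.
            self.head = nn.Linear(32, 4)

        def forward(self, image):          # image: (B, 3, H, W)
            z = self.features(image).flatten(1)
            x, y, c, s = self.head(z).unbind(dim=1)
            return x, y, torch.atan2(s, c)

    def pose_3d(image, tof_height_m, net):
        """Combine the CNN's horizontal estimate with the ToF height reading."""
        x, y, heading = net(image)
        return x, y, tof_height_m, heading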
  5. Current collaborative augmented reality (AR) systems establish a common localization coordinate frame among users by exchanging and comparing maps comprised of feature points. However, relative positioning through map sharing struggles in dynamic or feature-sparse environments. It also requires that users exchange identical regions of the map, which may not be possible if they are separated by walls or facing different directions. In this paper, we present Cappella (like its musical inspiration, Cappella utilizes collaboration among agents to forgo the need for instrumentation), an infrastructure-free 6-degrees-of-freedom (6DOF) positioning system for multi-user AR applications that uses motion estimates and range measurements between users to establish an accurate relative coordinate system. Cappella uses visual-inertial odometry (VIO) in conjunction with ultra-wideband (UWB) ranging radios to estimate the relative position of each device in an ad hoc manner. The system leverages a collaborative particle filtering formulation that operates on sporadic messages exchanged between nearby users. Unlike visual landmark sharing approaches, this allows for collaborative AR sessions even if users do not share the same field of view, or if the environment is too dynamic for feature matching to be reliable. We show that not only is it possible to perform collaborative positioning without infrastructure or global coordinates, but that our approach provides nearly the same level of accuracy as fixed-infrastructure approaches for AR teaming applications. Cappella consists of open source UWB firmware and a reference mobile phone application that can display the location of team members in real time using mobile AR. We evaluate Cappella across multiple buildings under a wide variety of conditions, including a contiguous 30,000 square foot region spanning multiple floors, and find that it achieves median geometric error in 3D of less than 1 meter.
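    The range-update step of a collaborative particle filter like the one described can be sketched as follows; the Gaussian noise model and its sigma are illustrative assumptions, not Cappella's published formulation.

    # Hypothetical UWB range update for a collaborative particle filter.
    import numpy as np

    def range_update(particles, weights, peer_position, measured_range_m,
                     sigma_m=0.1):
        """particles: (N, 3) relative-position hypotheses for this device;
        weights: (N,) current particle weights; peer_position: (3,) estimate
        of the ranging peer. Reweights particles by how well each hypothesis
        explains the measured UWB range, then renormalizes."""
        predicted = np.linalg.norm(particles - peer_position, axis=1)
        likelihood = np.exp(-0.5 * ((predicted - measured_range_m) / sigma_m) ** 2)
        weights = weights * likelihood
        return weights / weights.sum()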