
Title: Augmented reality-based vision-aid indoor navigation system in GPS denied environment
High-accuracy localization and user position tracking are critical to the quality of augmented reality environments. The biggest challenge facing developers is localizing the user from the visible surroundings. Current solutions rely on the Global Positioning System (GPS) for tracking and orientation. However, GPS receivers have an accuracy of about 10 to 30 meters, which is far too coarse for augmented reality, which needs precision measured in millimeters or smaller. This paper describes the development and demonstration of a head-worn augmented reality (AR) based vision-aid indoor navigation system that localizes the user without relying on a GPS signal. A commercially available augmented reality headset lets the wearer capture the field of vision with its front-facing camera in real time. Using features extracted from the captured images as navigation landmarks allows the system to localize the user in the absence of a GPS signal. The proposed method involves three steps: collecting detailed front-scene camera data to build a landmark-recognition database; detecting and locating the user's current position by feature matching; and displaying arrows to indicate areas where more data collection is needed. Computer simulations indicate that the proposed augmented reality-based vision-aid indoor navigation system can provide precise simultaneous localization and mapping in a GPS-denied environment.
Keywords: Augmented reality, navigation, GPS, HoloLens, vision, positioning system, localization
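The feature-matching step of the pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: the landmark names, descriptor dimensions, and random data are hypothetical stand-ins for the binary descriptors (e.g. ORB) a headset camera would actually produce, and Lowe's ratio test stands in for whatever matching criterion the system uses.

```python
# Sketch of step 2: localize the user by matching descriptors from the
# current camera frame against a database of landmark descriptor sets
# collected in step 1. All data below is synthetic and illustrative.
import numpy as np

def match_count(query, landmark, ratio=0.75):
    """Count query descriptors whose nearest landmark descriptor passes
    Lowe's ratio test (nearest / second-nearest distance < ratio)."""
    count = 0
    for q in query:
        d = np.linalg.norm(landmark - q, axis=1)  # distances to all landmark descriptors
        d.sort()
        if len(d) >= 2 and d[0] < ratio * d[1]:
            count += 1
    return count

def localize(query, database):
    """Return the landmark whose descriptor set best matches the frame."""
    return max(database, key=lambda name: match_count(query, database[name]))

rng = np.random.default_rng(0)
database = {
    "corridor_A": rng.normal(0.0, 1.0, (50, 32)),  # hypothetical landmark
    "lobby":      rng.normal(5.0, 1.0, (50, 32)),  # hypothetical landmark
}
# A frame seen near the lobby: its descriptors cluster around the lobby's.
frame = database["lobby"][:20] + rng.normal(0.0, 0.05, (20, 32))
print(localize(frame, database))  # → lobby
```

A real system would replace the synthetic arrays with descriptors extracted from the headset's front-facing camera, and would return a pose rather than a landmark label.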
Editors:
Agaian, Sos S.; DelMarco, Stephen P.; Asari, Vijayan K.
Award ID(s):
1942053
NSF-PAR ID:
10309920
Journal Name:
Mobile Multimedia/Image Processing, Security, and Applications
Volume:
10993
Sponsoring Org:
National Science Foundation
More Like this
  1. Dennison, Mark S. ; Krum, David M. ; Sanders-Reed, John ; Arthur, Jarvis (Ed.)
    This paper presents research on the use of penetrating radar combined with 3-D computer vision for real-time augmented-reality-enabled target sensing. Small-scale radar systems face the issue that positioning systems are inaccurate, non-portable, or challenged by poor GPS signals. The addition of modern computer vision to current cutting-edge penetrating radar technology expands the common 2-D imaging plane to 6 degrees of freedom. Because the radar scan itself is a vector whose length corresponds to depth from the transmitting and receiving antennae, these technologies used in conjunction can generate an accurate 3-D model of the internal structure of any material that radar can penetrate. The same computer vision device that localizes the radar data can also serve as the basis for an augmented reality system. Augmented reality radar technology has applications in threat detection (human through-wall, IED, landmine) as well as civil uses (wall and door structure, buried item detection). For this project, the goal is to create a data registration pipeline and display the radar scan data visually in a 3-D environment using localization from a computer vision tracking device. Processed radar traces are overlaid in real time on an augmented reality screen where the user can view the radar signal intensity to identify and classify targets.
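The registration idea above — a radar trace is a vector of returns along the antenna boresight, so a tracked 6DOF pose places every sample in 3D — can be sketched in a few lines. The pose format and depth sampling here are assumptions for illustration, not the paper's data pipeline.

```python
# Hypothetical sketch: map per-depth radar samples to 3D world points
# along the antenna's aim direction, given a tracked pose.
import numpy as np

def register_trace(position, direction, depths):
    """Place radar samples at position + depth * (unit aim direction)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                      # normalize aim direction
    return np.asarray(position) + np.outer(depths, d)

depths = np.linspace(0.0, 0.5, 6)               # sample depths in meters
pts = register_trace([1.0, 2.0, 0.0], [0.0, 0.0, -1.0], depths)
print(pts[-1])  # deepest sample lands at [1. 2. -0.5]
```

Repeating this for every trace as the tracked antenna moves yields the 3-D internal model the abstract describes.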
  2. Accurate indoor positioning has attracted a lot of attention for a variety of indoor location-based applications, with the rapid development of mobile devices and their onboard sensors. A hybrid indoor localization method is proposed based on a single off-the-shelf smartphone, taking advantage of its various onboard sensors, including camera, gyroscope, and accelerometer. The proposed approach integrates three components: visual-inertial odometry (VIO), point-based area mapping, and plane-based area mapping. A simplified RANSAC strategy is employed in plane matching for the sake of processing time. Since Apple's augmented reality platform ARKit has many powerful high-level APIs for world tracking, plane detection, and 3D modeling, a practical smartphone app for indoor localization was developed on an iPhone that can run ARKit. Experimental results demonstrate that the plane-based method can achieve an accuracy of about 0.3 meter; it relies on a much more lightweight model yet achieves more accurate results than the point-based model that directly uses ARKit's area mapping. The plane-based model is less than 2 KB for a closed-loop corridor area of about 45 m × 15 m, compared to about 10 MB for the point-based model.
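A simplified RANSAC plane fit of the kind this abstract mentions can be sketched as below. The iteration count, inlier tolerance, and synthetic wall data are illustrative assumptions, not the paper's parameters: sample three points, fit a plane, and keep the hypothesis with the most inliers.

```python
# Sketch of simplified RANSAC plane fitting on synthetic "wall" points.
import numpy as np

def ransac_plane(points, iters=100, tol=0.02, rng=None):
    """Fit a plane n.x + d = 0 by RANSAC; return (normal, d, inlier count)."""
    rng = rng or np.random.default_rng(0)
    best_n, best_d, best_count = None, 0.0, -1
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                               # degenerate (collinear) sample
        n /= np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol     # point-to-plane distances
        if inliers.sum() > best_count:
            best_n, best_d, best_count = n, d, int(inliers.sum())
    return best_n, best_d, best_count

rng = np.random.default_rng(1)
wall = np.column_stack([rng.uniform(0, 5, 200),      # x along the wall
                        rng.uniform(0, 3, 200),      # y (height)
                        rng.normal(0, 0.005, 200)])  # z ≈ 0: the plane z = 0
n, d, count = ransac_plane(wall)
print(count)  # most of the 200 points should be inliers
```

Matching such fitted planes between a stored map and the live session is far cheaper than matching raw point clouds, which is consistent with the 2 KB vs. 10 MB model sizes reported above.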
  3. Current collaborative augmented reality (AR) systems establish a common localization coordinate frame among users by exchanging and comparing maps comprised of feature points. However, relative positioning through map sharing struggles in dynamic or feature-sparse environments. It also requires that users exchange identical regions of the map, which may not be possible if they are separated by walls or facing different directions. In this paper, we present Cappella (like its musical inspiration, Cappella utilizes collaboration among agents to forgo the need for instrumentation), an infrastructure-free 6-degrees-of-freedom (6DOF) positioning system for multi-user AR applications that uses motion estimates and range measurements between users to establish an accurate relative coordinate system. Cappella uses visual-inertial odometry (VIO) in conjunction with ultra-wideband (UWB) ranging radios to estimate the relative position of each device in an ad hoc manner. The system leverages a collaborative particle filtering formulation that operates on sporadic messages exchanged between nearby users. Unlike visual landmark sharing approaches, this allows for collaborative AR sessions even if users do not share the same field of view, or if the environment is too dynamic for feature matching to be reliable. We show that not only is it possible to perform collaborative positioning without infrastructure or global coordinates, but that our approach provides nearly the same level of accuracy as fixed-infrastructure approaches for AR teaming applications. Cappella consists of an open source UWB firmware and a reference mobile phone application that can display the location of team members in real time using mobile AR. We evaluate Cappella across multiple buildings under a wide variety of conditions, including a contiguous 30,000 square foot region spanning multiple floors, and find that it achieves median 3D geometric error of less than 1 meter.
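The core of the particle filtering idea can be sketched with a single UWB measurement update: weight candidate peer positions by how well they explain a range reading. The particle count, noise model, and numbers below are illustrative assumptions, not Cappella's implementation.

```python
# Sketch: one UWB range-measurement update over particles representing a
# peer's relative 2D position. Gaussian noise model is an assumption.
import numpy as np

rng = np.random.default_rng(2)
particles = rng.uniform(-10, 10, (5000, 2))       # candidate peer positions (m)
true_peer = np.array([3.0, -4.0])                 # ground truth, range 5 m
measured_range = np.linalg.norm(true_peer)        # ideal UWB reading

sigma = 0.1                                       # assumed ranging noise (m)
ranges = np.linalg.norm(particles, axis=1)
weights = np.exp(-0.5 * ((ranges - measured_range) / sigma) ** 2)
weights /= weights.sum()                          # normalize to a posterior

estimate_range = weights @ ranges                 # posterior mean range, close to 5.0
print(round(estimate_range, 2))
```

A single range only constrains the peer to a circle; in the formulation described above, the motion estimates from VIO and repeated ranging over time are what collapse this ambiguity into an accurate relative coordinate system.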
  4. The Georgia Tech Miniature Autonomous Blimp (GT-MAB) needs localization algorithms to navigate to waypoints in an indoor environment without leveraging an external motion capture system. Indoor aerial robots often require a motion capture system for localization or employ simultaneous localization and mapping (SLAM) algorithms for navigation. The proposed strategy for GT-MAB localization can be accomplished using lightweight sensors on a weight-constrained platform like the GT-MAB. We train an end-to-end convolutional neural network (CNN) that predicts the horizontal position and heading of the GT-MAB using video collected by an onboard monocular RGB camera, while the height of the GT-MAB is estimated from measurements by a time-of-flight (ToF) single-beam laser sensor. The monocular camera and the single-beam laser sensor are sufficient for the localization algorithm to localize the GT-MAB in real time, achieving average 3D positioning errors of less than 20 cm and average heading errors of less than 3 degrees. With the accuracy of the proposed localization method, we are able to use simple proportional-integral-derivative (PID) controllers to control the GT-MAB for waypoint navigation. Experimental results on waypoint following are provided, demonstrating the use of a CNN as the primary localization method for estimating the pose of an indoor robot and successfully enabling navigation to specified waypoints.
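The "simple PID controller" mentioned above can be sketched on one axis. The gains and the toy first-order plant are illustrative assumptions, not the GT-MAB's identified dynamics; the point is only how the localization estimate closes the control loop.

```python
# Sketch: a PID controller driving a single position axis toward a
# waypoint, with a toy damped double-integrator standing in for the blimp.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

dt, pos, vel = 0.1, 0.0, 0.0
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=dt)   # assumed gains
for _ in range(300):                        # 30 s of simulated flight
    u = pid.step(1.0 - pos)                 # waypoint at 1.0 m; error from localization
    vel += (u - 0.5 * vel) * dt             # toy plant: thrust minus drag
    pos += vel * dt
print(round(pos, 2))                        # should settle near 1.0
```

In the real system, `pos` would come from the CNN (horizontal) and ToF sensor (height) rather than from a simulated plant, with one such loop per controlled axis.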
  5. The development of modern cities heavily relies on the availability and quality of underground utilities that provide drinking water, sewage, electric power, and telecommunication services to sustain their growing populations. However, information on the localization and condition of subterranean infrastructure is generally not readily available, especially in areas with congested pipes. This impacts urban development, as poorly documented pipes may be hit during construction, affecting services and causing costly delays. Furthermore, aging components are prone to failure and may lead to wasted resources or the interruption of services. Ground penetrating radar (GPR) is a promising remote sensing technique that has recently been used for mapping and assessment of underground infrastructure. However, current commercial GPR survey systems are designed with wheel encoders or GPS for positioning. Wheel-encoder-based GPR surveys are restricted to linear routes only, preventing the use of GPR for accurate localization in city-wide underground infrastructure inspection, while GPS signals are degraded in urban canyons and unavailable in city tunnels. In this work, we present a new GPR system integrated with augmented reality (AR) based positioning that overcomes the limitations of current GPR systems to enable arbitrary-route scanning with high fidelity. It has the potential for automation of GPR surveys and integration with AR smartphone applications that could be used for better planning in urban development.
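The positioning difference described above can be illustrated in a few lines: a wheel encoder yields only a scalar distance along a straight route, whereas stamping each trace with a 2D pose from an AR tracker parameterizes an arbitrary path. The pose source and data layout here are assumptions for illustration.

```python
# Sketch: tag each GPR trace with an AR-tracked (x, y) pose, which a
# linear-route wheel encoder could not provide on a curved survey path.
import numpy as np

def tag_traces(poses, traces):
    """Pair each radar trace with the (x, y) pose at which it was taken."""
    return [{"x": float(x), "y": float(y), "trace": t}
            for (x, y), t in zip(poses, traces)]

# A curved survey path of radius 2 m (a quarter circle):
theta = np.linspace(0, np.pi / 2, 5)
poses = np.column_stack([2 * np.cos(theta), 2 * np.sin(theta)])
traces = [np.zeros(128) for _ in poses]          # placeholder radar returns
scan = tag_traces(poses, traces)
print(len(scan), round(scan[-1]["y"], 1))        # 5 traces; last at y = 2.0
```

With every trace carrying a pose instead of a distance-along-track, the scan can be gridded or interpolated over the whole surveyed area rather than along a single line.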