


Title: Direct Aerial Visual Geolocalization Using Deep Neural Networks
Unmanned aerial vehicles (UAVs) must keep track of their location in order to maintain flight plans. Currently, this task is performed almost entirely by a combination of Inertial Measurement Units (IMUs) and reference to the Global Navigation Satellite System (GNSS). Navigation by GNSS, however, is not always reliable, due to various causes both natural (reflection and blockage from objects, technical fault, inclement weather) and artificial (GPS spoofing and denial). In such GPS-denied situations, it is desirable to have additional methods for aerial geolocalization. One such method is visual geolocalization, where aircraft use their ground-facing cameras to localize and navigate. The state of the art in many ground-level image processing tasks involves the use of Convolutional Neural Networks (CNNs). We present here a study of how effectively a modern CNN designed for visual classification can be applied to the problem of Absolute Visual Geolocalization (AVL, localization without a prior location estimate). An Xception-based architecture is trained from scratch over a >1000 km² section of Washington County, Arkansas to directly regress latitude and longitude from images from different orthorectified high-altitude survey flights. On unseen image sets over the same region from different years and seasons, it achieves average localization error as low as 115 m, which localizes to 0.004% of the training area, or about 8% of the width of the 1.5 × 1.5 km input image. This demonstrates that CNNs are expressive enough to encode robust landscape information for geolocalization over large geographic areas. Furthermore, we discuss methods of providing uncertainty for CNN regression outputs, and future areas of potential improvement for the use of deep neural networks in visual geolocalization.
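The abstract's precision figures can be reproduced from its stated inputs (115 m mean error, a >1000 km² region, 1.5 km-wide input tiles). A minimal sketch, assuming the "0.004% of the training area" figure refers to the area of the mean-error circle; the function names are illustrative, not from the paper:

```python
import math

def error_area_fraction(mean_error_m, region_km2):
    """Area of the mean-error circle as a fraction of the training region."""
    error_area_m2 = math.pi * mean_error_m ** 2
    return error_area_m2 / (region_km2 * 1e6)  # km^2 -> m^2

def error_vs_tile_width(mean_error_m, tile_width_m):
    """Mean error as a fraction of the input image width."""
    return mean_error_m / tile_width_m

frac = error_area_fraction(115.0, 1000.0)         # ~4.2e-5
width_ratio = error_vs_tile_width(115.0, 1500.0)  # ~0.077
print(f"{frac * 100:.4f}% of area, {width_ratio:.1%} of tile width")
```

Both values round to the 0.004% and ~8% quoted above.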
Award ID(s):
1946391
NSF-PAR ID:
10321717
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Remote Sensing
Volume:
13
Issue:
19
ISSN:
2072-4292
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Pollen is used to investigate a diverse range of ecological problems, from identifying plant–pollinator relationships to tracking flowering phenology. Pollen types are identified according to a set of distinctive morphological characters which are understood to capture taxonomic differences and phylogenetic relationships among taxa. However, categorizing morphological variation among hyperdiverse pollen samples represents a challenge even for an expert analyst.

    We present an automated workflow for pollen analysis, from the automated scanning of pollen sample slides to the automated detection and identification of pollen taxa using convolutional neural networks (CNNs). We analysed aerial pollen samples from lowland Panama and used a microscope slide scanner to capture three‐dimensional representations of 150 sample slides. These pollen sample images were annotated by an expert using a virtual microscope. Metadata were digitally recorded for ~100 pollen grains per slide, including location, identification and the analyst's confidence of the given identification. We used these annotated images to train and test our detection and classification CNN models. Our approach is two‐part. We first compared three methods for training CNN models to detect pollen grains on a palynological slide. We next investigated approaches to training CNN models for pollen identification.

    Because the diversity of pollen taxa in environmental and palaeontological samples follows a long‐tailed distribution, we experimented with methods for addressing imbalanced representation using our most abundant 46 taxa. We found that properly weighting pollen taxa in our training objective functions yielded improved accuracy for individual taxa. Our average accuracy for the 46‐way classification problem was 82.3%. We achieved 89.5% accuracy for our 25 most abundant taxa.
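The reweighting idea described above can be sketched with one common scheme: inverse-frequency class weights inside a weighted cross-entropy objective. This is an assumption for illustration; the paper states that weighting taxa in the objective helped, but the exact formula used is not given here.

```python
import numpy as np

def inverse_frequency_weights(class_counts):
    """Weight each taxon inversely to its abundance; rarer taxa weigh more."""
    counts = np.asarray(class_counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean class-weighted negative log-likelihood over a batch."""
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    return float(np.mean(weights[labels] * -np.log(p)))

# Toy long-tailed taxon abundances (illustrative, not the paper's data)
counts = [500, 50, 5]
w = inverse_frequency_weights(counts)  # rare taxon (index 2) gets largest weight
```

With such weights, misclassifying a rare taxon costs proportionally more, counteracting the long-tailed distribution.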

    Pollen represents a challenging visual classification problem that can serve as a model for other areas of biology that rely on visual identification. Our results add to the body of research demonstrating the potential for a fully automated pollen classification system for environmental and palaeontological samples. Slide imaging, pollen detection and specimen identification can be automated to produce a streamlined workflow.

     
  2. Abstract. Here we describe the curriculum and outcomes from a data-intensive geomorphic analysis course, “Geoscience Field Issues Using High-Resolution Topography to Understand Earth Surface Processes”, which pivoted to virtual in 2020 due to the COVID-19 pandemic. The curriculum covers technologies for manual and remotely sensed topographic data methods, including (1) Global Positioning Systems and Global Navigation Satellite System (GPS/GNSS) surveys, (2) Structure from Motion (SfM) photogrammetry, and (3) ground-based (terrestrial laser scanning, TLS) and airborne lidar. Course content focuses on Earth-surface process applications but could be adapted for other geoscience disciplines. Many other field courses were canceled in summer 2020, so this course served a broad range of undergraduate and graduate students in need of a field course as part of degree or research requirements. Resulting curricular materials are available freely within the National Association of Geoscience Teachers' (NAGT's) “Teaching with Online Field Experiences” collection. The authors pre-collected GNSS data, uncrewed-aerial-system-derived (UAS-derived) photographs, and ground-based lidar, which students then used in course assignments. The course was run over a 2-week period and had synchronous and asynchronous components. Students created SfM models that incorporated post-processed GNSS ground control points and created derivative SfM and TLS products, including classified point clouds and digital elevation models (DEMs). Students were successfully able to (1) evaluate the appropriateness of a given survey/data approach given site conditions, (2) assess pros and cons of different data collection and post-processing methods in light of field and time constraints and limitations of each, (3) conduct error and geomorphic change analysis, and (4) propose or implement a protocol to answer a geomorphic question.
Overall, our analysis indicates the course had a successful implementation that met student needs as well as course-specific and NAGT learning outcomes, with 91 % of students receiving an A, B, or C grade. Unexpected outcomes of the course included student self-reflection and redirection and classmate support through a daily reflection and discussion post. Challenges included long hours in front of a computer, computing limitations, and burnout because of the condensed nature of the course. Recommended implementation improvements include spreading the course out over a longer period of time or adopting only part of the course and providing appropriate computers and technical assistance. This paper and published curricular materials should serve as an implementation and assessment guide for the geoscience community to use in virtual or in-person high-resolution topographic data courses that can be adapted for individual labs or for an entire field or data course.
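The geomorphic change analysis mentioned in the course outcomes is commonly done as a DEM of difference (DoD) with a propagated-error detection threshold. A minimal sketch under that assumption; the array values, per-DEM uncertainties, and 95%-confidence threshold form are illustrative, not from the course materials:

```python
import numpy as np

def dem_of_difference(dem_new, dem_old, sigma_new, sigma_old, t=1.96):
    """Elevation change between two DEMs, masking sub-detection change.

    sigma_new/sigma_old: vertical uncertainty (m) of each DEM;
    t=1.96 gives a ~95% confidence minimum level of detection.
    """
    dod = dem_new - dem_old
    min_detect = t * np.sqrt(sigma_new ** 2 + sigma_old ** 2)
    return np.where(np.abs(dod) >= min_detect, dod, 0.0)

# Toy 2x2 DEMs (elevations in meters)
old = np.array([[10.0, 10.2], [10.1, 10.0]])
new = np.array([[10.05, 10.9], [9.2, 10.02]])
change = dem_of_difference(new, old, sigma_new=0.05, sigma_old=0.05)
# small differences (0.05 m, 0.02 m) fall below the ~0.14 m threshold
```

Cells of `change` that survive the threshold represent erosion (negative) or deposition (positive) detectable above the combined survey error.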
  3. Flooding is one of the leading natural-disaster threats to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal measurements. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetric data acquisition platform that can quickly deliver high-resolution imagery because of its cost-effectiveness and its ability to fly at lower altitudes and enter hazardous areas. Different image classification methods, including Support Vector Machines (SVMs), have been used for flood extent mapping. In recent years, there has been significant improvement in remote sensing image classification using Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate its performance on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on datasets containing only one hundred training samples, and resulted in a highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with those obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs could extract flooded areas precisely from UAV images compared to traditional classifiers such as SVMs.
The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20%, and 89%, respectively.
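The per-class (water) accuracies quoted above come from a confusion matrix. A minimal sketch of that accuracy assessment, computing per-class recall from a confusion matrix; the toy matrix is illustrative, not the study's actual data:

```python
import numpy as np

def per_class_accuracy(confusion):
    """Per-class accuracy (recall): correct predictions / actual instances.

    Rows are true classes, columns are predicted classes.
    """
    cm = np.asarray(confusion, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# Toy matrix for classes [water, non-water]
cm = [[975, 25],   # 975 of 1000 true water pixels classified correctly
      [40, 960]]
acc = per_class_accuracy(cm)  # water accuracy = 975 / 1000
```

The same calculation over the study's full matrices would yield the 97.52% vs. 89% water-class figures reported for FCN-16s and SVM.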
  4.

We present a new method to improve the representational power of the features in Convolutional Neural Networks (CNNs). By studying traditional image processing methods and recent CNN architectures, we propose to use positional information in CNNs for effective exploration of feature dependencies. Rather than considering feature semantics alone, we incorporate spatial positions as an augmentation for feature semantics in our design. From this vantage, we present a Position-Aware Recalibration Module (PRM) which recalibrates features leveraging both feature semantics and position. Furthermore, inspired by multi-head attention, our module is capable of performing multiple recalibrations whose results are concatenated as the output. As PRM is efficient and easy to implement, it can be seamlessly integrated into various base networks and applied to many position-aware visual tasks. Compared to original CNNs, our PRM introduces a negligible number of parameters and FLOPs, while yielding better performance. Experimental results on ImageNet and MS COCO benchmarks show that our approach surpasses related methods by a clear margin with less computational overhead. For example, we improve ResNet50 by an absolute 1.75% (77.65% vs. 75.90%) on the ImageNet 2012 validation dataset, and by 1.5% to 1.9% mAP on the MS COCO validation dataset, with almost no computational overhead. Code is made publicly available.
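The idea of recalibrating features with positional information can be illustrated with a heavily simplified, single-head sketch: normalized coordinate maps are pooled together with the features, and the result gates each channel. This is an assumption for illustration only; the actual PRM operations differ, and the weights here are random placeholders rather than learned parameters.

```python
import numpy as np

def position_aware_recalibrate(feat, w, b):
    """Gate channels of feat (C, H, W) using semantics plus position.

    w: (C, C + 2) projection, b: (C,) bias -- stand-ins for learned params.
    """
    c, h, wdt = feat.shape
    ys, xs = np.mgrid[0:h, 0:wdt]
    # normalized (y, x) coordinate maps act as the positional augmentation
    pos = np.stack([ys / max(h - 1, 1), xs / max(wdt - 1, 1)])
    desc = np.concatenate([feat, pos]).mean(axis=(1, 2))  # pooled descriptor
    gate = 1.0 / (1.0 + np.exp(-(w @ desc + b)))          # sigmoid in (0, 1)
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 8, 8))
out = position_aware_recalibrate(f, rng.standard_normal((4, 6)), np.zeros(4))
```

Because the gate lies in (0, 1), the module can only attenuate channels, which is one reason such recalibration adds negligible parameters and FLOPs relative to the backbone.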

     
  5. In this paper, we present a model to obtain prior knowledge for organ localization in CT thorax images using three-dimensional convolutional neural networks (3D CNNs). Specifically, we use the knowledge obtained from CNNs in a Bayesian detector to establish the presence and location of a given target organ defined within a spherical coordinate system. We train a CNN to perform a soft detection of the target organ potentially present at any point, x = [r, Θ, Φ]^T. This probability outcome is used as a prior in a Bayesian model whose posterior probability serves to provide a more accurate solution to the target organ detection problem. The likelihoods for the Bayesian model are obtained by performing a spatial analysis of the organs in annotated training volumes. Thoracic CT images from the NSCLC–Radiomics dataset are used in our case study, which demonstrates the enhancement in robustness and accuracy of organ identification. The average detector accuracies for the right lung, left lung, and heart were 94.87%, 95.37%, and 90.76% after the CNN stage, respectively. Introducing spatial relationships using a Bayes classifier improved the detector accuracies to 95.14%, 96.20%, and 95.15%, respectively, showing a marked improvement in heart detection. This workflow improves the detection rate since the decision is made employing both lower-level features (edges, contours, etc.) and complex higher-level features (the spatial relationships between organs). This strategy also presents a new application of CNNs and a novel methodology for introducing higher-level context features, such as spatial relationships between objects at different locations in an image, to real-world object detection problems.
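The fusion step described above reduces to a Bayes update: the CNN's soft detection score serves as the prior, and the spatial analysis supplies likelihoods. A minimal sketch under that reading; the numeric values are illustrative, not from the paper:

```python
def posterior_presence(cnn_prior, lik_present, lik_absent):
    """P(organ present at x | spatial evidence), via Bayes' rule.

    cnn_prior:   CNN soft-detection score, used as P(present)
    lik_present: P(spatial evidence | present)
    lik_absent:  P(spatial evidence | absent)
    """
    num = lik_present * cnn_prior
    return num / (num + lik_absent * (1.0 - cnn_prior))

# CNN is 70% sure; the spatial model finds this location 4x more likely
# under "present" than "absent" -- the posterior rises above 90%.
p = posterior_presence(0.7, 0.8, 0.2)  # -> ~0.903
```

This shows how even a modest spatial likelihood ratio can sharpen an uncertain CNN detection, consistent with the reported jump in heart accuracy.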