

Title: Using Object-Oriented Classification for Coastal Management in the East Central Coast of Florida: A Quantitative Comparison between UAV, Satellite, and Aerial Data
High-resolution mapping of coastal habitats is invaluable for resource inventory, change detection, and aquaculture applications. However, coastal areas, especially the interiors of mangroves, are often difficult to access. An Unmanned Aerial Vehicle (UAV) equipped with a multispectral sensor offers an opportunity to improve upon satellite imagery for coastal management because of its very high spatial resolution, multispectral capability, and capacity for real-time observation. Despite the recent, rapid development of UAV mapping applications, few articles have quantitatively compared how much UAV multispectral mapping methods improve upon more conventional remote sensing data such as satellite imagery. The objective of this paper is to quantitatively demonstrate the improvements of a multispectral UAV mapping technique for higher-resolution imagery used in advanced mapping and assessment of coastal land cover. We performed multispectral UAV mapping fieldwork trials over the Indian River Lagoon along the central Atlantic coast of Florida. Ground Control Points (GCPs) were collected to generate a rigorously geo-referenced dataset of UAV imagery and to support comparison with geo-referenced satellite and aerial imagery. Multispectral satellite imagery (Sentinel-2) was also acquired to map land cover for the same region. NDVI and object-oriented classification methods were used to compare UAV and satellite mapping capabilities. Compared with aerial images acquired from the Florida Department of Environmental Protection, the UAV multispectral mapping method used in this study provided more detailed information on the physical conditions of the study area, improved land-feature delineation, and a significantly better mapping product than coarser-resolution satellite imagery. The study demonstrates a replicable UAV multispectral mapping method useful for study sites that lack high-quality data.
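The NDVI used for the UAV/satellite comparison is a simple normalized band ratio of near-infrared and red reflectance; a minimal sketch of its computation (the band arrays below are illustrative toy values, not data from the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    # eps guards against division by zero over no-data pixels
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance rasters (illustrative values, not study data)
nir_band = np.array([[0.60, 0.55], [0.30, 0.10]])
red_band = np.array([[0.10, 0.12], [0.25, 0.09]])
print(ndvi(nir_band, red_band))
```

Dense vegetation pushes NDVI toward +1, while water and bare substrate fall near or below zero, which is why the index separates mangrove canopy from open lagoon water in both UAV and Sentinel-2 imagery.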
Award ID(s): 1829890
NSF-PAR ID: 10111963
Journal Name: Drones
Volume: 3
Issue: 3
ISSN: 2504-446X
Page Range / eLocation ID: 60
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. State-of-the-art deep learning technology has been successfully applied to relatively small selected areas of very high spatial resolution (0.15 and 0.25 m) optical aerial imagery acquired by a fixed-wing aircraft to automatically characterize ice-wedge polygons (IWPs) in the Arctic tundra. However, any mapping of IWPs at regional to continental scales requires images acquired on different sensor platforms (particularly satellite) and a refined understanding of the method's performance stability across those platforms, established through reliable evaluation assessments. In this study, we examined the transferability of a deep learning Mask Region-Based Convolutional Neural Network (R-CNN) model for mapping IWPs in satellite remote sensing imagery (~0.5 m) covering 272 km² and unmanned aerial vehicle (UAV) imagery (0.02 m) covering 0.32 km². Multispectral images were obtained from the WorldView-2 satellite sensor, pan-sharpened to ~0.5 m, and from a 20 MP CMOS camera onboard a UAV, respectively. The training dataset included 25,489 and 6022 manually delineated IWPs from satellite and fixed-wing aircraft aerial imagery near the Arctic Coastal Plain, northern Alaska. Quantitative assessments showed that individual IWPs were correctly detected at up to 72% and 70%, and delineated at up to 73% and 68%, F1 score accuracy levels for satellite and UAV images, respectively. Expert-based qualitative assessments showed that IWPs were correctly detected at good (40–60%) and excellent (80–100%) accuracy levels for satellite and UAV images, respectively, and delineated at the excellent (80–100%) level for both.
We found that (1) regardless of spatial resolution and spectral bands, the deep learning Mask R-CNN model effectively mapped IWPs in both satellite and UAV images; (2) the model achieved better detection accuracy with finer image resolution, such as UAV imagery, yet better delineation accuracy with coarser image resolution, such as satellite imagery; (3) increasing the amount of training data with resolutions that differ between the training and application imagery does not necessarily improve Mask R-CNN performance in IWP mapping; and (4) overall, the model underestimates the total number of IWPs, particularly disjoint/incomplete IWPs.
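The detection and delineation scores above are F1 values, the harmonic mean of precision and recall; a minimal sketch with hypothetical match counts (the counts are illustrative, not from the study):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall for object detection/delineation."""
    precision = tp / (tp + fp)  # fraction of predicted polygons that match a reference
    recall = tp / (tp + fn)     # fraction of reference polygons that were found
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 72 of 100 reference polygons matched, 20 false detections
print(f1_score(tp=72, fp=20, fn=28))  # ≈ 0.75
```

Because F1 penalizes both missed polygons and false detections, it is a stricter summary than either precision or recall alone.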
  2. Over the last century, direct human modification has been a major driver of coastal wetland degradation, resulting in widespread losses of wetland vegetation and a transition to open water. High-resolution satellite imagery is widely available for monitoring changes in present-day wetlands; however, understanding the rates of wetland vegetation loss over the last century depends on the use of historical panchromatic aerial photographs. In this study, we compared manual image thresholding and an automated machine learning (ML) method in detecting wetland vegetation and open water from historical panchromatic photographs in the Florida Everglades, a subtropical wetland landscape. We compared the same classes delineated in the historical photographs to 2012 multispectral satellite imagery and assessed the accuracy of detecting vegetation loss over a 72-year timescale (1940 to 2012) for a range of minimum mapping units (MMUs). Overall classification accuracies were >95% across the historical photographs and satellite imagery, regardless of the classification method and MMU. We detected a 2.3–2.7 ha increase in open water pixels across all change maps (overall accuracies >95%). Our analysis demonstrated that ML classification methods can delineate wetland vegetation from open water in low-quality panchromatic aerial photographs and that a combination of images with different resolutions is compatible with change detection. The study also highlights how evaluating a range of MMUs can identify the effect of scale on detection accuracy and change-class estimates, and can help determine the most relevant scale of analysis for the process of interest.
  3. Arctic landscapes are rapidly changing with climate warming. Vegetation communities are restructuring, which in turn impacts wildlife, permafrost, carbon cycling, and climate feedbacks. Accurately monitoring vegetation change is thus crucial, but notable mismatches in scale occur between current field and satellite-based monitoring. Remote sensing from unmanned aerial vehicles (UAVs) has emerged as a bridge between field data and satellite imagery mapping. In this work we assess the viability of using high-resolution UAV imagery (RGB and multispectral), along with UAV-derived Structure from Motion (SfM), to predict cover, height, and above-ground biomass of common Arctic plant functional types (PFTs) across a wide range of vegetation community types. We collected field data and UAV imagery from 45 sites across Alaska and northwest Canada. We then classified UAV imagery by PFT, estimated cover and height, and modeled biomass from UAV-derived volume estimates. Here we present datasets summarizing these data.
  4. Deep learning (DL) convolutional neural networks (CNNs) have been rapidly adopted in very high spatial resolution (VHSR) satellite image analysis. DLCNN-based computer vision (CV) applications primarily aim at everyday object detection from standard red, green, blue (RGB) imagery, while earth science remote sensing applications focus on geo-object detection and classification from multispectral (MS) imagery. MS imagery includes RGB and narrow spectral channels from near- and/or middle-infrared regions of the reflectance spectrum. The central objective of this exploratory study is to understand to what degree MS band statistics govern DLCNN model predictions. We scaffold our analysis on a case study that uses Arctic tundra permafrost landform features called ice-wedge polygons (IWPs) as candidate geo-objects. We chose Mask R-CNN as the DLCNN architecture to detect IWPs from eight-band WorldView-2 VHSR satellite imagery. A systematic experiment was designed to understand the impact of choosing the optimal three-band combination on model prediction. We evaluated five cohorts of three-band combinations, coupled with statistical measures to gauge the spectral variability of the input MS bands. The candidate scenes produced high model detection accuracies, with F1 scores ranging between 0.89 and 0.95, for two different band combinations: coastal blue, blue, green (1,2,3) and green, yellow, red (3,4,5). The mapping workflow discerned the IWPs with low random and systematic error, on the order of 0.17–0.19 and 0.20–0.21, respectively, for band combination (1,2,3). Results suggest that the prediction accuracy of the Mask R-CNN model is significantly influenced by the input MS bands. Overall, our findings accentuate the importance of considering the image statistics of the input MS bands and carefully selecting optimal bands for DLCNN predictions when DLCNN architectures are restricted to three spectral channels.
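Gauging the spectral variability of candidate three-band combinations can be sketched as ranking all combinations by a simple statistic such as the per-band coefficient of variation; the synthetic chip, band ordering, and statistic below are illustrative assumptions, not the study's exact measures:

```python
import itertools
import numpy as np

# Hypothetical 8-band MS chip (bands x rows x cols); assumed WorldView-2 order:
# 1 coastal blue, 2 blue, 3 green, 4 yellow, 5 red, 6 red edge, 7 NIR1, 8 NIR2
rng = np.random.default_rng(42)
chip = rng.normal(loc=np.arange(1, 9)[:, None, None] * 10.0,
                  scale=3.0, size=(8, 32, 32))

def combo_variability(bands):
    """Mean per-band coefficient of variation as a simple spectral-variability score."""
    sub = chip[list(bands)]
    return float(np.mean(sub.std(axis=(1, 2)) / sub.mean(axis=(1, 2))))

# Exhaustively score every three-band combination and keep the most variable one
combos = list(itertools.combinations(range(8), 3))
best = max(combos, key=combo_variability)
print(tuple(b + 1 for b in best))  # 1-based band numbers
```

Exhaustive scoring is cheap here (only 56 combinations of 8 bands), so no heuristic search is needed before handing the chosen bands to a three-channel detector.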
  5. The use of multispectral geostationary satellites to study aquatic ecosystems improves the temporal frequency of observations and mitigates cloud obstruction, but no operational capability presently exists for the coastal and inland waters of the United States. The Advanced Baseline Imager (ABI) on the current iteration of the Geostationary Operational Environmental Satellites, termed the R Series (GOES-R), however, provides sub-hourly imagery and the opportunity to overcome this deficit and to leverage a large repository of existing GOES-R aquatic observations. The fulfillment of this opportunity is assessed herein using a spectrally simplified, two-channel aquatic algorithm consistent with ABI wave bands to estimate the diffuse attenuation coefficient for photosynthetically available radiation, Kd(PAR). First, an in situ ABI dataset was synthesized using a globally representative dataset of above- and in-water radiometric data products. Values of Kd(PAR) were estimated by fitting the ratio of the shortest and longest visible wave bands from the in situ ABI dataset to coincident, in situ Kd(PAR) data products. The algorithm was evaluated with an iterative cross-validation analysis in which 80% of the dataset was randomly partitioned for fitting and the remaining 20% was used for validation. The iteration producing the median coefficient of determination (R²) value (0.88) resulted in a root mean square difference of 0.319 m⁻¹, or 8.5% of the range in the validation dataset. Second, coincident mid-day images of central and southern California from ABI and from the Moderate Resolution Imaging Spectroradiometer (MODIS) were compared using Google Earth Engine (GEE). GEE default ABI reflectance values were adjusted based on a near-infrared signal.
Matchups between the ABI and MODIS imagery indicated similar spatial variability (R² = 0.60) between ABI adjusted blue-to-red reflectance ratio values and MODIS default values of the diffuse attenuation coefficient for spectral downward irradiance at 490 nm, Kd(490). This work demonstrates that if an operational capability to provide ABI aquatic data products were realized, the spectral configuration of ABI could support a sub-hourly, visible aquatic data product applicable to water-mass tracing and physical oceanography research.
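The two-channel algorithm and its 80/20 cross-validation can be sketched as a power-law fit of Kd(PAR) against a blue-to-red band ratio; the synthetic matchup data, functional form, and coefficients below are assumptions for illustration, not the study's dataset or fitted model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for an in situ matchup dataset (illustrative, not study data):
# clearer water -> higher blue-to-red reflectance ratio and lower Kd(PAR)
ratio = rng.uniform(0.3, 5.0, 200)
kd_true = 1.2 * ratio ** -0.8 * np.exp(rng.normal(0.0, 0.05, 200))

# 80/20 split: fit a power law in log-log space, validate on the holdout
idx = rng.permutation(200)
fit_i, val_i = idx[:160], idx[160:]
slope, intercept = np.polyfit(np.log(ratio[fit_i]), np.log(kd_true[fit_i]), 1)
kd_pred = np.exp(intercept) * ratio[val_i] ** slope

# Holdout skill: root mean square difference and coefficient of determination
rmsd = float(np.sqrt(np.mean((kd_pred - kd_true[val_i]) ** 2)))
ss_res = np.sum((kd_true[val_i] - kd_pred) ** 2)
ss_tot = np.sum((kd_true[val_i] - kd_true[val_i].mean()) ** 2)
r2 = float(1.0 - ss_res / ss_tot)
print(round(rmsd, 3), round(r2, 3))
```

Repeating the random split many times and reporting the median-R² iteration, as the abstract describes, guards against a single lucky or unlucky partition driving the reported skill.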

     