
Search for: All records

Award ID contains: 1800768

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Flood events have become more intense and more frequent due to heavy rainfall and hurricanes driven by global warming. Accurate floodwater extent maps are essential information sources for emergency management agencies and flood relief programs to direct their resources to the most affected areas. Synthetic Aperture Radar (SAR) data are superior to optical data for floodwater mapping, especially in vegetated areas and in forests adjacent to urban areas and critical infrastructure. Investigating floodwater mapping with various available SAR sensors and comparing their performance allows the identification of suitable SAR sensors for mapping inundated areas across different land covers, such as forests and vegetated areas. In this study, we investigated the performance of polarization configurations for flood boundary delineation in vegetated and open areas derived from Sentinel-1B C-band and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band data collected during flood events resulting from Hurricane Florence in eastern North Carolina. The datasets from the two sensors, collected on the same day over the same study area, were processed and classified into five land-cover classes using a machine learning method, the Random Forest classification algorithm. We compared the classification results of linear, dual, and full polarizations of the SAR datasets. The L-band fully polarized data achieved the highest flood-mapping accuracy, as the decomposition of fully polarized SAR data allows land-cover features to be identified based on their scattering mechanisms.
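The pixel-wise Random Forest classification described above can be sketched as follows. This is a minimal illustration only, assuming synthetic stand-in data: the feature values, class labels, and sample sizes are all hypothetical, not taken from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for per-pixel SAR backscatter features:
# four polarization channels (e.g., HH, HV, VH, VV, in dB) and a
# land-cover label in {0..4} for five classes such as open water,
# flooded vegetation, forest, urban, and bare soil.
rng = np.random.default_rng(42)
n_pixels = 2000
features = rng.normal(loc=-12.0, scale=4.0, size=(n_pixels, 4))
# Fabricated labels loosely tied to the features so the model has
# something learnable; real labels would come from reference maps.
labels = (features.sum(axis=1) // 8).astype(int) % 5

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"overall accuracy: {acc:.2f}")
```

In practice each polarization configuration (single, dual, full) would supply a different feature stack to the same classifier, and the resulting accuracies would be compared as in the study.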
    Free, publicly-accessible full text available December 1, 2023
  2. Inundation mapping is a critical task for damage assessment, emergency management, and prioritizing relief efforts during a flooding event. Remote sensing has been an effective tool for interpreting and analyzing water bodies and detecting floods over the past decades. In recent years, deep learning algorithms such as convolutional neural networks (CNNs) have demonstrated promising performance in remote sensing image classification for many applications, including inundation mapping. Unlike conventional algorithms, deep learning can learn features automatically from large datasets. This research compares and investigates the performance of two state-of-the-art methods for 3D inundation mapping: a deep learning-based image analysis and the Geomorphic Flood Index (GFI). The first method, deep learning image analysis, involves three steps: 1) image classification to delineate flood boundaries, 2) integration of the flood boundaries with topography data to create a three-dimensional (3D) water surface, and 3) comparison of the 3D water surface with the pre-flood topography to estimate floodwater depth. The second method, GFI, involves three phases: 1) calculating river basin morphological information, such as river height (hr) and elevation difference (H), 2) calibrating and measuring the GFI to delineate flood boundaries, and 3) calculating the coefficient parameter (α) and correcting the value of hr to estimate inundation depth. The methods were implemented to generate 3D inundation maps over Princeville, North Carolina, United States during Hurricane Matthew in 2016. The deep learning method demonstrated better performance, with a root mean square error (RMSE) of 0.26 m for water depth; it also achieved about 98% accuracy in delineating the flood boundaries using UAV imagery. This approach is efficient in extracting and creating a 3D flood extent map at different scales to support emergency response and recovery activities during a flood event.
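Step 3 of the deep learning method, differencing the reconstructed water surface against the pre-flood terrain inside the flood boundary, can be sketched as below. This is a toy illustration under assumed data: the grids, the flat water-surface fit, and the function name `estimate_depth` are all hypothetical, not from the paper.

```python
import numpy as np

def estimate_depth(water_surface, dem, flood_mask):
    """Floodwater depth = 3D water surface minus pre-flood DEM,
    evaluated only inside the delineated flood boundary."""
    depth = np.where(flood_mask, water_surface - dem, 0.0)
    return np.clip(depth, 0.0, None)  # clamp negative depths (noise) to zero

# Toy 4x4 elevation grids in metres; values are illustrative only.
dem = np.array([[10.0, 10.5, 11.0, 12.0],
                [ 9.5, 10.0, 10.5, 11.0],
                [ 9.0,  9.5, 10.0, 10.5],
                [ 8.5,  9.0,  9.5, 10.0]])
water_surface = np.full_like(dem, 10.2)  # assume a flat fitted water surface
flood_mask = dem < water_surface         # flooded where terrain lies below it

depth = estimate_depth(water_surface, dem, flood_mask)
print(depth.max())
```

A real implementation would interpolate a non-flat water surface from the classified flood boundary rather than assume a single elevation, but the depth calculation itself is this subtraction.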
  3. Flood occurrence is increasing due to the expansion of urbanization and extreme weather such as hurricanes; hence, research on methods of inundation monitoring and mapping has increased to reduce the severe impacts of flood disasters. This research studies and compares two methods for inundation depth estimation using UAV images and topographic data. The methods consist of three main stages: (1) extracting flooded areas and creating 2D inundation polygons using deep learning; (2) reconstructing the 3D water surface using the polygons and topographic data; and (3) deriving a water depth map from the 3D reconstructed water surface and a pre-flood DEM. The two methods differ in how they reconstruct the 3D water surface (stage 2). The first method uses structure from motion (SfM) to create a point cloud of the area from overlapping UAV images, and the water polygons resulting from stage 1 are applied to classify the water points in the cloud. The second method reconstructs the water surface by intersecting the water polygons with a pre-flood DEM created from pre-flood LiDAR data. We evaluated the proposed methods for inundation depth mapping over the Town of Princeville during a flooding event caused by Hurricane Matthew. The methods were compared and validated using USGS gauge water level data acquired during the flood event. The RMSEs for water depth using the SfM method and the integrated method based on deep learning and the DEM were 0.34 m and 0.26 m, respectively.
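The gauge-based validation reported above reduces to a root mean square error between estimated and observed water levels. A minimal sketch, using fabricated example values (the depths below are hypothetical, not the study's measurements):

```python
import numpy as np

def rmse(predicted, observed):
    """Root mean square error between estimated and gauge water levels."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Hypothetical estimated depths (m) at gauge locations vs. observations.
estimated = [1.10, 0.85, 1.40, 0.60]
observed  = [1.00, 0.90, 1.30, 0.75]
print(round(rmse(estimated, observed), 3))
```

Applying this to each method's depth estimates at the gauge locations yields the comparison figures (0.34 m for SfM, 0.26 m for the DEM-intersection method).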
  4. Abstract. This research examines the ability of deep learning methods to classify remote sensing images for agriculture applications. U-net and fully convolutional network (FCN-8s) models are fine-tuned, applied, and tested for crop/weed classification. The dataset for this study includes 60 top-down images of an organic carrot field, which were collected by an autonomous vehicle and labeled by experts. The FCN-8s model achieved 75.1% accuracy in detecting weeds, compared to 66.72% for U-net, using 60 training images. However, the U-net model performed better at detecting crops, achieving 60.48% compared to 47.86% for FCN-8s.
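The per-class detection figures quoted above amount to per-class accuracy (recall) on pixel label maps. A small sketch with toy label grids; the class encoding and values here are hypothetical, not the study's data:

```python
import numpy as np

def per_class_accuracy(pred, truth, cls):
    """Fraction of pixels truly of class `cls` that the model labeled
    correctly (per-class recall), as quoted for crop vs. weed detection."""
    mask = truth == cls
    return float(np.mean(pred[mask] == cls))

# Toy 4x4 label maps: 0 = soil, 1 = crop, 2 = weed.
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 2, 2],
                  [0, 0, 2, 2],
                  [1, 1, 0, 0]])
pred  = np.array([[0, 1, 0, 0],
                  [0, 1, 2, 2],
                  [0, 0, 2, 1],
                  [1, 1, 0, 0]])

print(per_class_accuracy(pred, truth, 1))  # crop recall -> 0.8
print(per_class_accuracy(pred, truth, 2))  # weed recall -> 0.75
```

Computed over whole segmentation outputs, this is the metric behind the 75.1% vs. 66.72% (weeds) and 60.48% vs. 47.86% (crops) comparison.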