Title: 3D Inundation Mapping: A Comparison Between Deep Learning Image Classification and Geomorphic Flood Index Approaches
Inundation mapping is a critical task for damage assessment, emergency management, and prioritizing relief efforts during a flooding event. Remote sensing has been an effective tool for interpreting and analyzing water bodies and detecting floods over the past decades. In recent years, deep learning algorithms such as convolutional neural networks (CNNs) have demonstrated promising performance in remote sensing image classification for many applications, including inundation mapping. Unlike conventional algorithms, deep learning can learn features automatically from large datasets. This research aims to compare and investigate the performance of two state-of-the-art methods for 3D inundation mapping: deep learning-based image analysis and the Geomorphic Flood Index (GFI). The first method, deep learning image analysis, involves three steps: 1) image classification to delineate flood boundaries, 2) integration of the flood boundaries with topography data to create a three-dimensional (3D) water surface, and 3) comparison of the 3D water surface with pre-flood topography to estimate floodwater depth. The second method, the GFI, involves three phases: 1) calculation of river basin morphological information, such as river height (hr) and elevation difference (H), 2) calibration and computation of the GFI to delineate flood boundaries, and 3) calculation of the coefficient parameter (α) and correction of the hr value to estimate inundation depth. The methods were implemented to generate 3D inundation maps over Princeville, North Carolina, United States during Hurricane Matthew in 2016. The deep learning method demonstrated better performance, with a root mean square error (RMSE) of 0.26 m for water depth, and achieved about 98% accuracy in delineating flood boundaries from UAV imagery. This approach is efficient at extracting and creating a 3D flood extent map at different scales to support emergency response and recovery activities during a flood event.
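The GFI phases above follow the relation commonly used in the GFI literature, GFI = ln(hr / H), with hr estimated from the contributing area of the hydrologically nearest river element through a hydraulic scaling law. The following is a minimal sketch under those assumptions only; the scaling coefficients, the calibrated threshold, and α are illustrative placeholders, not values from this study.

    # Illustrative sketch of a GFI-based extent and depth estimate.
    # Coefficient values (a, n, tau, alpha) are assumptions for demonstration.
    import numpy as np

    def gfi_inundation(dem, river_elev, river_area_km2, a=0.1, n=0.4, tau=0.0, alpha=1.0):
        """dem, river_elev: ground elevation and nearest-river elevation grids (m);
        river_area_km2: contributing area of the nearest river element (km^2)."""
        h_r = a * river_area_km2 ** n                     # water level in the nearest river element
        H = dem - river_elev                              # elevation difference to that element
        gfi = np.log(h_r / np.maximum(H, 1e-6))           # Geomorphic Flood Index
        flooded = gfi >= tau                              # extent: GFI above the calibrated threshold
        depth = np.where(flooded, np.maximum(alpha * h_r - H, 0.0), 0.0)  # corrected depth
        return flooded, depth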
Award ID(s):
1800768
NSF-PAR ID:
10409067
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Remote Sensing
Volume:
3
ISSN:
2673-6187
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract. High-resolution remote sensing imagery has been increasingly used for flood applications. Different methods have been proposed for flood extent mapping from high-resolution data, from water indices to image classification. Among these methods, deep learning methods have shown promising results for flood extent extraction; however, these two-dimensional (2D) image classification methods cannot directly provide water level measurements. This paper presents an integrated approach to extract the flood extent in three dimensions (3D) from UAV data by combining a 2D deep learning-based flood map with a 3D point cloud extracted using a Structure from Motion (SfM) method. We fine-tuned a pretrained Visual Geometry Group 16 (VGG-16) based fully convolutional model to create a 2D inundation map. The 2D classified map was overlaid on the SfM-based 3D point cloud to create a 3D flood map. The floodwater depth was estimated by subtracting a pre-flood Digital Elevation Model (DEM) from the SfM-based DEM. The results show that the proposed method is efficient in creating a 3D flood extent map to support emergency response and recovery activities during a flood event.
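    A simplified sketch of the depth-estimation step described above, assuming the SfM-derived DEM, the pre-flood DEM, and the 2D classification mask are already co-registered arrays on the same grid (the variable names are illustrative):

        import numpy as np

        def flood_depth(sfm_dem, preflood_dem, flood_mask):
            """Depth = SfM water-surface elevation minus pre-flood ground elevation,
            evaluated only where the 2D classifier labeled the pixel as water."""
            depth = sfm_dem - preflood_dem
            return np.where(flood_mask & (depth > 0), depth, 0.0)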
  2. Flood occurrence is increasing due to the expansion of urbanization and extreme weather such as hurricanes; hence, research on methods of inundation monitoring and mapping has increased to reduce the severe impacts of flood disasters. This research studies and compares two methods for inundation depth estimation using UAV images and topographic data. The methods consist of three main stages: (1) extracting flooded areas and creating 2D inundation polygons using deep learning; (2) reconstructing the 3D water surface using the polygons and topographic data; and (3) deriving a water depth map using the 3D reconstructed water surface and a pre-flood DEM. The two methods differ in how they reconstruct the 3D water surface (stage 2). The first method uses structure from motion (SfM) to create a point cloud of the area from overlapping UAV images, and the water polygons resulting from stage 1 are applied to classify the water points in the cloud. The second method reconstructs the water surface by intersecting the water polygons with a pre-flood DEM created from pre-flood LiDAR data. We evaluate the proposed methods for inundation depth mapping over the Town of Princeville during a flooding event caused by Hurricane Matthew. The methods are compared and validated using USGS gauge water level data acquired during the flood event. The RMSEs for water depth using the SfM method and the integrated method based on deep learning and the DEM were 0.34 m and 0.26 m, respectively.
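    A small sketch of the validation step, computing the RMSE between estimated water depths and observed USGS gauge values at matching locations (the example numbers are placeholders, not the study's data):

        import numpy as np

        def rmse(estimated, observed):
            """Root mean square error (m) between estimated and observed water depths."""
            estimated, observed = np.asarray(estimated, float), np.asarray(observed, float)
            return float(np.sqrt(np.mean((estimated - observed) ** 2)))

        # e.g. rmse([1.9, 2.4, 0.8], [2.1, 2.2, 0.7]) -> RMSE in metres over three check points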
  3. Deep learning algorithms are exceptionally valuable tools for collecting and analyzing the vast amounts of actionable flood data needed for catastrophe readiness. Convolutional neural networks (CNNs) are one form of deep learning algorithm widely used in computer vision that can be used to study flood images and assign learnable weights to various objects in the image. Here, we leveraged and discussed how connected vision systems can be used to embed cameras, image processing, CNNs, and data connectivity capabilities for flood label detection. We built a training database service of >9000 images (an image annotation service), including image geolocation information, by streaming relevant images from social media platforms, Department of Transportation (DOT) 511 traffic cameras, the US Geological Survey (USGS) live river cameras, and images downloaded from search engines. We then developed a new Python package called "FloodImageClassifier" to classify and detect objects within the collected flood images. "FloodImageClassifier" includes various CNN architectures such as YOLOv3 (You Only Look Once version 3), Fast R-CNN (Region-based CNN), Mask R-CNN, SSD MobileNet (Single Shot MultiBox Detector MobileNet), and EfficientDet (Efficient Object Detection) to perform both object detection and segmentation simultaneously. Canny edge detection and aspect-ratio concepts are also included in the package for floodwater level estimation and classification. The pipeline is designed to train on a large number of images and to calculate floodwater levels and inundation areas, which can be used to identify flood depth, severity, and risk. "FloodImageClassifier" can be embedded with the USGS live river cameras and 511 traffic cameras to monitor river and road flooding conditions and provide early intelligence to emergency response authorities in real time.
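    A hedged sketch of the Canny-edge idea mentioned above: locating the dominant horizontal edge in a fixed camera frame as a rough waterline proxy. The function name and the OpenCV-based logic are illustrative assumptions, not the FloodImageClassifier API:

        import cv2
        import numpy as np

        def estimate_waterline_row(image_path, low=50, high=150):
            """Return the image row with the strongest edge response, a rough
            proxy for the waterline in a fixed camera view (illustrative only)."""
            gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            edges = cv2.Canny(gray, low, high)        # binary edge map
            return int(np.argmax(edges.sum(axis=1)))  # row with the most edge pixels

        # A water level could then be derived by relating this row to the known
        # pixel height of a reference object (e.g., a stop sign) in the same view.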
  4. The identification of flood hazards during emerging public safety crises such as hurricanes or flash floods is an invaluable tool for first responders and managers, yet it remains out of reach in any comprehensive sense when using traditional remote-sensing methods, due to cloud cover and other data-sourcing restrictions. While many remote-sensing techniques exist for floodwater identification and extraction, few studies demonstrate an up-to-date understanding of better techniques for isolating the spectral properties of floodwaters from collected data, which vary for each event. This study introduces a novel method for delineating near-real-time inundation flood extent and depth mapping for storm events, using an inexpensive unmanned aerial vehicle (UAV)-based multispectral remote-sensing platform, which was designed to be applicable to urban environments under a wide range of atmospheric conditions. The methodology is demonstrated using an actual flooding event, Hurricane Zeta, during the 2020 Atlantic hurricane season. Referred to as the UAV Floodwater Inundation and Depth Mapper (UAV-FIDM), the methodology consists of three major components: aerial data collection, processing, and flood inundation (water surface extent) and depth mapping. The model results for inundation and depth were compared to a validation dataset and ground-truthing data, respectively. The results suggest that UAV-FIDM is able to predict inundation with a total error (sum of omission and commission errors) of 15.8% and produce flooding depth estimates that are accurate enough to be actionable for determining road closures during a real event.
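    A brief sketch of how a total error of this kind can be computed from a predicted flood mask and a validation mask, using one common convention (omission = missed flooded pixels, commission = falsely flooded pixels, both relative to the reference flooded area); the paper's exact definitions may differ:

        import numpy as np

        def total_error(pred, truth):
            """Omission, commission, and their sum for binary flood masks."""
            pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
            flooded = truth.sum()
            omission = (truth & ~pred).sum() / flooded     # reference flood missed by the prediction
            commission = (pred & ~truth).sum() / flooded   # predicted flood absent from the reference
            return omission, commission, omission + commission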

     
  5. Research in different agricultural sectors, including crop loss estimation during floods and yield estimation, relies substantially on inundation information. Spaceborne remote sensing has widely been used in the mapping and monitoring of floods. However, the inability of optical remote sensing to penetrate clouds and the scarcity of fine-temporal-resolution SAR data hinder flood mapping in many cases. Soil Moisture Active Passive (SMAP) Level 4 products, which are model-derived soil moisture data based on SMAP observations and are available at 3-h intervals, can offer an intermediate but effective solution. This study maps flood progression in croplands by incorporating SMAP surface soil moisture, soil physical properties, and national floodplain information. Soil moisture above the effective soil porosity is a direct indication of soil saturation, and soil moisture also increases considerably during a flood event. Therefore, this approach took into account three conditions to map the flooded pixels: a minimum increment of 0.05 m³/m³ in soil moisture from pre-flood to post-flood conditions, soil moisture above the effective soil porosity, and persistence of the saturation condition for 72 consecutive hours. Results indicated that the SMAP-derived maps were able to successfully capture most of the flooded areas in the reference maps in the majority of cases, though with some degree of overestimation (due to the coarse spatial resolution of SMAP). Finally, the inundated croplands are extracted from the saturated areas using the Special Flood Hazard Areas (SFHA) of the Federal Emergency Management Agency (FEMA) and the Cropland Data Layer (CDL). The flood maps extracted from SMAP data are validated against FEMA-declared affected counties as well as flood maps from other sources.
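    A compact sketch of the three flagging conditions above, applied per pixel to a 3-hourly SMAP soil-moisture time series (the array shapes and the effective-porosity grid are illustrative assumptions):

        import numpy as np

        def smap_flood_mask(sm_series, sm_pre, porosity, hours_per_step=3,
                            rise_thresh=0.05, hold_hours=72):
            """sm_series: (time, rows, cols) soil moisture at 3-h steps (m3/m3);
            sm_pre: pre-flood soil moisture grid; porosity: effective porosity grid."""
            meets = ((sm_series - sm_pre) >= rise_thresh) & (sm_series >= porosity)  # conditions 1 and 2
            run = np.zeros(sm_series.shape[1:], dtype=int)
            longest = np.zeros_like(run)
            for t in range(sm_series.shape[0]):            # condition 3: saturation must persist
                run = np.where(meets[t], run + 1, 0)
                longest = np.maximum(longest, run)
            return longest >= hold_hours // hours_per_step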