- Award ID(s): 1800768
- NSF-PAR ID: 10409067
- Date Published:
- Journal Name: Frontiers in Remote Sensing
- Volume: 3
- ISSN: 2673-6187
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract. High-resolution remote sensing imagery has been increasingly used for flood applications. Different methods have been proposed for flood extent mapping from high-resolution data, ranging from water index computation to image classification. Among these methods, deep learning has shown promising results for flood extent extraction; however, these two-dimensional (2D) image classification methods cannot directly provide water level measurements. This paper presents an integrated approach to extract the flood extent in three dimensions (3D) from UAV data by combining a 2D deep-learning-based flood map with a 3D point cloud generated by Structure from Motion (SfM). We fine-tuned a pretrained Visual Geometry Group 16 (VGG-16) based fully convolutional model to create a 2D inundation map. The 2D classified map was overlaid on the SfM-based 3D point cloud to create a 3D flood map. The floodwater depth was estimated by subtracting a pre-flood Digital Elevation Model (DEM) from the SfM-based DEM. The results show that the proposed method is efficient in creating a 3D flood extent map to support emergency response and recovery activities during a flood event.
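A minimal sketch of the depth-estimation step described above: subtract a pre-flood DEM from an SfM-derived DEM and keep only the cells the 2D classifier labels as water. It assumes co-registered rasters on the same grid; file names and the mask encoding (1 = water) are hypothetical.

```python
# Depth = SfM DEM minus pre-flood DEM, masked by the 2D flood classification.
import numpy as np
import rasterio

with rasterio.open("sfm_dem.tif") as src:
    sfm_dem = src.read(1).astype("float32")
    profile = src.profile
with rasterio.open("preflood_dem.tif") as src:
    pre_dem = src.read(1).astype("float32")
with rasterio.open("flood_mask_2d.tif") as src:   # output of the 2D classifier
    water = src.read(1) == 1

depth = np.where(water, sfm_dem - pre_dem, np.nan)   # floodwater depth (m)
depth[depth < 0] = np.nan                            # drop negative residuals

profile.update(dtype="float32", nodata=np.nan)
with rasterio.open("flood_depth.tif", "w", **profile) as dst:
    dst.write(depth, 1)
```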
- Flood occurrence is increasing due to urban expansion and extreme weather such as hurricanes; hence, research on inundation monitoring and mapping methods has grown in order to reduce the severe impacts of flood disasters. This research studies and compares two methods for inundation depth estimation using UAV images and topographic data. The methods consist of three main stages: (1) extracting flooded areas and creating 2D inundation polygons using deep learning; (2) reconstructing the 3D water surface using the polygons and topographic data; and (3) deriving a water depth map from the 3D reconstructed water surface and a pre-flood DEM. The two methods differ in how they reconstruct the 3D water surface (stage 2). The first method uses structure from motion (SfM) to create a point cloud of the area from overlapping UAV images, and the water polygons resulting from stage 1 are used to classify the water points in the cloud. The second method reconstructs the water surface by intersecting the water polygons with a pre-flood DEM created from pre-flood LiDAR data. We evaluate the proposed methods for inundation depth mapping over the Town of Princeville during a flood event caused by Hurricane Matthew. The methods are compared and validated using USGS gauge water level data acquired during the flood event. The RMSEs for water depth were 0.34 m for the SfM method and 0.26 m for the integrated method based on deep learning and the DEM.
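A sketch of the idea behind the second method's stage 2, under the assumption that the pre-flood DEM elevation along a water polygon's boundary approximates the water surface elevation, with depth inside the polygon given by that elevation minus the ground elevation. The file name and the 1 m boundary buffer are hypothetical choices, not the authors' exact workflow.

```python
# Approximate a water-surface elevation from the DEM along the polygon edge,
# then difference it against the DEM inside the polygon to get depth.
import numpy as np
import rasterio
from rasterio.features import rasterize

with rasterio.open("preflood_dem.tif") as src:
    dem = src.read(1).astype("float32")
    transform = src.transform

def depth_from_polygon(polygon):
    """Estimate a depth grid for one shapely inundation polygon."""
    inside = rasterize([(polygon, 1)], out_shape=dem.shape,
                       transform=transform, fill=0).astype(bool)
    ring = rasterize([(polygon.boundary.buffer(1.0), 1)], out_shape=dem.shape,
                     transform=transform, fill=0).astype(bool)
    wse = np.nanmedian(dem[ring])              # water surface elevation (m)
    depth = np.where(inside, wse - dem, np.nan)
    depth[depth < 0] = np.nan                  # keep only positive depths
    return depth
```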
- Deep learning algorithms are exceptionally valuable tools for collecting and analyzing actionable flood data and assessing disaster readiness. Convolutional neural networks (CNNs) are one family of deep learning algorithms widely used in computer vision; they can be used to study flood images and assign learnable weights to the various objects in an image. Here, we discuss how connected vision systems can embed cameras, image processing, CNNs, and data connectivity for flood label detection. We built a training database service of more than 9,000 images (an image annotation service), including image geolocation information, by streaming relevant images from social media platforms, Department of Transportation (DOT) 511 traffic cameras, US Geological Survey (USGS) live river cameras, and images downloaded from search engines. We then developed a new Python package called "FloodImageClassifier" to classify and detect objects within the collected flood images. "FloodImageClassifier" includes various CNN architectures such as YOLOv3 (You Only Look Once version 3), Fast R-CNN (Region-based CNN), Mask R-CNN, SSD MobileNet (Single Shot MultiBox Detector MobileNet), and EfficientDet (Efficient Object Detection) to perform object detection and segmentation simultaneously. Canny edge detection and aspect-ratio concepts are also included in the package for floodwater level estimation and classification. The pipeline is designed to train on a large number of images and to calculate floodwater levels and inundation areas, which can be used to identify flood depth, severity, and risk. "FloodImageClassifier" can be embedded with USGS live river cameras and 511 traffic cameras to monitor river and road flooding conditions and provide early intelligence to emergency response authorities in real time.
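An illustrative sketch of the Canny-edge / aspect-ratio idea for rough water level estimation. This is not the FloodImageClassifier API; the bounding box (assumed to come from a detector such as the ones listed above) and the expected aspect ratio of the reference object are hypothetical inputs.

```python
# Estimate how much of a partially submerged reference object is still visible,
# using Canny edges inside its detected bounding box and a known aspect ratio.
import cv2

EXPECTED_ASPECT = 2.2   # assumed full height / width of the reference object

def visible_fraction(image_path, box):
    """Return the estimated fraction of the object visible above water.

    box: (x, y, w, h) bounding box of the partially submerged object.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = box
    roi = img[y:y + h, x:x + w]
    edges = cv2.Canny(roi, 50, 150)            # edge map of the detected object
    ys, xs = edges.nonzero()
    if len(xs) == 0:
        return 0.0
    visible_h = ys.max() - ys.min()            # visible height from edge pixels
    visible_w = max(xs.max() - xs.min(), 1)
    observed_aspect = visible_h / visible_w
    return min(observed_aspect / EXPECTED_ASPECT, 1.0)
```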
- Abstract Flood modeling provides inundation estimates and improves disaster preparedness and response. Recent developments in hydrologic modeling and inundation mapping enable the creation of such estimates in near real time. To quantify their performance, these estimates need to be compared to measurements collected during historical events. We present an application of a flood mapping system based on the National Water Model and the Height Above Nearest Drainage (HAND) method to Hurricane Harvey. The outputs are validated with high-water marks collected to record the highest water levels during the flood. We use these points to compute elevation-related variables and flood extents and to measure the quality of the estimates. To improve the performance of the method, we calibrate the roughness coefficient based on stream order. We also use lidar data with a workflow named GeoFlood, and we compare the modeled inundation to that recorded by the high-water marks and to the maximum inundation extent provided by the Dartmouth Flood Observatory, which is based on remotely sensed data from multiple sources. The results show that our mapping system estimates local water depth with a mean error of about 0.5 m and that the inundation extent covers over 90% of that derived from high-water marks. Using a calibrated roughness coefficient and lidar data reduces the mean error in flood depth but does not affect the inundation extent estimation as much.
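A minimal sketch of HAND-based inundation mapping as described above: cells whose Height Above Nearest Drainage is below the forecast stage for the reach are flooded, and local depth is the stage minus HAND. The file name and stage value are hypothetical; a full workflow would apply a per-reach stage from a rating curve rather than a single constant.

```python
# Flood depth from a HAND raster and an assumed stage height for the reach.
import numpy as np
import rasterio

STAGE_M = 4.2   # assumed stage (m above the nearest drainage) for the reach

with rasterio.open("hand.tif") as src:
    hand = src.read(1).astype("float32")
    profile = src.profile

depth = STAGE_M - hand
depth[depth < 0] = np.nan          # cells above the water surface stay dry

profile.update(dtype="float32", nodata=np.nan)
with rasterio.open("flood_depth_hand.tif", "w", **profile) as dst:
    dst.write(depth, 1)
```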
- The identification of flood hazards during emerging public safety crises such as hurricanes or flash floods is an invaluable tool for first responders and managers, yet it remains out of reach in any comprehensive sense when traditional remote-sensing methods are used, due to cloud cover and other data-sourcing restrictions. While many remote-sensing techniques exist for floodwater identification and extraction, few studies demonstrate an up-to-date understanding of better techniques for isolating the spectral properties of floodwaters from collected data, which vary for each event. This study introduces a novel method for delineating near-real-time flood inundation extent and depth for storm events, using an inexpensive unmanned aerial vehicle (UAV)-based multispectral remote-sensing platform designed to be applicable in urban environments under a wide range of atmospheric conditions. The methodology is demonstrated using an actual flooding event, Hurricane Zeta during the 2020 Atlantic hurricane season. Referred to as the UAV Floodwater Inundation and Depth Mapper (FIDM), the methodology consists of three major components: aerial data collection, data processing, and flood inundation (water surface extent) and depth mapping. The model results for inundation and depth were compared to a validation dataset and ground-truthing data, respectively. The results suggest that UAV-FIDM is able to predict inundation with a total error (the sum of omission and commission errors) of 15.8% and to produce flood depth estimates that are accurate enough to be actionable for determining road closures during a real event.
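A short sketch of the accuracy metric cited above: omission error (flooded reference cells the map misses) plus commission error (cells wrongly mapped as flooded), computed from binary flood masks. The array names are hypothetical.

```python
# Omission, commission, and total error from predicted vs. reference masks.
import numpy as np

def total_error(predicted, reference):
    """Return (omission, commission, total) error as fractions."""
    predicted = predicted.astype(bool)
    reference = reference.astype(bool)
    omission = np.sum(reference & ~predicted) / np.sum(reference)
    commission = np.sum(predicted & ~reference) / np.sum(predicted)
    return omission, commission, omission + commission
```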