

Title: A Systematic Review on Case Studies of Remote-Sensing-Based Flood Crop Loss Assessment
This article reviews case studies that have used remote sensing data for different aspects of flood crop loss assessment. The review systematically identifies a total of 62 empirical case studies from the past three decades. The number of case studies has grown in recent years because of the increased availability of remote sensing data. In the past, flood crop loss assessment was highly generalized and time-intensive because of its dependence on survey-based data collection. The availability of remote sensing data makes rapid flood loss assessment possible. This study groups flood crop loss assessment approaches into three broad categories: a flood-intensity-based approach, a crop-condition-based approach, and a hybrid of the two. Flood crop damage assessment is more precise when both flood information and crop condition are incorporated in damage assessment models. This review discusses the strengths and weaknesses of the different loss assessment approaches. Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat are the dominant sources of optical remote sensing data for flood crop loss assessment. Remote-sensing-based vegetation indices (VIs) have been used extensively for crop damage assessment in recent years. Many case studies also relied on microwave remote sensing data because of the inability of optical remote sensing to see through clouds. The recent free-of-charge availability of synthetic-aperture radar (SAR) data from Sentinel-1 will further advance flood crop damage assessment. Data for the validation of loss assessment models are scarce; recent advances in data archiving and distribution through web technologies will aid loss assessment and validation.
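Crop-condition-based approaches typically compare a vegetation index such as NDVI before and after a flood and flag pixels where the index drops sharply. A minimal sketch of that idea follows; the `crop_damage_fraction` helper and the 0.2 drop threshold are illustrative assumptions, not values taken from any reviewed study:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def crop_damage_fraction(ndvi_pre, ndvi_post, drop_threshold=0.2, crop_mask=None):
    """Flag pixels whose NDVI dropped by more than `drop_threshold` between
    pre- and post-flood images; return the mask and the damaged fraction.
    `crop_mask` (optional boolean array) restricts the count to cropland."""
    drop = np.asarray(ndvi_pre, dtype=float) - np.asarray(ndvi_post, dtype=float)
    damaged = drop > drop_threshold
    if crop_mask is not None:
        damaged &= crop_mask
        total = crop_mask.sum()
    else:
        total = damaged.size
    return damaged, damaged.sum() / total
```

In practice the threshold would be calibrated against field survey data, which is exactly the validation data the review notes is scarce.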
Award ID(s):
1739705
NSF-PAR ID:
10194019
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Agriculture
Volume:
10
Issue:
4
ISSN:
2077-0472
Page Range / eLocation ID:
131
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Flooding is one of the leading natural-disaster threats to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal dimensions. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetric data acquisition platform that can quickly deliver high-resolution imagery because of its cost-effectiveness, ability to fly at low altitudes, and ability to enter hazardous areas. Different image classification methods, including Support Vector Machines (SVMs), have been used for flood extent mapping. In recent years, there has been significant improvement in remote sensing image classification using Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks, including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate its performance on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on datasets containing only one hundred training samples and resulted in highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with the results obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs could extract flooded areas more precisely from UAV images than traditional classifiers such as SVMs. The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20%, and 89%, respectively.
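Per-class accuracies like those reported above are read off a confusion matrix. A minimal sketch, assuming a matrix whose rows are reference classes and columns are predicted classes (the function names are illustrative, not from the study):

```python
import numpy as np

def class_accuracies(confusion):
    """Per-class producer's accuracy (recall): correct predictions for each
    reference class divided by that class's row total."""
    confusion = np.asarray(confusion, dtype=float)
    return np.diag(confusion) / confusion.sum(axis=1)

def overall_accuracy(confusion):
    """Overall accuracy: trace (all correct) over the grand total."""
    confusion = np.asarray(confusion, dtype=float)
    return np.trace(confusion) / confusion.sum()
```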
  2. Research in different agricultural sectors, including crop loss estimation during floods and yield estimation, relies substantially on inundation information. Spaceborne remote sensing has been widely used in the mapping and monitoring of floods. However, the inability of optical remote sensing to penetrate clouds and the scarcity of fine-temporal-resolution SAR data hinder flood mapping in many cases. Soil Moisture Active Passive (SMAP) level 4 products, which are model-driven soil moisture data derived from SMAP observations and are available at 3-h intervals, can offer an intermediate but effective solution. This study maps flood progression in croplands by incorporating SMAP surface soil moisture, soil physical properties, and national floodplain information. Soil moisture above the effective soil porosity is a direct indication of soil saturation, and soil moisture also increases considerably during a flood event. Therefore, this approach used three conditions to map flooded pixels: an increase of at least 0.05 m3 m−3 in soil moisture from the pre-flood to the post-flood condition, soil moisture above the effective soil porosity, and persistence of the saturation condition for 72 consecutive hours. Results indicated that the SMAP-derived maps successfully captured most of the flooded areas in the reference maps in the majority of cases, though with some degree of overestimation (due to the coarse spatial resolution of SMAP). Finally, inundated croplands were extracted from the saturated areas using the Special Flood Hazard Areas (SFHA) of the Federal Emergency Management Agency (FEMA) and the cropland data layer (CDL). The flood maps extracted from SMAP data were validated against FEMA-declared affected counties as well as flood maps from other sources.
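The three mapping conditions described above can be sketched as a per-pixel rule over the 3-hourly SMAP time series. This is a simplified illustration under stated assumptions: the function name and array layout are invented here, and 72 h is represented as 24 consecutive 3-h steps:

```python
import numpy as np

def flood_mask(sm_series, sm_pre, porosity, min_increase=0.05, hold_steps=24):
    """Flag pixels as flooded when (1) soil moisture rose by at least
    `min_increase` m3/m3 over the pre-flood baseline, (2) moisture meets or
    exceeds the effective porosity (saturation), and (3) both conditions hold
    for `hold_steps` consecutive observations (24 x 3 h = 72 h).
    `sm_series` has shape (time, pixels); `sm_pre` has shape (pixels,)."""
    sm_series = np.asarray(sm_series, dtype=float)
    saturated = sm_series >= porosity                 # condition 2
    increased = (sm_series - sm_pre) >= min_increase  # condition 1
    candidate = saturated & increased
    # condition 3: a run of `hold_steps` consecutive True values per pixel
    flooded = np.zeros(sm_series.shape[1], dtype=bool)
    for start in range(sm_series.shape[0] - hold_steps + 1):
        flooded |= candidate[start:start + hold_steps].all(axis=0)
    return flooded
```

The flooded mask would then be intersected with the SFHA floodplain polygons and the CDL cropland mask to isolate inundated cropland.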
  3. Inundation mapping is a critical task for damage assessment, emergency management, and prioritizing relief efforts during a flooding event. Remote sensing has been an effective tool for interpreting and analyzing water bodies and detecting floods over the past decades. In recent years, deep learning algorithms such as convolutional neural networks (CNNs) have demonstrated promising performance in remote sensing image classification for many applications, including inundation mapping. Unlike conventional algorithms, deep learning can learn features automatically from large datasets. This research compares and investigates the performance of two state-of-the-art methods for 3D inundation mapping: deep-learning-based image analysis and the Geomorphic Flood Index (GFI). The first method, deep learning image analysis, involves three steps: 1) image classification to delineate flood boundaries, 2) integration of the flood boundaries and topography data to create a three-dimensional (3D) water surface, and 3) comparison of the 3D water surface with pre-flood topography to estimate floodwater depth. The second method, the GFI, involves three phases: 1) calculating river basin morphological information, such as river height (hr) and elevation difference (H), 2) calibrating and measuring the GFI to delineate flood boundaries, and 3) calculating the coefficient parameter (α) and correcting the value of hr to estimate inundation depth. The methods were implemented to generate 3D inundation maps over Princeville, North Carolina, United States during Hurricane Matthew in 2016. The deep learning method demonstrated better performance, with a root mean square error (RMSE) of 0.26 m for water depth. It also achieved about 98% accuracy in delineating the flood boundaries using UAV imagery. This approach is efficient for extracting and creating 3D flood extent maps at different scales to support emergency response and recovery activities during a flood event.
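Step 3 of the deep learning method, differencing the 3D water surface against pre-flood topography, reduces to a raster subtraction with negative (dry) cells clipped to zero, scored against reference observations with RMSE. A minimal sketch (the helper names are illustrative; a real workflow operates on georeferenced rasters):

```python
import numpy as np

def water_depth(water_surface, dem):
    """Estimate floodwater depth as water-surface elevation minus pre-flood
    terrain elevation; cells where terrain sits above the water surface are
    clipped to zero (dry)."""
    depth = np.asarray(water_surface, dtype=float) - np.asarray(dem, dtype=float)
    return np.clip(depth, 0.0, None)

def rmse(estimated, observed):
    """Root mean square error between estimated and observed depths."""
    diff = np.asarray(estimated, dtype=float) - np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```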
  4.
    Urban flooding is a major natural disaster that poses a serious threat to the urban environment. There is strong demand for mapping flood extent in near real-time for disaster rescue and relief missions, reconstruction efforts, and financial loss evaluation. Many efforts have been made to identify flooded zones with remote sensing data and image processing techniques. Unfortunately, the near real-time production of accurate flood maps over impacted urban areas has not been well investigated, due to three major issues. (1) Satellite imagery with high spatial resolution over urban areas usually has a nonhomogeneous background because of different types of objects such as buildings, moving vehicles, and road networks; as such, classical machine learning approaches can hardly model the spatial relationship between sample pixels in the flooded area. (2) Handcrafted features are usually required as input for conventional flood mapping models, which may not fully exploit the underlying patterns in the large volume of available data. (3) High-resolution optical imagery often has varied pixel digital numbers (DNs) for the same ground objects as a result of highly inconsistent illumination conditions during a flood. Accordingly, traditional flood mapping methods have major limitations in generalizing to new testing data. To address these issues in urban flood mapping, we developed a patch similarity convolutional neural network (PSNet) using satellite multispectral surface reflectance imagery, captured before and after flooding, with a spatial resolution of 3 meters. We used spectral reflectance instead of raw pixel DNs so that the influence of inconsistent illumination caused by varied weather conditions at the time of data collection can be greatly reduced; such consistent spectral reflectance data also enhance the generalization capability of the proposed model.
Experiments on the high-resolution imagery before and after the urban flooding events (i.e., the 2017 Hurricane Harvey and the 2018 Hurricane Florence) showed that the developed PSNet can produce urban flood maps with consistently high precision, recall, F1 score, and overall accuracy compared with baseline classification models including support vector machine, decision tree, random forest, and AdaBoost, which were often poor in either precision or recall. The study paves the way to fusing bi-temporal remote sensing images for near real-time, precise damage mapping associated with other types of natural hazards (e.g., wildfires and earthquakes).
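The bi-temporal input that a patch-based change classifier consumes can be sketched as co-located pre/post reflectance patches stacked along the channel axis. This is a simplified illustration only; the actual PSNet architecture, patch size, and input layout are not specified in the abstract:

```python
import numpy as np

def bitemporal_patches(pre, post, patch=3):
    """Pair co-located patches from pre- and post-flood reflectance images of
    shape (H, W, bands), stacking each pair along the channel axis to form
    the bi-temporal samples a patch-based change classifier would consume.
    Returns the sample array and the list of patch-center coordinates."""
    h, w, _ = pre.shape
    r = patch // 2
    pairs, centers = [], []
    for i in range(r, h - r):          # skip border pixels without a full patch
        for j in range(r, w - r):
            p = pre[i - r:i + r + 1, j - r:j + r + 1]
            q = post[i - r:i + r + 1, j - r:j + r + 1]
            pairs.append(np.concatenate([p, q], axis=-1))
            centers.append((i, j))
    return np.stack(pairs), centers
```

Each sample carries both the local spatial context (the patch) and the temporal change signal (pre vs. post channels), which addresses issues (1) and (2) noted above.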
  5. Surface-canopy-forming kelps provide the foundation for ecosystems that are ecologically, culturally, and economically important. However, these kelp forests are naturally dynamic systems that are also threatened by a range of global and local pressures. As a result, there is a need for tools that enable managers to reliably track changes in their distribution, abundance, and health in a timely manner. Remote sensing data availability has increased dramatically in recent years, and these data represent a valuable tool for monitoring surface-canopy-forming kelps. However, the choice of remote sensing data and analytic approach must be properly matched to management objectives and tailored to the physical and biological characteristics of the region of interest. This review identifies the remote sensing datasets and analyses best suited to address different management needs and environmental settings, using case studies from the west coast of North America. We highlight the importance of integrating different datasets and approaches to facilitate comparisons across regions and promote coordination of management strategies.