Title: Fast high-fidelity flood inundation map generation by super-resolution techniques
Abstract

Flooding is one of the most frequent natural hazards and causes more economic loss than all other natural hazards. Fast and accurate flood prediction is essential for preserving lives, minimizing economic damage, and reducing public health risks. However, current methods cannot achieve speed and accuracy simultaneously. Numerical methods can provide high-fidelity results, but they are time-consuming, particularly when high accuracy is pursued. Conversely, neural networks can provide results in a matter of seconds, but all existing neural-network approaches to flood map generation have shown low accuracy. This work combines the strengths of numerical methods and neural networks in a framework that can quickly and accurately produce high-fidelity flood inundation maps with detailed water depth information. We employ U-Net and generative adversarial network (GAN) models to recover the physics and information lost in ultra-fast, low-resolution numerical simulations, ultimately producing high-resolution, high-fidelity flood maps as the end result. In this study, both the U-Net and GAN models reduce the computation time for generating high-fidelity results from 7–8 h to 1 min while maintaining notably high accuracy.
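As an illustration of the paper's core idea, the sketch below shows a U-Net-style super-resolution network (in PyTorch) that takes a coarse water-depth grid from a fast, low-resolution numerical run and upsamples it to a finer grid. The 4x upscaling factor, channel widths, and grid sizes are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch (assumptions: 4x upscaling, single-channel depth grids,
# layer widths chosen for illustration -- not the paper's exact network).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SRUNet(nn.Module):
    """U-Net encoder/decoder followed by a 4x upsampling head."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        # Super-resolution head: pixel shuffle upsamples to 4x the input grid.
        self.head = nn.Sequential(
            nn.Conv2d(32, 32 * 16, 3, padding=1), nn.PixelShuffle(4),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        e1 = self.enc1(x)                   # full-resolution features
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # water depth at 4x resolution

# Example: a 64x64 coarse depth grid becomes a 256x256 high-resolution map.
coarse = torch.rand(1, 1, 64, 64)
print(SRUNet()(coarse).shape)  # torch.Size([1, 1, 256, 256])
```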

 
NSF-PAR ID: 10483838
Publisher / Repository: DOI PREFIX: 10.2166
Journal Name: Journal of Hydroinformatics
Volume: 26
Issue: 1
ISSN: 1464-7141
Pages: 319–336
Sponsoring Org: National Science Foundation
More Like this
  1. Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically very large, so they are not conveniently managed, transferred across a computer network, or stored on limited computer storage; image compression is therefore commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for both super-resolution enhancement of low-resolution images and characterization of both cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformation (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show substantial enhancement in image quality, with the peak signal-to-noise ratio and structural similarity of our network results exceeding 30 dB and 0.93, respectively, outperforming both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning; the jointly trained model's results are progressively improved and promising. We anticipate these custom CNNs can help resolve the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by enabling high-resolution images to be obtained from low-performance microscopes in remote, resource-constrained settings.
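For readers unfamiliar with the "aggregated residual transformation" idea behind SRGAN-ResNeXt, the following sketch shows a ResNeXt-style residual block built from a grouped 3x3 convolution, the kind of block such a super-resolution generator might stack. The channel width, cardinality, and block name are assumptions for illustration, not the architecture from the paper above.

```python
# Illustrative ResNeXt-style residual block for a super-resolution generator.
# Channel width, cardinality, and the class name are assumptions.
import torch
import torch.nn as nn

class ResNeXtSRBlock(nn.Module):
    def __init__(self, channels=64, cardinality=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 1),      # 1x1 mixing
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1,
                      groups=cardinality),         # grouped conv = aggregated transforms
            nn.PReLU(),
            nn.Conv2d(channels, channels, 1),      # 1x1 mixing back
        )

    def forward(self, x):
        # The skip connection preserves low-frequency content;
        # the body learns the high-frequency detail.
        return x + self.body(x)

x = torch.rand(1, 64, 48, 48)     # a feature map inside the generator
print(ResNeXtSRBlock()(x).shape)  # torch.Size([1, 64, 48, 48])
```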
  2. Urban flooding is a major natural disaster that poses a serious threat to the urban environment. Mapping the flood extent in near real-time is in high demand for disaster rescue and relief missions, reconstruction efforts, and financial loss evaluation. Many efforts have been made to identify flooded zones with remote sensing data and image processing techniques. Unfortunately, near real-time production of accurate flood maps over impacted urban areas has not been well investigated, for three major reasons. (1) Satellite imagery with high spatial resolution over urban areas usually has a nonhomogeneous background due to different types of objects such as buildings, moving vehicles, and road networks, so classical machine learning approaches can hardly model the spatial relationship between sample pixels in the flooded area. (2) Conventional flood mapping models usually require handcrafted features as input, which may not fully exploit the underlying patterns in the large amount of available data. (3) High-resolution optical imagery often has varied pixel digital numbers (DNs) for the same ground objects as a result of highly inconsistent illumination conditions during a flood, so traditional flood mapping methods generalize poorly to testing data. To address these issues, we developed a patch similarity convolutional neural network (PSNet) using satellite multispectral surface reflectance imagery acquired before and after flooding at a spatial resolution of 3 meters. We used spectral reflectance instead of raw pixel DNs so that the influence of inconsistent illumination caused by varied weather conditions at the time of data collection is greatly reduced; such consistent spectral reflectance data also enhance the generalization capability of the proposed model. Experiments on high-resolution imagery before and after urban flooding events (the 2017 Hurricane Harvey and the 2018 Hurricane Florence) showed that PSNet produces urban flood maps with consistently high precision, recall, F1 score, and overall accuracy compared with baseline classifiers including support vector machine, decision tree, random forest, and AdaBoost, which were often poor in either precision or recall. The study paves the way for fusing bi-temporal remote sensing images for near real-time, precise damage mapping associated with other types of natural hazards (e.g., wildfires and earthquakes).
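To make the patch-based, bi-temporal idea concrete, here is a minimal sketch of a CNN that stacks pre- and post-flood reflectance patches along the channel axis and predicts a flood probability for each patch. This is not the published PSNet; the band count, patch size, and layer widths are assumptions.

```python
# Minimal sketch (not the published PSNet): a patch-pair CNN that concatenates
# pre- and post-flood multispectral reflectance patches and predicts a
# flood / non-flood probability for the patch center.
import torch
import torch.nn as nn

class PatchPairCNN(nn.Module):
    def __init__(self, bands=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2 * bands, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),        # summarize the whole patch
        )
        self.classifier = nn.Linear(64, 1)  # flood logit

    def forward(self, pre, post):
        x = torch.cat([pre, post], dim=1)   # compare pre/post reflectance jointly
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(z))

pre = torch.rand(8, 4, 9, 9)    # surface reflectance patches before flooding
post = torch.rand(8, 4, 9, 9)   # the same locations after flooding
print(PatchPairCNN()(pre, post).shape)  # torch.Size([8, 1])
```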
  3. Pancreatic ductal adenocarcinoma (PDAC) presents a critical global health challenge, and early detection is crucial for improving the 5-year survival rate. Recent advances in medical imaging and computational algorithms offer potential solutions for early diagnosis. Deep learning, particularly in the form of convolutional neural networks (CNNs), has demonstrated success in medical image analysis tasks, including classification and segmentation. However, the limited availability of clinical data for training remains a significant obstacle. Data augmentation, generative adversarial networks (GANs), and cross-validation are potential techniques to address this limitation and improve model performance, but effective solutions are still rare for 3D PDAC, where contrast is especially poor owing to the high heterogeneity of both tumor and background tissue. In this study, we developed a new GAN-based model, named 3DGAUnet, for generating realistic 3D CT images of PDAC tumors and pancreatic tissue; it provides the inter-slice connections that existing 2D CT image synthesis models lack. The transition to 3D models preserves contextual information across adjacent slices, improving efficiency and accuracy, especially for the low-contrast, challenging case of PDAC. PDAC's characteristics, such as an iso-attenuating or hypodense appearance and the lack of well-defined margins, make learning tumor shape and texture difficult. To overcome these challenges and improve the performance of 3D GAN models, our innovation was to use a 3D U-Net architecture for the generator, improving shape and texture learning for PDAC tumors and pancreatic tissue. Thorough examination and validation across multiple datasets were conducted on the developed 3D GAN model to ascertain its efficacy and applicability in clinical contexts. Our approach offers a promising path toward the creative and synergistic methods urgently needed to combat PDAC. This GAN-based model has the potential to alleviate data scarcity, elevate the quality of synthesized data, and thereby facilitate the progression of deep learning models that enhance the accuracy and early detection of PDAC tumors, which could profoundly impact patient outcomes. Furthermore, the model can potentially be adapted to other types of solid tumors, making a significant contribution to medical image processing.

     
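The following is a hedged sketch of the 3D idea: a tiny 3D U-Net-style generator operating on volumetric patches, showing how 3D convolutions carry inter-slice context. The patch size, channel widths, and normalization choices are assumptions, not the published 3DGAUnet configuration.

```python
# Tiny 3D U-Net-style generator sketch for volumetric CT patches.
# Depth, widths, and the 32^3 patch size are illustrative assumptions.
import torch
import torch.nn as nn

def block3d(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = block3d(1, 16)
        self.down = nn.Sequential(nn.MaxPool3d(2), block3d(16, 32))
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec = block3d(32, 16)
        self.out = nn.Conv3d(16, 1, 1)

    def forward(self, x):
        e = self.enc(x)                     # full-resolution 3D features
        b = self.down(e)                    # half-resolution context across slices
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return torch.tanh(self.out(d))      # synthesized CT patch

cond = torch.rand(1, 1, 32, 32, 32)         # e.g. a noise or conditioning volume
print(TinyUNet3D()(cond).shape)             # torch.Size([1, 1, 32, 32, 32])
```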
  4. Blood velocity and red blood cell (RBC) distribution profiles in a capillary vessel cross-section in the microcirculation are generally complex and do not follow Poiseuille's parabolic or uniform pattern. Existing imaging techniques used to map large microvascular networks in vivo do not allow direct measurement of full 3D velocity and RBC concentration profiles, although such information is needed for accurate evaluation of physiological variables, such as the wall shear stress (WSS) and near-wall cell-free layer (CFL), that play critical roles in blood flow regulation, disease progression, angiogenesis, and hemostasis. Theoretical network flow models, often used for hemodynamic predictions in experimentally acquired images of microvascular networks, cannot provide the full 3D profiles either. In contrast, such information can be readily obtained from high-fidelity computational models that treat blood as a suspension of deformable RBCs. These models, however, are computationally expensive and cannot feasibly be extended to microvascular networks at large spatial scales up to the organ level. To overcome these limitations, here we present machine learning (ML) models that bypass such expensive computations yet provide highly accurate, full 3D profiles of the blood velocity, RBC concentration, WSS, and CFL in every vessel of the microvascular network. The ML models, which are based on artificial neural networks and convolution-based U-net models, predict hemodynamic quantities that compare very well against the true data while reducing the prediction time by several orders of magnitude. This study therefore paves the way for ML to make detailed and accurate hemodynamic predictions in spatially large, organ-scale microvascular networks.

     
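As a toy illustration of how such a surrogate can be queried, the sketch below uses a small fully connected network that maps per-vessel descriptors (e.g., diameter, mean flow, hematocrit) and a radial coordinate to local velocity and RBC concentration. The inputs, outputs, and layer sizes are illustrative assumptions, not the models described above.

```python
# Hypothetical surrogate: per-vessel descriptors + radius -> local hemodynamics.
# Input/output choices and layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn

profile_net = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),            # [velocity, RBC concentration] at that radius
)

# Query the cross-sectional profile of one vessel at 50 radial positions.
diameter, mean_flow, hematocrit = 12.0, 0.8, 0.25        # hypothetical values
r = torch.linspace(0.0, 1.0, 50).unsqueeze(1)            # normalized radius
features = torch.cat([torch.full_like(r, diameter),
                      torch.full_like(r, mean_flow),
                      torch.full_like(r, hematocrit), r], dim=1)
velocity, concentration = profile_net(features).unbind(dim=1)
print(velocity.shape, concentration.shape)               # torch.Size([50]) each
```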
  5. Among all natural hazards worldwide, floods occur most frequently. They occur in high-latitude regions, such as 82% of the area of North America; most of Russia; Norway, Finland, and Sweden in northern Europe; and China and Japan in Asia. River flooding due to ice jams may occur during the spring breakup season, and the Northeast and North Central regions, as well as some areas of the western United States, are especially affected by floods due to ice jams and snowmelt. In this study, observations from operational satellites are used to map and monitor floods due to ice jams and snowmelt. For a coarse-to-moderate-resolution sensor on operational satellites, such as the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the National Polar-orbiting Partnership (NPP) and the Joint Polar Satellite System (JPSS) series, or the Advanced Baseline Imager (ABI) on board the GOES-R series, a pixel is usually a mix of water and land. The water fraction, which can be estimated through mixed-pixel decomposition, provides more information than a binary label, and the flood map can be derived from the difference in water fraction after and before flooding (see the sketch following this entry). In high-latitude areas, where conventional observations are usually sparse, multiple observations can be available from polar-orbiting satellites during a single day, and river forecasters can observe ice movement, snowmelt status, and flood water evolution from satellite-based flood maps, which is very helpful for ice jam determination and flood prediction. The high temporal resolution of geostationary satellite imagery, such as that of the ABI, captures the greatest extent of flood signals, and multi-day composite flood products from higher-spatial-resolution imagery, such as VIIRS, can pinpoint areas of interest to uncover more detail. One unique feature of our JPSS and GOES-R flood products is that they include not only the normal flood type but also a special supra-snow/ice flood type, as well as snow and ice masks. Following the demonstrations in this study, the JPSS and GOES-R flood products, with ice and snow information, are expected to enable dynamic monitoring and prediction of floods due to ice jams and snowmelt for a wide range of end users.
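The water fraction retrieval described above can be illustrated with a two-endmember linear unmixing sketch: each pixel's reflectance is modeled as a mix of a water endmember and a land endmember, the mix is inverted to obtain a water fraction, and the flood signal is the increase in water fraction after the event. The single-band formulation, endmember reflectances, and threshold below are illustrative assumptions, not the operational JPSS/GOES-R algorithm.

```python
# Two-endmember linear unmixing sketch for per-pixel water fraction.
# Endmember reflectances, the single band, and the threshold are assumptions.
import numpy as np

def water_fraction(reflectance, r_water=0.02, r_land=0.25):
    # reflectance = f * r_water + (1 - f) * r_land  ->  solve for f.
    f = (reflectance - r_land) / (r_water - r_land)
    return np.clip(f, 0.0, 1.0)

before = np.array([[0.24, 0.20], [0.25, 0.10]])   # pre-event band reflectance
after = np.array([[0.24, 0.05], [0.08, 0.03]])    # post-event band reflectance

# Flood signal = increase in water fraction after the event.
flood_signal = water_fraction(after) - water_fraction(before)
flood_map = flood_signal > 0.3                    # hypothetical threshold
print(flood_map)
```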