Title: Enhanced Deep Learning Super-Resolution for Bathymetry Data
Spatial resolution is critical for observing and monitoring environmental phenomena. Acquiring high-resolution bathymetry data directly from satellites is not always feasible due to equipment limitations, so spatial data scientists and researchers turn to single-image super-resolution (SISR) methods that use deep learning to increase pixel density. While super-resolution residual networks (e.g., SR-ResNet) are promising for this purpose, several challenges remain: (1) Earth data such as bathymetry is expensive to acquire and relatively limited in volume; (2) model training must comply with domain knowledge; and (3) certain areas of interest require more accurate measurements than others. To address these challenges, we follow the transfer learning principle and study how to leverage an existing pre-trained super-resolution deep learning model, namely SR-ResNet, to generate high-resolution bathymetry data. We further enhance the SR-ResNet model with loss functions that encode domain knowledge, and, to improve performance in specific spatial areas, we add loss terms that increase the penalty within areas of interest. Our experiments show that our approaches achieve higher accuracy than most baseline models when evaluated with metrics including MSE, PSNR, and SSIM.
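A minimal sketch of this fine-tuning objective, written as PyTorch-style code under assumptions rather than as the authors' implementation: a pre-trained SR-ResNet-style generator is adapted to bathymetry tiles with the usual pixel-wise MSE plus an extra penalty inside areas of interest (AOI). The generator, data loader, mask format, and aoi_weight value are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def combined_loss(sr, hr, aoi_mask, aoi_weight=4.0):
    """Pixel MSE over the whole tile plus a weighted MSE over AOI pixels.

    sr, hr:   super-resolved and ground-truth depth grids, shape (B, 1, H, W)
    aoi_mask: binary mask of the same shape marking areas of interest
    """
    base = F.mse_loss(sr, hr)
    aoi = ((sr - hr) ** 2 * aoi_mask).sum() / aoi_mask.sum().clamp(min=1.0)
    return base + aoi_weight * aoi

def fine_tune(generator: nn.Module, loader, epochs: int = 10, lr: float = 1e-4):
    """Transfer learning: adapt a pre-trained super-resolution generator to bathymetry."""
    optimizer = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(epochs):
        for lr_tile, hr_tile, aoi_mask in loader:  # low-res input, high-res target, AOI mask
            sr_tile = generator(lr_tile)
            loss = combined_loss(sr_tile, hr_tile, aoi_mask)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Weighting the AOI term, rather than masking out everything else, keeps gradients flowing over the whole tile while biasing accuracy toward the regions that matter most.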
Award ID(s):
1942714, 2118285
PAR ID:
10400900
Journal Name:
2022 IEEE/ACM 9th International Conference on Big Data Computing, Applications and Technologies (BDCAT 2022)
Sponsoring Org:
National Science Foundation
More Like this
  1. Supervised deep-learning models have enabled super-resolution imaging in several microscopy modalities, increasing the lateral spatial bandwidth of the original input images beyond the diffraction limit. Despite their success, their practical application poses several challenges in terms of the amount and quality of training data, requiring the experimental acquisition of large, paired databases to build an accurate, generalizable model whose performance remains invariant to unseen data. Cycle-consistent generative adversarial networks (cycleGANs) are unsupervised models for image-to-image translation tasks that are trained on unpaired datasets. This paper introduces a cycleGAN framework specifically designed to increase the lateral resolution limit in confocal microscopy by training a cycleGAN model on low- and high-resolution unpaired confocal images of human glioblastoma cells. Training and testing performance of the cycleGAN model has been assessed with metrics such as background standard deviation, peak-to-noise ratio, and a customized frequency-content measure. Our cycleGAN model has been evaluated in terms of image fidelity and resolution improvement using a paired dataset, showing performance superior to other reported methods. This work highlights the efficacy and promise of cycleGAN models for super-resolution microscopy without paired training, paving the way for turning home-built low-resolution microscope systems into low-cost super-resolution instruments by means of unsupervised deep learning.
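As a rough illustration of the unpaired objective sketched in this abstract (assumed module interfaces, not the paper's code), a low-to-high-resolution cycleGAN combines an adversarial term on translated images with an L1 cycle-consistency term on reconstructed inputs:

```python
import torch
import torch.nn as nn

def lr_to_hr_cycle_loss(G_lr2hr: nn.Module, G_hr2lr: nn.Module, D_hr: nn.Module,
                        lr_img: torch.Tensor, lam: float = 10.0) -> torch.Tensor:
    fake_hr = G_lr2hr(lr_img)                      # translate low-res -> high-res
    rec_lr = G_hr2lr(fake_hr)                      # translate back high-res -> low-res
    adv = torch.mean((D_hr(fake_hr) - 1.0) ** 2)   # LSGAN-style adversarial term
    cyc = nn.functional.l1_loss(rec_lr, lr_img)    # cycle-consistency term
    return adv + lam * cyc
# The symmetric HR -> LR direction adds the analogous pair of terms.
```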
  2. We propose STSRNet, a deep-learning-based joint space-time super-resolution model for time-varying vector field data. Our method is designed to reconstruct a high-temporal-resolution (HTR) and high-spatial-resolution (HSR) vector field sequence from the corresponding low-resolution key frames. For large-scale simulations, only data from a subset of time steps, at reduced spatial resolution, can be stored for post-hoc analysis. In this paper, we leverage a deep learning model to capture the non-linear, complex changes of vector field data with a two-stage architecture: the first stage deforms a pair of low-spatial-resolution (LSR) key frames forward and backward to generate the intermediate LSR frames, and the second stage performs spatial super-resolution to output the high-resolution sequence. Our method is scalable and can handle different data sets. We demonstrate the effectiveness of our framework on several data sets through quantitative and qualitative evaluations.
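A loose sketch of the two-stage idea under assumed module interfaces (not the STSRNet release): stage one interpolates intermediate low-spatial-resolution frames between two key frames, and stage two spatially upsamples every frame of the resulting sequence.

```python
import torch
import torch.nn as nn

class SpaceTimeSR(nn.Module):
    def __init__(self, temporal_net: nn.Module, spatial_net: nn.Module, n_between: int):
        super().__init__()
        self.temporal_net = temporal_net   # predicts one intermediate LSR frame at time t
        self.spatial_net = spatial_net     # upsamples one LSR frame to high spatial resolution
        self.n_between = n_between

    def forward(self, key_a: torch.Tensor, key_b: torch.Tensor):
        # Stage 1: interpolate intermediate low-spatial-resolution frames
        # between the two key frames.
        lsr_seq = [key_a]
        for k in range(1, self.n_between + 1):
            t = k / (self.n_between + 1)
            lsr_seq.append(self.temporal_net(key_a, key_b, t))
        lsr_seq.append(key_b)
        # Stage 2: spatial super-resolution of every frame in the sequence.
        return [self.spatial_net(frame) for frame in lsr_seq]
```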
  3. Flood mapping on Earth imagery is crucial for disaster management, but its efficacy is hampered by the lack of high-quality training labels. Given high-resolution Earth imagery with coarse and noisy training labels, a base deep neural network model, and a spatial knowledge base with label constraints, our problem is to infer the true high-resolution labels while training the neural network parameters. Traditional methods are largely based on specific physical properties and thus fall short of capturing the rich domain constraints expressed by symbolic logic. Neural-symbolic models can capture rich domain knowledge, but existing methods do not address the unique spatial challenges inherent in flood mapping on high-resolution imagery. To fill this gap, we propose a spatial-logic-aware weakly supervised learning framework that integrates symbolic spatial logic inference into probabilistic learning in a weakly supervised setting. To reduce the time cost of logic inference over vast numbers of high-resolution pixels, we propose a multi-resolution spatial reasoning algorithm to infer true labels while training the neural network parameters. Evaluations on real-world flood datasets show that our model outperforms several baselines in prediction accuracy. The code is available at https://github.com/spatialdatasciencegroup/SLWSL.
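For illustration only (the paper's actual constraints and code are in the linked repository), one simple way to turn a spatial label constraint into a differentiable penalty is sketched below: if a pixel is predicted flooded, a neighbouring pixel at lower elevation should not be predicted dry.

```python
import torch
import torch.nn.functional as F

def constraint_penalty(flood_prob: torch.Tensor, elevation: torch.Tensor) -> torch.Tensor:
    """flood_prob, elevation: tensors of shape (B, 1, H, W)."""
    # Compare each pixel with its right-hand neighbour (one of the four directions).
    p, p_right = flood_prob[..., :-1], flood_prob[..., 1:]
    e, e_right = elevation[..., :-1], elevation[..., 1:]
    lower_right = (e_right < e).float()
    # Penalise "flooded here, but dry at the lower neighbour".
    violation = lower_right * F.relu(p - p_right)
    return violation.mean()
```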
  4. Numerical simulation of weather is resolution-constrained due to the high computational cost of integrating the coupled PDEs that govern atmospheric motion. For example, the most highly resolved numerical weather prediction models are limited to a resolution of approximately 3 km. However, many weather and climate impacts occur over much finer scales, especially in urban areas and areas of high topographic complexity such as mountains and coastal regions. Thus, several statistical methods have been developed in the climate community to downscale numerical model output to finer resolutions. This is conceptually similar to image super-resolution (SR) [1], and in this work we report the results of applying SR methods to the downscaling problem. In particular, we test the extent to which an SR method based on a Generative Adversarial Network (GAN) can recover a grid of wind speed from an artificially downsampled version, compared against a standard bicubic upsampling approach and another machine-learning-based approach, SR-CNN [1]. We use ESRGAN [2] to learn to downscale wind speeds by a factor of 4 from a coarse grid. We find that we can recover spatial details with higher fidelity than bicubic upsampling or SR-CNN. The bicubic and SR-CNN methods perform better than ESRGAN on coarse metrics such as MSE. However, the high-frequency power spectrum is captured remarkably well by ESRGAN, virtually identical to that of the real data, while bicubic and SR-CNN fidelity drops significantly at high frequencies. This indicates that the GAN-based SR is considerably better at matching the higher-order statistics of the dataset, consistent with the observation that its generated images are of superior visual quality compared with SR-CNN.
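The power-spectrum comparison described above can be made with a radially averaged spectrum of the predicted and reference wind fields; the helper below is an assumed sketch of such a metric, not the authors' evaluation code.

```python
import numpy as np

def radial_power_spectrum(field: np.ndarray) -> np.ndarray:
    """Radially averaged power spectrum of a 2-D field (e.g., a wind-speed grid)."""
    f = np.fft.fftshift(np.fft.fft2(field))
    power = np.abs(f) ** 2
    h, w = field.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)   # integer radius per pixel
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

# Usage sketch: similar low-frequency power with a large high-frequency gap would
# mirror the bicubic/SR-CNN behaviour reported above; a close match at all radii
# would mirror the ESRGAN result.
# spec_ref = radial_power_spectrum(wind_true)
# spec_sr  = radial_power_spectrum(wind_super_resolved)
```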
  5. Deep neural networks have been shown to be effective adaptive beamformers for ultrasound imaging. However, when training with traditional Lp-norm loss functions, model selection is difficult because lower loss values are not always associated with higher image quality. This ultimately limits the maximum achievable image quality with this approach and raises concerns about the optimization objective. In an effort to align the optimization objective with the image quality metrics of interest, we implemented a novel ultrasound-specific loss function based on the spatial lag-one coherence and signal-to-noise ratio of the delayed channel data in the short-time Fourier domain. We employed the R-Adam optimizer with lookahead and a cyclical learning rate to make training more robust to initialization and local minima, leading to better model performance and more reliable convergence. With our custom loss function and optimization scheme, we achieved a higher contrast-to-noise ratio, higher speckle signal-to-noise ratio, and more accurate contrast-ratio reconstruction than previous deep learning and delay-and-sum beamforming approaches.
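As a loose sketch of the coherence term mentioned above (assumed data shapes, not the published implementation), lag-one coherence can be computed as the normalised correlation between neighbouring receive channels of the delayed channel data:

```python
import torch

def lag_one_coherence(channel_data: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """channel_data: complex STFT of delayed channel signals, shape (..., n_channels, n_samples)."""
    a = channel_data[..., :-1, :]          # channels 0 .. N-2
    b = channel_data[..., 1:, :]           # channels 1 .. N-1
    num = (a * b.conj()).sum(dim=-1).real
    den = (a.abs().pow(2).sum(dim=-1) * b.abs().pow(2).sum(dim=-1)).sqrt()
    return (num / (den + eps)).mean()      # average coherence over neighbouring channel pairs

# A training objective might reward coherence in speckle regions, e.g.
# loss = 1.0 - lag_one_coherence(stft_of_delayed_channels)
```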