Title: Deep Probabilistic Imaging: Uncertainty Quantification and Multi-modal Solution Characterization for Computational Imaging
Computational image reconstruction algorithms generally produce a single image without any measure of uncertainty or confidence. Regularized Maximum Likelihood (RML) and feed-forward deep learning approaches for inverse problems typically focus on recovering a point estimate. This is a serious limitation when working with under-determined imaging systems, where it is conceivable that multiple image modes would be consistent with the measured data. Characterizing the space of probable images that explain the observational data is therefore crucial. In this paper, we propose a variational deep probabilistic imaging approach to quantify reconstruction uncertainty. Deep Probabilistic Imaging (DPI) employs an untrained deep generative model to estimate a posterior distribution of an unobserved image. This approach does not require any training data; instead, it optimizes the weights of a neural network to generate image samples that fit a particular measurement dataset. Once the network weights have been learned, the posterior distribution can be efficiently sampled. We demonstrate this approach in the context of interferometric radio imaging, which is used for black hole imaging with the Event Horizon Telescope, and compressed sensing Magnetic Resonance Imaging (MRI).
Award ID(s):
1935980
PAR ID:
10437836
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Proceedings of the AAAI Conference on Artificial Intelligence
Volume:
35
Issue:
3
ISSN:
2159-5399
Page Range / eLocation ID:
2628 to 2637
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
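The training idea described in the abstract can be pictured with a short sketch: an untrained normalizing flow maps Gaussian noise to images, and its weights are optimized so that generated samples fit the measurements while an entropy bonus keeps the approximate posterior from collapsing to a point estimate. The linear forward model, noise level, network sizes, and step counts below are toy assumptions for illustration, not the authors' implementation:

```python
# Minimal DPI-style sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

dim = 16                               # toy "image" dimension
A = torch.randn(8, dim)                # under-determined forward model
x_true = torch.rand(dim)
y = A @ x_true + 0.01 * torch.randn(8)

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer with a tractable log-determinant."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d // 2, 64), nn.ReLU(),
                                 nn.Linear(64, d))   # predicts scale & shift
    def forward(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        s, t = self.net(z1).chunk(2, dim=-1)
        s = torch.tanh(s)                            # bounded log-scale
        return torch.cat([z1, z2 * torch.exp(s) + t], dim=-1), s.sum(-1)

flow = nn.ModuleList([AffineCoupling(dim) for _ in range(4)])
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(128, dim)                        # batch of latent draws
    x, logdet = z, torch.zeros(128)
    for layer in flow:
        x, ld = layer(x)
        x = x.flip(-1)                               # mix halves (|det| = 1)
        logdet = logdet + ld
    data_fit = ((x @ A.T - y) ** 2).sum(-1) / (2 * 0.01 ** 2)
    loss = (data_fit - logdet).mean()   # data misfit minus entropy term
    opt.zero_grad(); loss.backward(); opt.step()
# After training, posterior samples are cheap: push fresh z through the flow.
```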
More Like this
  1. In this work, we study the deep image prior (DIP) for reconstruction problems in magnetic resonance imaging (MRI). DIP has become a popular approach for image reconstruction: it recovers a clean image by fitting an overparameterized convolutional neural network (CNN) to the corrupted or undersampled measurements. To improve the performance of DIP, recent work shows that using a reference image as the network input often leads to better reconstructions than vanilla DIP with a random input. However, obtaining the reference input image often requires supervision and is hence difficult in practice. In this work, we propose a self-guided reconstruction scheme that uses no training data other than the set of undersampled measurements to simultaneously estimate the network weights and the input (reference). We introduce a new regularization that aids the joint estimation by requiring the CNN to act as a powerful denoiser. The proposed self-guided method gives significantly improved image reconstructions for MRI with limited measurements compared to both conventional DIP and the reference-guided method, while eliminating the need for any additional data.
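For orientation, here is a minimal sketch of the vanilla DIP baseline the abstract starts from: a small CNN with a fixed random input is fitted to undersampled single-coil k-space data. The mask, network, and iteration count are toy assumptions, not the paper's setup:

```python
# Vanilla DIP for toy undersampled single-coil MRI (illustrative only).
import torch
import torch.nn as nn

n = 64
x_true = torch.rand(1, 1, n, n)                 # stand-in ground truth
mask = torch.rand(n, n) < 0.3                   # random k-space sampling mask
y = torch.fft.fft2(x_true) * mask               # undersampled measurements

net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)
z = torch.randn(1, 1, n, n)                     # fixed random network input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = net(z)                                  # current image estimate
    loss = (torch.abs(torch.fft.fft2(x) * mask - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# Early stopping acts as the implicit regularizer; the self-guided scheme
# in the abstract additionally learns the input jointly with the weights.
```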
  2. We propose a neural network architecture, combined with specific training and inference procedures, for linear inverse problems arising in computational imaging, to reconstruct the underlying image and to represent the uncertainty about the reconstruction. The proposed architecture is built from the model-based reconstruction perspective, which enforces data consistency and eliminates artifacts in an alternating manner. The training and inference procedures are based on performing approximate Bayesian analysis on the weights of the proposed network using a variational inference method. The proposed architecture, with the associated inference procedure, is capable of characterizing uncertainty while performing reconstruction with a model-based approach. We tested the proposed method on a simulated magnetic resonance imaging experiment and showed that it achieved adequate reconstruction capability and provided reliable uncertainty estimates, in the sense that regions assigned high uncertainty by the proposed method are likely to be the regions where reconstruction errors occur.
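The core mechanism, variational inference over network weights, can be sketched with a single Bayesian layer: each weight carries a learned mean and scale, forward passes sample weights by reparameterization, and a closed-form KL term regularizes toward a standard-normal prior. This is a generic Bayes-by-backprop-style sketch, not the paper's architecture:

```python
# One variational (Bayesian) linear layer; illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -5.0))
    def forward(self, x):
        sigma = F.softplus(self.rho)                   # positive scale
        w = self.mu + sigma * torch.randn_like(sigma)  # reparameterization
        return x @ w.T
    def kl(self):
        sigma = F.softplus(self.rho)
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights
        return 0.5 * (sigma**2 + self.mu**2 - 1 - 2 * torch.log(sigma)).sum()

# Training minimizes data misfit + beta * sum of kl() terms; at inference,
# repeated stochastic passes yield a mean image and a per-pixel spread.
```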
  3. Posterior sampling has been shown to be a powerful Bayesian approach for solving imaging inverse problems. The recent plug-and-play unadjusted Langevin algorithm (PnP-ULA) has emerged as a promising method for Monte Carlo sampling and minimum mean squared error (MMSE) estimation by combining physical measurement models with deep-learning priors specified using image denoisers. However, the intricate relationship between the sampling distribution of PnP-ULA and a mismatched data-fidelity term or denoiser has not been theoretically analyzed. We address this gap by proposing a posterior-L2 pseudometric and using it to quantify an explicit error bound for PnP-ULA under a mismatched posterior distribution. We numerically validate our theory on several inverse problems, such as sampling from Gaussian mixture models and image deblurring. Our results suggest that the sensitivity of the sampling distribution of PnP-ULA to a mismatch in the measurement model or the denoiser can be precisely characterized.
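The PnP-ULA iteration itself is compact enough to sketch: a gradient step on the Gaussian data-fidelity term, a denoiser-based approximation of the prior score, and injected Langevin noise. The step size, smoothing parameter, and the toy shrinkage "denoiser" in the usage comment are illustrative assumptions:

```python
# One PnP-ULA chain for a linear Gaussian model y = A x + noise (sketch).
import torch

def pnp_ula(y, A, sigma, denoiser, eps, n_iter=5000, gamma=1e-4):
    x = torch.zeros(A.shape[1])
    samples = []
    for k in range(n_iter):
        grad_nll = A.T @ (A @ x - y) / sigma**2    # grad of -log p(y|x)
        prior_score = (denoiser(x) - x) / eps      # Tweedie-style score proxy
        x = (x - gamma * grad_nll + gamma * prior_score
             + (2 * gamma) ** 0.5 * torch.randn_like(x))
        if k % 10 == 0:
            samples.append(x.clone())              # thinned Monte Carlo chain
    return torch.stack(samples)                    # sample mean ~ MMSE estimate

# e.g. xs = pnp_ula(y, A, sigma=0.1, denoiser=lambda v: 0.9 * v, eps=0.1)
```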
  4. In the problem of spotlight-mode airborne synthetic aperture radar (SAR) image formation, it is well known that data collected over a wide azimuthal angle violate the isotropic scattering property typically assumed. Many techniques have been proposed to account for this issue, including both full-aperture and sub-aperture methods based on filtering, regularized least squares, and Bayesian methods. A full-aperture method that uses a hierarchical Bayesian prior to incorporate appropriate speckle modeling and reduction was recently introduced to produce samples of the posterior density rather than a single image estimate. This uncertainty quantification information is more robust, as it can generate a variety of statistics for the scene. As proposed, however, the method was not well suited to large problems, since the sampling was inefficient. Moreover, it was not explicitly designed to mitigate the effects of the faulty isotropic scattering assumption. In this work we therefore propose a new sub-aperture SAR imaging method that uses a sparse Bayesian learning-type algorithm to more efficiently produce approximate posterior densities for each sub-aperture window. These estimates may be useful in and of themselves, or, when of interest, the statistics from these distributions can be combined to form a composite image. Furthermore, unlike the often-employed ℓp-regularized least squares methods, no user-defined parameters are required. Application-specific adjustments are made to reduce the typically burdensome runtime and storage requirements so that appropriately large images can be generated. Finally, this paper focuses on incorporating these techniques into the SAR image formation process itself, that is, starting from SAR phase history data, so that no additional processing errors are incurred. The advantages over existing SAR image formation methods are clearly demonstrated with numerical experiments using real-world data.
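The sparse Bayesian learning building block the abstract relies on follows a standard evidence-maximization loop for a linear model y = Φw + noise: per-coefficient precisions are updated until most coefficients are effectively pruned. This generic SBL sketch (with toy noise level and iteration count) is illustrative, not the paper's algorithm:

```python
# Generic sparse Bayesian learning loop (illustrative sketch).
import torch

def sbl(Phi, y, noise_var=1e-2, n_iter=50):
    m, n = Phi.shape
    alpha = torch.ones(n)                          # per-coefficient precisions
    for _ in range(n_iter):
        Sigma = torch.linalg.inv(torch.diag(alpha) + Phi.T @ Phi / noise_var)
        mu = Sigma @ Phi.T @ y / noise_var         # posterior mean of w
        gamma = 1 - alpha * torch.diag(Sigma)      # "well-determined" measure
        alpha = gamma / (mu**2 + 1e-12)            # fixed-point update
    return mu, Sigma                               # Gaussian posterior over w
```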
  5. Double electron-electron resonance (DEER) spectroscopy measures distance distributions between spin labels in proteins, yielding important structural and energetic information about conformational landscapes. Analysis of an experimental DEER signal in terms of a distance distribution is a nontrivial task due to the ill-posed nature of the underlying mathematical inversion problem. This work introduces a Bayesian probabilistic inference approach to analyze DEER data, assuming a nonparametric distance distribution with a Tikhonov smoothness prior. The method uses Markov Chain Monte Carlo sampling with a compositional Gibbs sampler to determine a posterior probability distribution over the entire parameter space, including the distance distribution, given an experimental data set. This posterior contains all of the information available from the data, including a full quantification of the uncertainty about the model parameters. The corresponding uncertainty about the distance distribution is visually captured via an ensemble of posterior predictive distributions. Several examples are presented to illustrate the method. Compared with bootstrapping, it performs faster and provides slightly larger uncertainty intervals.
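A compositional Gibbs sampler of the kind described can be sketched for a discretized linear model y = KP + noise with a Tikhonov smoothness prior: alternately draw the distance distribution P from its Gaussian conditional and the noise and smoothness precisions from conjugate Gamma conditionals. Shapes, the unit Gamma priors, and hyperparameters below are toy assumptions:

```python
# Compositional Gibbs sampler with a second-difference smoothness prior
# (illustrative sketch; not the paper's implementation).
import torch

def gibbs(K, y, n_samples=1000):
    m, n = K.shape
    L = (torch.diag(torch.full((n,), -2.0))       # discrete 2nd derivative
         + torch.diag(torch.ones(n - 1), 1)
         + torch.diag(torch.ones(n - 1), -1))
    tau, lam = torch.tensor(1.0), torch.tensor(1.0)  # noise / smoothness precisions
    out = []
    for _ in range(n_samples):
        # P | tau, lam : Gaussian with precision tau K'K + lam L'L
        cov = torch.linalg.inv(tau * K.T @ K + lam * L.T @ L)
        cov = (cov + cov.T) / 2                   # symmetrize numerically
        mean = tau * cov @ K.T @ y
        P = torch.distributions.MultivariateNormal(mean, cov).sample()
        # tau, lam | P : conjugate Gamma updates under Gamma(1, 1) priors
        tau = torch.distributions.Gamma(1 + m / 2,
                                        1 + ((y - K @ P) ** 2).sum() / 2).sample()
        lam = torch.distributions.Gamma(1 + n / 2,
                                        1 + ((L @ P) ** 2).sum() / 2).sample()
        out.append(P)
    return torch.stack(out)   # ensemble visualizing posterior uncertainty in P
```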