Title: Deep Probabilistic Imaging: Uncertainty Quantification and Multi-modal Solution Characterization for Computational Imaging
Computational image reconstruction algorithms generally produce a single image without any measure of uncertainty or confidence. Regularized Maximum Likelihood (RML) and feed-forward deep learning approaches for inverse problems typically focus on recovering a point estimate. This is a serious limitation when working with under-determined imaging systems, where multiple image modes may be consistent with the measured data. Characterizing the space of probable images that explain the observational data is therefore crucial. In this paper, we propose a variational deep probabilistic imaging approach to quantify reconstruction uncertainty. Deep Probabilistic Imaging (DPI) employs an untrained deep generative model to estimate a posterior distribution over the unobserved image. This approach requires no training data; instead, it optimizes the weights of a neural network to generate image samples that fit a particular measurement dataset. Once the network weights have been learned, the posterior distribution can be sampled efficiently. We demonstrate this approach on interferometric radio imaging, which is used for black hole imaging with the Event Horizon Telescope, and on compressed sensing Magnetic Resonance Imaging (MRI).
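The DPI procedure described above can be illustrated with a minimal, hypothetical sketch: a generator network is fit to a single measurement vector under a data-consistency loss, after which posterior-like samples are cheap to draw. The paper's actual method uses an invertible flow-based generator with an entropy term; here a plain feed-forward generator, a toy linear forward operator `A`, a random measurement vector `y`, and a simple smoothness prior stand in, all of which are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Placeholder linear forward model A and measurement vector y; in practice
# these would come from the interferometric or MRI measurement setup.
torch.manual_seed(0)
n_pix, n_meas = 64, 32
A = torch.randn(n_meas, n_pix)
y = torch.randn(n_meas)

class Generator(nn.Module):
    """Maps latent noise z to an image sample; a simplified stand-in for the
    flow-based generative model used in DPI."""
    def __init__(self, z_dim=16, img_dim=n_pix):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, img_dim),
        )

    def forward(self, z):
        return self.net(z)

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(64, 16)                    # batch of latent samples
    x = gen(z)                                 # candidate images
    data_fit = ((x @ A.T - y) ** 2).mean()     # measurement consistency
    smooth = (x[:, 1:] - x[:, :-1]).abs().mean()   # simple smoothness prior
    loss = data_fit + 1e-2 * smooth
    opt.zero_grad(); loss.backward(); opt.step()

# Once the weights are learned, approximate posterior samples are cheap to draw.
with torch.no_grad():
    samples = gen(torch.randn(1000, 16))
    mean_img, std_img = samples.mean(0), samples.std(0)
```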
Award ID(s): 1935980
PAR ID: 10437836
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 35
Issue: 3
ISSN: 2159-5399
Page Range / eLocation ID: 2628 to 2637
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. In this work, we study the deep image prior (DIP) for reconstruction problems in magnetic resonance imaging (MRI). DIP has become a popular approach for image reconstruction; it recovers a clean image by fitting an overparameterized convolutional neural network (CNN) to the corrupted/undersampled measurements. Recent work shows that using a reference image as the network input often improves reconstruction over vanilla DIP with a random input. However, obtaining the reference input image often requires supervision and hence is difficult in practice. In this work, we propose a self-guided reconstruction scheme that uses no training data other than the set of undersampled measurements to simultaneously estimate the network weights and the input (reference). We introduce a new regularization that aids the joint estimation by requiring the CNN to act as a powerful denoiser. The proposed self-guided method gives significantly improved image reconstructions for MRI with limited measurements compared to conventional DIP and the reference-guided method, while eliminating the need for any additional data. (A minimal sketch of this self-guided scheme appears after this list.)
  2. We propose a neural network architecture, together with specific training and inference procedures, for linear inverse problems arising in computational imaging; the goal is to reconstruct the underlying image and to represent the uncertainty about the reconstruction. The proposed architecture is built from the model-based reconstruction perspective, which enforces data consistency and eliminates artifacts in an alternating manner. The training and inference procedures perform approximate Bayesian analysis on the weights of the proposed network using a variational inference method. The architecture with the associated inference procedure is capable of characterizing uncertainty while performing reconstruction with a model-based approach. We tested the proposed method on a simulated magnetic resonance imaging experiment and showed that it achieves adequate reconstruction quality and provides reliable uncertainty estimates, in the sense that regions assigned high uncertainty are likely to be the regions where reconstruction errors occur. (A small sketch of variational inference over network weights appears after this list.)
  3. Image restoration aims to recover a clean image from a noisy one. It has long been a topic of interest for researchers in imaging, optical science, and computer vision, and it becomes more challenging as the imaging environment deteriorates. Several computational approaches, ranging from statistical to deep learning methods, have been proposed over the years. Deep learning-based approaches provide promising restoration results, but they are purely data-driven, and their need for large training datasets (paired or unpaired) can limit their utility for certain physical problems. Recently, physics-informed image restoration techniques have gained importance due to their ability to enhance performance, infer aspects of the degradation process, and quantify uncertainty in the predictions. In this paper, we propose a physics-informed deep learning approach with simultaneous parameter estimation using 3D integral imaging and a Bayesian neural network (BNN). An image-to-image mapping architecture is first pretrained to generate a clean image from the degraded image and is then trained jointly with the Bayesian neural network for parameter estimation. For network training, data simulated from the physical model is used instead of actual degraded data. The proposed approach has been tested experimentally under degradations such as low illumination and partial occlusion, and the recovery results are promising despite training on a simulated dataset. We evaluated performance under varying illumination levels and also compared the approach against a corresponding 2D imaging-based approach; the results suggest significant improvements over 2D even when training on similar datasets. The parameter estimation results further demonstrate the utility of the approach in estimating the degradation parameter in addition to restoring the image under the experimental conditions considered. (A brief sketch of simulating physics-based degradations for training appears after this list.)
  4. Estimating and disentangling epistemic uncertainty (uncertainty that is reducible with more training data) and aleatoric uncertainty (uncertainty that is inherent to the task at hand) is critically important when applying machine learning to high-stakes applications such as medical imaging and weather forecasting. Conditional diffusion models' breakthrough ability to accurately and efficiently sample from the posterior distribution of a dataset now makes uncertainty estimation conceptually straightforward: one need only train and sample from a large ensemble of diffusion models. Unfortunately, training such an ensemble becomes computationally intractable as the complexity of the model architecture grows. In this work we introduce a new approach to ensembling, hyper-diffusion models (HyperDM), which allows one to accurately estimate both epistemic and aleatoric uncertainty with a single model. Unlike existing single-model uncertainty methods such as Monte Carlo dropout and Bayesian neural networks, HyperDM offers prediction accuracy on par with, and in some cases superior to, multi-model ensembles. Furthermore, the proposed approach scales to modern network architectures such as Attention U-Net and yields more accurate uncertainty estimates than existing methods. We validate the method on two distinct real-world tasks: X-ray computed tomography reconstruction and weather temperature forecasting. Source code is publicly available at https://github.com/matthewachan/hyperdm. (A short sketch of the epistemic/aleatoric decomposition appears after this list.)
  5. In this paper, we present a polarimetric image restoration approach that aims to recover the Stokes parameters and the degree of linear polarization from their degraded counterparts. The Stokes parameters and the degree of linear polarization are affected by degradations, such as scattering and attenuation, that arise under partial occlusion or in turbid media such as turbid water. Polarimetric image restoration with corresponding Mueller matrix estimation is performed using polarization-informed deep learning and 3D integral imaging. An unsupervised image-to-image translation (UNIT) framework is used to obtain clean Stokes parameters from the degraded ones. Additionally, a multi-output convolutional neural network (CNN) branch predicts the Mueller matrix estimate along with an estimate of the corresponding residue. The degree of linear polarization, together with the Mueller matrix estimate, provides information about the characteristics of the underlying transmission medium and the object under consideration. The approach has been evaluated under different environmentally degraded conditions, such as various levels of turbidity and partial occlusion; 3D integral imaging reduces the effects of degradation in a turbid medium, and a performance comparison between 3D and 2D imaging under varying scene conditions is provided. Experimental results suggest that the proposed approach is promising under the scene degradations considered. To the best of our knowledge, this is the first report on polarization-informed deep learning in 3D imaging that attempts to recover the polarimetric information along with the corresponding Mueller matrix estimate in a degraded environment. (A short sketch of computing the Stokes parameters and the degree of linear polarization appears after this list.)
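For item 1 above, a minimal sketch of the self-guided idea, assuming a toy undersampled-MRI setup: both the network weights and its input (the "reference") are optimized against a k-space data-consistency loss, and a simple perturbation-based regularizer nudges the CNN to behave like a denoiser. The tiny CNN, sampling mask, and regularizer are illustrative placeholders; the paper's actual architecture and regularization differ.

```python
import torch
import torch.nn as nn

# Placeholders for an undersampled-MRI setup: a random sampling mask and
# the corresponding measured k-space data.
torch.manual_seed(0)
H = W = 32
mask = (torch.rand(H, W) < 0.3).float()
kspace = torch.fft.fft2(torch.randn(H, W)) * mask

net = nn.Sequential(                        # small CNN standing in for a U-Net
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
ref = torch.zeros(1, 1, H, W, requires_grad=True)   # jointly learned input
opt = torch.optim.Adam(list(net.parameters()) + [ref], lr=1e-3)

for step in range(1000):
    noise = 0.1 * torch.randn_like(ref)
    x_perturbed = net(ref + noise)          # output from a perturbed input
    x = net(ref)                            # current reconstruction
    k_pred = torch.fft.fft2(x[0, 0]) * mask
    data_fit = (k_pred - kspace).abs().pow(2).mean()   # k-space consistency
    denoise_reg = (x_perturbed - x).pow(2).mean()      # denoiser-like behavior
    loss = data_fit + denoise_reg
    opt.zero_grad(); loss.backward(); opt.step()
```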
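For item 2, a compact sketch of mean-field variational inference over network weights on a toy linear inverse problem; the factorized Gaussian posterior, the single Bayesian layer, and the KL weighting are illustrative assumptions rather than the paper's model-based architecture. Repeated stochastic forward passes then yield a pixelwise uncertainty estimate.

```python
import torch
import torch.nn as nn

class BayesLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over its weights
    (mean-field variational inference via the reparameterization trick)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        eps = torch.randn_like(self.mu)
        w = self.mu + self.log_sigma.exp() * eps   # sample weights
        return x @ w.T

    def kl(self):
        # KL divergence to a standard-normal prior over the weights
        s2 = (2 * self.log_sigma).exp()
        return 0.5 * (s2 + self.mu ** 2 - 1 - 2 * self.log_sigma).sum()

# Toy linear inverse problem standing in for the simulated MRI experiment.
torch.manual_seed(0)
A = torch.randn(40, 64)
x_true = torch.randn(64)
y = A @ x_true + 0.05 * torch.randn(40)

layer = BayesLinear(40, 64)                 # maps measurements to an image
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for step in range(3000):
    x_hat = layer(y)
    loss = ((A @ x_hat - y) ** 2).mean() + 1e-4 * layer.kl()
    opt.zero_grad(); loss.backward(); opt.step()

# Multiple stochastic forward passes give a pixelwise uncertainty estimate.
with torch.no_grad():
    draws = torch.stack([layer(y) for _ in range(200)])
    recon, uncertainty = draws.mean(0), draws.std(0)
```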
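For item 3, a brief sketch of generating simulated training data from a physical degradation model, here a generic low-illumination (Poisson shot-noise) model chosen purely for illustration; the paper's actual physical model and 3D integral imaging pipeline are not reproduced.

```python
import torch

def simulate_low_light(clean, photon_level):
    """Physics-based degradation: scale a clean image to an expected photon
    count and apply Poisson shot noise (a generic low-light model; the
    paper's degradation model may differ)."""
    scaled = clean.clamp(0, 1) * photon_level
    return torch.poisson(scaled) / photon_level

# Simulated (degraded, clean) pairs replace hard-to-obtain real degraded data.
clean = torch.rand(1, 64, 64)               # placeholder scene
train_pairs = [(simulate_low_light(clean, p), clean) for p in (5.0, 20.0, 100.0)]
```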
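For item 4, a short sketch of the epistemic/aleatoric decomposition that an ensemble of posterior samplers enables; the tensor shapes are hypothetical, and the full HyperDM machinery (a hypernetwork emitting diffusion-model weights) is not shown.

```python
import torch

def decompose_uncertainty(samples):
    """samples: shape (M, S, ...) holding S posterior samples from each of
    M ensemble members (e.g., M weight draws, each defining a generative
    model that is sampled S times). Returns per-pixel variances."""
    member_means = samples.mean(dim=1)          # (M, ...)
    epistemic = member_means.var(dim=0)         # spread across members
    aleatoric = samples.var(dim=1).mean(dim=0)  # average spread within a member
    return epistemic, aleatoric

# Placeholder: 8 ensemble members, 32 samples each, over a 16x16 image.
samples = torch.randn(8, 32, 16, 16)
epistemic, aleatoric = decompose_uncertainty(samples)
```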
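For item 5, a short sketch of the standard computation of the linear Stokes parameters and the degree of linear polarization (DoLP) from intensity images captured behind a linear polarizer at four angles; the paper's acquisition and restoration pipeline is not reproduced, and the input images here are placeholders.

```python
import torch

def stokes_and_dolp(i0, i45, i90, i135, eps=1e-8):
    """Linear Stokes parameters and degree of linear polarization from
    intensities at polarizer angles 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component
    dolp = torch.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    return s0, s1, s2, dolp

# Placeholder polarimetric captures of a degraded scene.
i0, i45, i90, i135 = (torch.rand(64, 64) for _ in range(4))
s0, s1, s2, dolp = stokes_and_dolp(i0, i45, i90, i135)
```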