Estimating and disentangling epistemic uncertainty (uncertainty that is reducible with more training data) and aleatoric uncertainty (uncertainty that is inherent to the task at hand) is critically important when applying machine learning to high-stakes applications such as medical imaging and weather forecasting. Conditional diffusion models’ breakthrough ability to accurately and efficiently sample from the posterior distribution of a dataset now makes uncertainty estimation conceptually straightforward: one need only train and sample from a large ensemble of diffusion models. Unfortunately, training such an ensemble becomes computationally intractable as the complexity of the model architecture grows. In this work we introduce a new approach to ensembling, hyper-diffusion models (HyperDM), which allows one to accurately estimate both epistemic and aleatoric uncertainty with a single model. Unlike existing single-model uncertainty methods such as Monte-Carlo dropout and Bayesian neural networks, HyperDM offers prediction accuracy on par with, and in some cases superior to, multi-model ensembles. Furthermore, our proposed approach scales to modern network architectures such as Attention U-Net and yields more accurate uncertainty estimates than existing methods. We validate our method on two distinct real-world tasks: x-ray computed tomography reconstruction and weather temperature forecasting. Source code is publicly available at https://github.com/matthewachan/hyperdm.
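The epistemic/aleatoric split that such an ensemble enables can be sketched with the standard variance decomposition over ensemble members. This is an illustrative sketch, not the paper's HyperDM code; the array shapes and function name are assumptions:

```python
import numpy as np

def decompose_uncertainty(samples):
    """Variance decomposition over an ensemble of stochastic predictors.

    samples: array of shape (n_members, n_draws, ...) where each of the
    n_members models contributes n_draws samples from its predictive
    distribution.

    Returns (epistemic, aleatoric):
      epistemic - variance across members of each member's predictive mean
                  (model disagreement; shrinks with more training data)
      aleatoric - average across members of each member's predictive
                  variance (noise inherent to the task)
    """
    per_member_mean = samples.mean(axis=1)   # (n_members, ...)
    per_member_var = samples.var(axis=1)     # (n_members, ...)
    epistemic = per_member_mean.var(axis=0)
    aleatoric = per_member_var.mean(axis=0)
    return epistemic, aleatoric
```

For a well-trained ensemble on abundant data, the epistemic term collapses toward zero while the aleatoric term approaches the true noise level of the task.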
Model-Based Bayesian Deep Learning Architecture for Linear Inverse Problems in Computational Imaging
We propose a neural network architecture, combined with specific training and inference procedures, for linear inverse problems arising in computational imaging, to reconstruct the underlying image and to represent the uncertainty about the reconstruction. The proposed architecture is built from the model-based reconstruction perspective, enforcing data consistency and suppressing artifacts in an alternating manner. The training and inference procedures perform approximate Bayesian analysis on the weights of the proposed network using a variational inference method. The proposed architecture, with the associated inference procedure, is capable of characterizing uncertainty while performing reconstruction with a model-based approach. We tested the proposed method on a simulated magnetic resonance imaging experiment and showed that it achieved adequate reconstruction capability and provided reliable uncertainty estimates, in the sense that the regions assigned high uncertainty by the proposed method are likely to be the regions where reconstruction errors occur.
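The variational treatment of network weights described above can be illustrated in miniature by Monte-Carlo sampling from a mean-field Gaussian posterior over the weights of a single linear layer. This is a hedged sketch; the `mu`/`rho` parameterization with a softplus link is a common convention in variational inference, not the paper's exact implementation:

```python
import numpy as np

def mc_predict(x, mu, rho, n_samples=500, rng=None):
    """Monte-Carlo predictive mean and std for a linear layer whose
    weights carry a mean-field Gaussian variational posterior.

    x:   input vector, shape (d,)
    mu:  posterior weight means, shape (k, d)
    rho: pre-softplus std parameters, shape (k, d); std = log(1 + e^rho)
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.log1p(np.exp(rho))                       # softplus keeps std positive
    outs = np.empty((n_samples, mu.shape[0]))
    for i in range(n_samples):
        w = mu + sigma * rng.standard_normal(mu.shape)  # reparameterized weight draw
        outs[i] = w @ x
    return outs.mean(axis=0), outs.std(axis=0)
```

The per-output standard deviation is the uncertainty map: where the posterior over weights is tight, repeated forward passes agree and the std is small.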
- Award ID(s):
- 1934962
- PAR ID:
- 10288818
- Date Published:
- Journal Name:
- Electronic Imaging
- Volume:
- 2021
- Issue:
- 15
- ISSN:
- 2470-1173
- Page Range / eLocation ID:
- 201-1 to 201-7
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The conventional reconstruction method of off-axis digital holographic microscopy (DHM) relies on computational processing that involves spatial filtering of the sample spectrum and tilt compensation between the interfering waves to accurately reconstruct the phase of a biological sample. Additional computational procedures, such as numerical focusing, may be needed to reconstruct distortion-free quantitative phase images, depending on the optical configuration of the DHM system. Regardless of the implementation, this computational processing leads to long processing times, hampering the use of DHM for video-rate rendering of dynamic biological processes. In this study, we report a conditional generative adversarial network (cGAN) for robust and fast quantitative phase imaging in DHM. The phase images reconstructed by the GAN model present stable background levels, enhancing the visualization of specimens under experimental conditions in which the conventional approach often fails. The proposed learning-based method was trained and validated using human red blood cells recorded on an off-axis Mach–Zehnder DHM system. After proper training, the proposed GAN yields a computationally efficient method, reconstructing DHM images seven times faster than conventional computational approaches.
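For reference, the conventional off-axis processing the cGAN replaces, spatial filtering of one sideband plus tilt compensation, can be sketched roughly as follows. This is a simplified illustration; the sideband location, circular mask, and roll-based tilt compensation are generic assumptions, not this study's pipeline:

```python
import numpy as np

def filter_sideband(hologram, center, radius):
    """Isolate one interference sideband of an off-axis hologram in the
    Fourier domain and return the wrapped phase of the resulting field.

    center: (row, col) of the +1 diffraction order in the fftshift-ed
            spectrum; radius: circular mask radius in pixels.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    rows, cols = np.ogrid[:hologram.shape[0], :hologram.shape[1]]
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    filtered = spectrum * mask
    # Shift the sideband to the spectrum centre: this compensates the
    # linear tilt introduced by the off-axis reference wave.
    filtered = np.roll(filtered,
                       (hologram.shape[0] // 2 - center[0],
                        hologram.shape[1] // 2 - center[1]),
                       axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(filtered))
    return np.angle(field)
```

Every frame pays for two 2D FFTs plus masking, which is part of why a single feed-forward network pass can be markedly faster at video rates.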
-
In this paper we present Sniffer Faster R-CNN++, an efficient camera-LiDAR late-fusion network for low-complexity, accurate object detection in autonomous driving scenarios. The proposed detection network operates on the output candidates of any 3D detector and the proposals from the region proposal network of any 2D detector to generate final predictions. Compared to single-modality object detection approaches, fusion-based methods often face data integration difficulties: fusion-based network models are complicated in nature, and they require large computational overhead, resources, and dedicated processing pipelines for training and inference, especially the early-fusion and deep-fusion approaches. We therefore devise a late-fusion network that incorporates pre-trained, single-modality detectors without modification, performing association only at the detection level. In addition, LiDAR-based methods can fail to detect distant objects because of the sparsity of point clouds, so we devise a proposal refinement algorithm that jointly optimizes detection candidates and assists detection of distant objects. Extensive experiments on both the 3D and 2D detection benchmarks of the challenging KITTI dataset show that our proposed network architecture significantly improves detection accuracy while accelerating detection speed.
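Detection-level late fusion of this kind reduces, at its core, to associating projected 3D boxes with 2D boxes. A minimal greedy IoU association might look like the following; the threshold, box format, and greedy strategy are illustrative assumptions, not the Sniffer Faster R-CNN++ implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def associate(lidar_boxes, camera_boxes, thresh=0.5):
    """Greedy detection-level association: match each projected LiDAR box
    to the best unused camera box above an IoU threshold; unmatched LiDAR
    boxes survive as LiDAR-only detections."""
    matches, used = [], set()
    for i, a in enumerate(lidar_boxes):
        best_j, best_v = -1, thresh
        for j, b in enumerate(camera_boxes):
            v = iou(a, b)
            if j not in used and v > best_v:
                best_j, best_v = j, v
        if best_j >= 0:
            used.add(best_j)
            matches.append((i, best_j))
    return matches
```

Because association happens after each detector runs, either detector can be swapped out without retraining the other, which is the practical appeal of late fusion.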
-
Image restoration aims to recover a clean image from a noisy one. It has long been a topic of interest for researchers in imaging, optical science, and computer vision, and the problem becomes more challenging as the imaging environment deteriorates. Several computational approaches, ranging from statistical methods to deep learning, have been proposed over the years. Deep learning-based approaches have produced promising restoration results, but they are purely data driven, and their requirement of large (paired or unpaired) training datasets can limit their utility for certain physical problems. Recently, physics-informed image restoration techniques have gained importance due to their ability to enhance performance, infer aspects of the degradation process, and quantify uncertainty in the prediction results. In this paper, we propose a physics-informed deep learning approach with simultaneous parameter estimation using 3D integral imaging and a Bayesian neural network (BNN). An image-to-image mapping architecture is first pretrained to generate a clean image from the degraded image and is then trained jointly with the Bayesian neural network for parameter estimation. For network training, data simulated with the physical model is used instead of actual degraded data. The proposed approach was tested experimentally under degradations such as low illumination and partial occlusion. The recovery results are promising despite training on a simulated dataset. We tested the performance of the approach under varying illumination conditions and also analyzed it against a corresponding 2D imaging-based approach; the results suggest significant improvements over 2D even when training on similar datasets. The parameter estimation results further demonstrate the utility of the approach in estimating the degradation parameter, in addition to image restoration, under the experimental conditions considered.
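Generating training data from a physical degradation model, as described above in place of real captures, can be sketched for the low-illumination case as photon-limited sensing plus read noise. This is a hypothetical degradation model chosen for illustration; the paper's actual model and parameters may differ:

```python
import numpy as np

def simulate_low_light(clean, illumination, read_noise, rng=None):
    """Simulate a low-illumination capture of a clean image: Poisson
    photon noise at a photon count scaled by `illumination`, plus
    Gaussian read noise, rescaled back to the clean image's range."""
    rng = np.random.default_rng() if rng is None else rng
    photons = rng.poisson(illumination * clean)
    return photons / illumination + rng.normal(0.0, read_noise, clean.shape)
```

Triples of (clean image, simulated degraded image, illumination parameter) generated this way can stand in for real captures when training a restoration network jointly with a parameter-estimation head.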
-
We consider the inverse problem of determining the geometry of penetrable objects from scattering data generated by one incident wave at a fixed frequency. We first study an orthogonality sampling type method which is fast, simple to implement, and robust against noise in the data. This sampling method has a new imaging functional that is applicable to data measured in near-field or far-field regions. The resolution of the imaging functional is analyzed, and the explicit decay rate of the functional is established. A connection with the orthogonality sampling method of Potthast is also studied. The sampling method is then combined with a deep neural network to solve the inverse scattering problem. This combined method can be understood as a network whose first layer uses the image computed by the sampling method, followed by the U-Net architecture for the remaining layers. The fast computation and the knowledge provided by the sampling method help speed up the training of the network. The combination leads to a significant improvement over the reconstructions initially obtained by the sampling method. The combined method is also able to invert some limited-aperture experimental data without any additional transfer training.
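An orthogonality-sampling-type imaging functional for far-field data can be sketched as a correlation of the measured far-field pattern with plane-wave test functions evaluated on a sampling grid. This is a generic sketch of this family of methods; the paper's new functional and its near-field variant differ in detail:

```python
import numpy as np

def osm_image(far_field, directions, grid, k):
    """Orthogonality-sampling-type imaging functional for far-field data:

        I(z) = | sum_j u_inf(xhat_j) * exp(i k xhat_j . z) |

    far_field:  complex far-field measurements u_inf, shape (n_dirs,)
    directions: unit observation directions xhat_j, shape (n_dirs, 2)
    grid:       sampling points z, shape (n_points, 2)
    k:          wavenumber
    I(z) peaks near the support of the scatterer.
    """
    phase = np.exp(1j * k * grid @ directions.T)   # (n_points, n_dirs)
    return np.abs(phase @ far_field)
```

Each sampling point costs only one inner product over the measurement directions, which is why methods of this type are fast enough to serve as the first layer of a learned reconstruction pipeline.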