
Title: Robust Compressed Sensing MRI with Deep Generative Priors
The CSGM framework (Bora-Jalal-Price-Dimakis '17) has shown that deep generative priors can be powerful tools for solving inverse problems. However, to date this framework has been empirically successful only on certain datasets (for example, human faces and MNIST digits), and it is known to perform poorly on out-of-distribution samples. In this paper, we present the first successful application of the CSGM framework on clinical MRI data. We train a generative prior on brain scans from the fastMRI dataset, and show that posterior sampling via Langevin dynamics achieves high quality reconstructions. Furthermore, our experiments and theory show that posterior sampling is robust to changes in the ground-truth distribution and measurement process. Our code and models are available at: https://github.com/utcsilab/csgm-mri-langevin.
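The method summarized above reconstructs the image by sampling from the posterior with Langevin dynamics: the score of a generative prior is added to the gradient of the data-consistency term for the undersampled measurements, and noisy gradient steps then draw samples consistent with both. The snippet below is a minimal sketch of that idea, not the authors' implementation: it assumes a toy Gaussian prior in place of the trained fastMRI score network and a random matrix in place of the undersampled Fourier/coil-sensitivity operator, and the step size and iteration count are illustrative choices.

```python
import numpy as np

# Minimal sketch of posterior sampling via (unadjusted) Langevin dynamics for a
# linear inverse problem y = A x + noise. This is NOT the paper's code: the
# learned score network is replaced by a toy Gaussian-prior score, and the
# undersampled Fourier / coil-sensitivity operator is replaced by a random A.

rng = np.random.default_rng(0)

n, m = 64, 32                               # signal length, number of measurements
sigma = 0.05                                # measurement noise std
x_true = rng.normal(size=n)                 # toy "ground truth"
A = rng.normal(size=(m, n)) / np.sqrt(m)    # stand-in measurement operator
y = A @ x_true + sigma * rng.normal(size=m)

def prior_score(x):
    # Stand-in for the trained score network s_theta(x) ~ grad_x log p(x).
    # For a standard Gaussian prior, the score is simply -x.
    return -x

def likelihood_score(x):
    # grad_x log p(y | x) for Gaussian noise: A^T (y - A x) / sigma^2.
    return A.T @ (y - A @ x) / sigma**2

# Langevin update: x <- x + (step/2) * grad log p(x | y) + sqrt(step) * z.
x = rng.normal(size=n)
step = 2e-4
for _ in range(20_000):
    grad = prior_score(x) + likelihood_score(x)
    x = x + 0.5 * step * grad + np.sqrt(step) * rng.normal(size=n)

# The sample matches the measurements closely; accuracy in the unmeasured
# directions is dictated entirely by the prior, which is why a strong learned
# prior (rather than this toy Gaussian) is needed for high-quality MRI.
print("measurement residual:", np.linalg.norm(y - A @ x))
print("relative error vs. ground truth:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In the full method, the prior score comes from a network trained on fastMRI brain scans, sampling is annealed over a sequence of noise levels, and the data are complex-valued and multi-coil; none of those details are reflected in this toy example.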
Authors:
Ajil Jalal; Marius Arvinte; Giannis Daras; Eric Price; Alexandros G. Dimakis; Jonathan I. Tamir
Award ID(s):
1751040 2008868
Publication Date:
NSF-PAR ID:
10336434
Journal Name:
Advances in Neural Information Processing Systems
Volume:
34
ISSN:
1049-5258
Sponsoring Org:
National Science Foundation
More Like this
  1. Optical tomography is the process of reconstructing the optical properties of biological tissue using measurements of incoming and outgoing light intensity at the tissue boundary. Mathematically, light propagation is modeled by the radiative transfer equation (RTE), and optical tomography amounts to reconstructing the scattering coefficient in the RTE using the boundary measurements. In the strong scattering regime, the RTE is asymptotically equivalent to the diffusion equation (DE), and the inverse problem becomes reconstructing the diffusion coefficient using Dirichlet and Neumann data on the boundary. We study this problem in the Bayesian framework, meaning that we examine the posterior distribution of the scattering coefficient after the measurements have been taken. However, sampling from this distribution is computationally expensive, since to evaluate each Markov Chain Monte Carlo (MCMC) sample, one needs to run the RTE solvers multiple times. We therefore propose the DE-assisted two-level MCMC technique, in which bad samples are filtered out using DE solvers that are significantly cheaper than RTE solvers. This allows us to make sampling from the RTE posterior distribution computationally feasible. (A toy sketch of this two-level filtering idea appears after this list.)
  2. Deep neural networks have emerged as very successful tools for image restoration and reconstruction tasks. These networks are often trained end-to-end to directly reconstruct an image from a noisy or corrupted measurement of that image. To achieve state-of-the-art performance, training on large and diverse sets of images is considered critical. However, it is often difficult and/or expensive to collect large amounts of training images. Inspired by the success of Data Augmentation (DA) for classification problems, in this paper, we propose a pipeline for data augmentation for accelerated MRI reconstruction and study its effectiveness at reducing the required training data in a variety of settings. Our DA pipeline, MRAugment, is specifically designed to utilize the invariances present in medical imaging measurements, as naive DA strategies that neglect the physics of the problem fail. Through extensive studies on multiple datasets, we demonstrate that in the low-data regime DA prevents overfitting and can match or even surpass the state of the art while using significantly fewer training data, whereas in the high-data regime it has diminishing returns. Furthermore, our findings show that DA improves the robustness of the model against various shifts in the test distribution. (A simplified sketch of such physics-aware augmentation is shown after this list.)
  3. Electrification of vehicles is becoming one of the main avenues for decarbonization of the transportation market. To reduce stress on the energy grid, large-scale charging will require optimal scheduling of when electricity is delivered to vehicles. Coordinated electric-vehicle charging can produce optimal, flattened loads that would improve reliability of the power system as well as reduce system costs and emissions. However, a challenge for successful introduction of coordinated deadline-scheduling of residential charging comes from the demand side: customers would need to be willing both to defer charging their vehicles and to accept less than a 100% target for battery charge. Within a coordinated electric-vehicle charging pilot run by the local utility in upstate New York, this study analyzes the necessary incentives for customers to accept giving up control of when charging of their vehicles takes place. Using data from a choice experiment implemented in an online survey of electric-vehicle owners and lessees in upstate New York (N=462), we make inference on the willingness to pay for features of hypothetical coordinated electric-vehicle charging programs. To address unobserved preference heterogeneity, we apply Variational Bayes (VB) inference to a mixed logit model. Stochastic variational inference has recently emerged as a fast and computationally-efficient alternative to Markov chain Monte Carlo (MCMC) methods for scalable Bayesian estimation of discrete choice models. Our results show that individuals negatively perceive the duration of the timeframe in which the energy provider would be allowed to defer charging, even though both the desired target for battery charge and deadline would be respected. This negative monetary valuation is evidenced by an expected average reduction in the annual fee of joining the charging program of $2.64 per hour of control yielded to the energy provider. Our results also provide evidence of substantial heterogeneity in preferences. For example, the 25% quantile of the posterior distribution of the mean of the willingness to accept an additional hour of control yielded to the utility is $5.16. However, the negative valuation of the timeframe for deferring charging is compensated by positive valuation of emission savings coming from switching charging to periods of the day with a higher proportion of generation from renewable sources. Customers also positively valued discounts in the price of energy delivery.
  4. This work studies online learning-based trajectory planning for multiple autonomous underwater vehicles (AUVs) to estimate a water parameter field of interest in the under-ice environment. A centralized system is considered, where several fixed access points on the ice layer are introduced as gateways for communications between the AUVs and a remote data fusion center. We model the water parameter field of interest as a Gaussian process with unknown hyper-parameters. The AUV trajectories for sampling are determined on an epoch-by-epoch basis. At the end of each epoch, the access points relay the observed field samples from all the AUVs to the fusion center, which computes the posterior distribution of the field based on Gaussian process regression and estimates the field hyper-parameters. The optimal trajectories of all the AUVs in the next epoch are determined to maximize a long-term reward that is defined based on the field uncertainty reduction and the AUV mobility cost, subject to the kinematics constraint, the communication constraint and the sensing area constraint. We formulate the adaptive trajectory planning problem as a Markov decision process (MDP). A reinforcement learning-based online learning algorithm is designed to determine the optimal AUV trajectories in a constrained continuous space. Simulation results show that the proposed learning-based trajectory planning algorithm has performance similar to a benchmark method that assumes perfect knowledge of the field hyper-parameters. (A toy sketch of the Gaussian-process update and uncertainty-driven sampling step appears after this list.)
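Item 1 above describes a two-level MCMC scheme in which a cheap surrogate screens proposals before the expensive forward model is evaluated. The sketch below illustrates that delayed-acceptance idea on a toy one-dimensional problem; the quadratic "fine" and "coarse" log-posteriors are stand-ins I have assumed for the RTE and DE solvers, respectively.

```python
import numpy as np

# Toy delayed-acceptance (two-level) Metropolis-Hastings: a cheap "coarse"
# posterior screens proposals so the expensive "fine" posterior is evaluated
# only for promising moves. Both posteriors here are simple stand-ins.

rng = np.random.default_rng(1)

def log_post_fine(theta):
    # Expensive model (stand-in): Gaussian log-posterior centred at 2.0.
    return -0.5 * (theta - 2.0) ** 2 / 0.3 ** 2

def log_post_coarse(theta):
    # Cheap surrogate (stand-in): slightly biased, wider approximation.
    return -0.5 * (theta - 1.9) ** 2 / 0.4 ** 2

theta, samples, fine_evals = 0.0, [], 0
for _ in range(5000):
    prop = theta + 0.3 * rng.normal()
    # Stage 1: accept/reject using only the cheap surrogate.
    if np.log(rng.uniform()) < log_post_coarse(prop) - log_post_coarse(theta):
        # Stage 2: correct with the expensive model so the chain still
        # targets the fine posterior exactly.
        fine_evals += 1
        log_alpha = (log_post_fine(prop) - log_post_fine(theta)) \
                    - (log_post_coarse(prop) - log_post_coarse(theta))
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
    samples.append(theta)

samples = np.array(samples[1000:])          # drop burn-in
print("posterior mean ~", samples.mean(), "| expensive (stage-2) evaluations:", fine_evals)
```

The second-stage correction keeps the chain exactly invariant for the fine posterior, so the surrogate only changes which proposals reach the expensive solver, not the distribution being sampled.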
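Item 2 describes data augmentation for accelerated MRI that respects the measurement physics: transforms are applied in the image domain and the undersampled k-space measurement is then re-simulated, rather than transforming the measured data directly. The snippet below is a simplified single-coil sketch of that idea; the flip augmentation, mask pattern, and array shapes are illustrative assumptions, not MRAugment's API.

```python
import numpy as np

# Simplified, single-coil sketch of physics-aware data augmentation for
# accelerated MRI: augment the clean image, then re-simulate the undersampled
# k-space measurement, so training pairs stay consistent with the forward model.

rng = np.random.default_rng(0)

def fft2c(img):
    # Centred 2D FFT (the usual MRI convention).
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img), norm="ortho"))

def undersample(kspace, mask):
    return kspace * mask

H, W, accel = 128, 128, 4
image = rng.normal(size=(H, W))                      # stand-in fully sampled image
mask = (rng.uniform(size=(1, W)) < 1.0 / accel)      # random phase-encode columns
mask[:, W // 2 - 8 : W // 2 + 8] = True              # always keep low frequencies

def augment_pair(image, mask):
    # 1) apply an invariance-preserving transform in the image domain ...
    aug_image = image[:, ::-1] if rng.uniform() < 0.5 else image   # horizontal flip
    # 2) ... then re-simulate the measurement from the augmented image.
    aug_kspace = undersample(fft2c(aug_image), mask)
    return aug_kspace, aug_image     # (network input measurement, training target)

kspace_aug, target = augment_pair(image, mask)
print(kspace_aug.shape, target.shape)   # (128, 128) (128, 128)
```

The actual pipeline additionally handles multi-coil data, complex images, and interpolation effects of geometric transforms; the point here is only the order of operations: augment first, then apply the forward model.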
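The last item builds on Gaussian-process regression: after each epoch the fusion center updates the posterior over the water-parameter field, and new sampling locations are chosen to reduce the remaining uncertainty. The fragment below shows only that update-and-pick-next-point loop, with a greedy maximum-variance rule as a stand-in for the paper's reinforcement-learning planner and an assumed squared-exponential kernel; the MDP formulation and mobility/communication constraints are omitted.

```python
import numpy as np

# Toy 1-D illustration of the sensing loop: fit a Gaussian-process posterior to
# the samples collected so far, then greedily pick the next sampling location
# where the posterior variance is largest. Kernel and noise level are assumed.

rng = np.random.default_rng(0)

def kernel(a, b, length=0.5, var=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-2):
    K = kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = kernel(x_obs, x_grid)
    Kss = kernel(x_grid, x_grid)
    sol = np.linalg.solve(K, np.column_stack([y_obs, Ks]))
    mean = Ks.T @ sol[:, 0]                 # posterior mean on the grid
    cov = Kss - Ks.T @ sol[:, 1:]           # posterior covariance on the grid
    return mean, np.diag(cov)

field = lambda x: np.sin(3 * x) + 0.3 * np.cos(7 * x)   # unknown field (toy)
x_grid = np.linspace(0, 2, 200)
x_obs = np.array([0.1, 1.9])                            # initial samples
y_obs = field(x_obs)

for epoch in range(5):
    mean, var = gp_posterior(x_obs, y_obs, x_grid)
    x_next = x_grid[np.argmax(var)]                     # largest remaining uncertainty
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, field(x_next))
    print(f"epoch {epoch}: sample at x={x_next:.2f}, max posterior std={var.max()**0.5:.3f}")
```

In the cited work, this greedy rule is replaced by a learned policy that trades uncertainty reduction against AUV mobility cost under kinematic, communication, and sensing-area constraints.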