Self‐supervised learning has shown great promise because of its ability to train deep learning (DL) magnetic resonance imaging (MRI) reconstruction methods without fully sampled data. Current self‐supervised learning methods for physics‐guided reconstruction networks split acquired undersampled data into two disjoint sets, where one is used for data consistency (DC) in the unrolled network, while the other is used to define the training loss. In this study, we propose an improved self‐supervised learning strategy that more efficiently uses the acquired data to train a physics‐guided reconstruction network without a database of fully sampled data. The proposed multi‐mask self‐supervised learning via data undersampling (SSDU) applies a holdout masking operation on the acquired measurements to split them into multiple pairs of disjoint sets for each training sample, using one set of each pair in the DC units and the other to define the loss, thereby more efficiently using the undersampled data. Multi‐mask SSDU is applied to fully sampled 3D knee and prospectively undersampled 3D brain MRI datasets, for various acceleration rates and patterns, and compared with the parallel imaging method, CG‐SENSE, and single‐mask SSDU DL‐MRI, as well as supervised DL‐MRI when fully sampled data are available. The results on knee MRI show that the proposed multi‐mask SSDU outperforms SSDU and performs as well as supervised DL‐MRI. A clinical reader study further ranks the multi‐mask SSDU higher than supervised DL‐MRI in terms of signal‐to‐noise ratio and aliasing artifacts. Results on brain MRI show that multi‐mask SSDU achieves better reconstruction quality compared with SSDU. The reader study demonstrates that multi‐mask SSDU at R = 8 significantly improves reconstruction compared with single‐mask SSDU at R = 8, as well as CG‐SENSE at R = 2.
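The multi-mask holdout splitting described above can be sketched in a few lines. This is a minimal illustration assuming 1-D index arrays; the function name, the fixed split fraction `rho`, and the uniformly random masks are all assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

def multi_mask_split(acquired_idx, num_masks, rho=0.4, seed=0):
    """Split acquired k-space indices into `num_masks` disjoint
    (DC, loss) pairs by holdout masking: each pass holds out a
    fraction `rho` of the samples to define the training loss and
    keeps the rest for the data-consistency (DC) units.
    """
    rng = np.random.default_rng(seed)
    pairs = []
    for _ in range(num_masks):
        perm = rng.permutation(acquired_idx)
        n_loss = int(rho * len(perm))
        loss_idx = np.sort(perm[:n_loss])   # defines the training loss
        dc_idx = np.sort(perm[n_loss:])     # enforces data consistency
        pairs.append((dc_idx, loss_idx))
    return pairs
```

Each training sample then contributes `num_masks` (DC, loss) pairs instead of one, which is how the multi-mask variant reuses the same acquired measurements more efficiently than a single split.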
Robust Self-Guided Deep Image Prior
In this work, we study the deep image prior (DIP) for reconstruction problems in magnetic resonance imaging (MRI). DIP has become a popular approach for image reconstruction, where it recovers a clean image by fitting an overparameterized convolutional neural network (CNN) to the corrupted/undersampled measurements. To improve the performance of DIP, recent work shows that using a reference image as an input often leads to improved reconstruction results compared to vanilla DIP with random input. However, obtaining the reference input image often requires supervision and hence is difficult in practice. In this work, we propose a self-guided reconstruction scheme that uses no training data other than the set of undersampled measurements to simultaneously estimate the network weights and input (reference). We introduce a new regularization that aids the joint estimation by requiring the CNN to act as a powerful denoiser. The proposed self-guided method gives significantly improved image reconstructions for MRI with limited measurements compared to the conventional DIP and the reference-guided method while eliminating the need for any additional data.
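The joint objective sketched in the abstract, a data-fit term plus a denoising-style regularizer on the network input, can be illustrated as follows. Everything here (the placeholder network `f`, the operator `A`, and the weights `lam` and `sigma`) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def self_guided_loss(f, theta, z, A, y, lam=1.0, sigma=0.1, rng=None):
    """Toy evaluation of a self-guided DIP-style objective:
    a data term ||A f(theta, z) - y||^2 plus a regularizer that
    asks f to map a perturbed input back to its own clean output,
    i.e. to behave like a denoiser.  Both theta (weights) and z
    (the input/reference) would be optimized jointly.
    """
    rng = rng or np.random.default_rng(0)
    out = f(theta, z)
    data_term = np.sum((A @ out - y) ** 2)
    z_noisy = z + sigma * rng.standard_normal(z.shape)
    reg_term = np.sum((f(theta, z_noisy) - out) ** 2)
    return data_term + lam * reg_term
```

In the actual method both `theta` and `z` receive gradients from an objective of this general shape, which is what removes the need for a supervised reference input.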
- Award ID(s):
- 2212066
- PAR ID:
- 10525919
- Publisher / Repository:
- IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- Date Published:
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
-
Abstract Purpose: To introduce a novel deep model‐based architecture (DMBA), SPICER, that uses pairs of noisy and undersampled k‐space measurements of the same object to jointly train a model for MRI reconstruction and automatic coil sensitivity estimation. Methods: SPICER consists of two modules that simultaneously reconstruct accurate MR images and estimate high‐quality coil sensitivity maps (CSMs). The first module, the CSM estimation module, uses a convolutional neural network (CNN) to estimate CSMs from the raw measurements. The second module, the DMBA‐based MRI reconstruction module, forms reconstructed images from the input measurements and the estimated CSMs using both the physical measurement model and a learned CNN prior. With the benefit of our self‐supervised learning strategy, SPICER can be efficiently trained without any fully sampled reference data. Results: We validate SPICER on both open‐access datasets and experimentally collected data, showing that it can achieve state‐of‐the‐art performance in highly accelerated data acquisition settings (up to ). Our results also highlight the contribution of the different modules of SPICER (the DMBA, the CSM estimation, and the SPICER training loss) to the final performance of the method. Moreover, SPICER can estimate better CSMs than pre‐estimation methods, especially when the ACS data are limited. Conclusion: Despite being trained on noisy undersampled data, SPICER can reconstruct high‐quality images and CSMs in highly undersampled settings, outperforming other self‐supervised learning methods and matching the performance of the well‐known E2E‐VarNet trained on fully sampled ground‐truth data.
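As a concrete building block for the role CSMs play, the classic SENSE-style combination of coil images with estimated sensitivity maps looks like the following. This is a standard textbook operation shown for illustration only; SPICER itself estimates the CSMs with a CNN inside a model-based network rather than applying them this directly.

```python
import numpy as np

def coil_combine(coil_images, csms, eps=1e-8):
    """SENSE-style combination of multi-coil images using coil
    sensitivity maps (CSMs):

        x = sum_c conj(S_c) * x_c  /  sum_c |S_c|^2

    coil_images : (num_coils, H, W) complex coil images
    csms        : (num_coils, H, W) complex sensitivity maps
    """
    num = np.sum(np.conj(csms) * coil_images, axis=0)
    den = np.sum(np.abs(csms) ** 2, axis=0) + eps  # avoid divide-by-zero
    return num / den
```

Accurate CSMs make this combination (and the forward model built from it) consistent with the true image, which is why learning them jointly with the reconstruction helps when ACS data are scarce.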
-
Recently, Deep Image Prior (DIP) has emerged as an effective unsupervised one-shot learner, delivering competitive results across various image recovery problems. This method only requires the noisy measurements and a forward operator, relying solely on deep networks initialized with random noise to learn and restore the structure of the data. However, DIP is notorious for its vulnerability to overfitting due to the overparameterization of the network. Building upon insights into the impact of the DIP input and drawing inspiration from the gradual denoising process in cutting-edge diffusion models, we introduce Autoencoding Sequential DIP (aSeqDIP) for image reconstruction. This method progressively denoises and reconstructs the image through a sequential optimization of network weights. This is achieved using an input-adaptive DIP objective, combined with an autoencoding regularization term. Compared to diffusion models, our method does not require training data and outperforms other DIP-based methods in mitigating noise overfitting while maintaining a similar number of parameter updates as Vanilla DIP. Through extensive experiments, we validate the effectiveness of our method in various image reconstruction tasks, such as MRI and CT reconstruction, as well as in image restoration tasks like image denoising, inpainting, and non-linear deblurring.
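The input-adaptive, sequential idea can be illustrated with a deliberately tiny stand-in for the network: a diagonal map f(theta, x) = theta * x and an identity forward operator. This is a toy sketch of the alternation (inner weight optimization, then feeding the output back as the next input), not the paper's CNN-based method; all names and constants are assumptions.

```python
import numpy as np

def aseqdip_toy(y, x0, num_stages=8, inner_iters=200, mu=0.5, lr=0.05):
    """Toy sequential DIP.  Each stage minimizes, over theta,

        ||f(theta, x) - y||^2 + mu * ||f(theta, x) - x||^2

    with f(theta, x) = theta * x (elementwise) and identity forward
    operator, by plain gradient descent.  The second term plays the
    role of the autoencoding regularizer; the stage output becomes
    the next stage's input (input-adaptive update).
    """
    x = x0.astype(float).copy()
    for _ in range(num_stages):
        theta = np.ones_like(x)          # fresh weights each stage (toy choice)
        for _ in range(inner_iters):
            r = theta * x
            grad = 2 * x * (r - y) + 2 * mu * x * (r - x)
            theta -= lr * grad
        x = theta * x                    # output feeds the next stage
    return x
```

At the fixed point each stage moves x a fraction of the way toward the measurements, mimicking the gradual, stage-by-stage refinement that the autoencoding term is meant to stabilize.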
-
Purpose: To develop a strategy for training a physics‐guided MRI reconstruction neural network without a database of fully sampled data sets. Methods: Self‐supervised learning via data undersampling (SSDU) for physics‐guided deep learning reconstruction partitions the available measurements into two disjoint sets, one of which is used in the data consistency (DC) units in the unrolled network while the other is used to define the loss for training. The proposed training without fully sampled data is compared with fully supervised training with ground‐truth data, as well as with conventional compressed‐sensing and parallel imaging methods, using the publicly available fastMRI knee database. The same physics‐guided neural network is used for both the proposed SSDU and supervised training. The SSDU training is also applied to prospectively two‐fold accelerated high‐resolution brain data sets at different acceleration rates, and compared with parallel imaging. Results: Results on five different knee sequences at an acceleration rate of 4 show that the proposed self‐supervised approach performs closely to supervised learning, while significantly outperforming conventional compressed‐sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. The results on prospectively subsampled brain data sets, for which supervised learning cannot be used owing to the lack of a ground‐truth reference, show that the proposed self‐supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at the acquisition acceleration. Conclusion: The proposed SSDU approach allows training of physics‐guided deep learning MRI reconstruction without fully sampled data, while achieving results comparable with supervised deep learning MRI trained on fully sampled data.
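The training loss on the held-out measurement set can be written as a normalized L1-L2 mix, in the spirit of the SSDU paper; the exact weighting and normalization used there are assumptions in this sketch.

```python
import numpy as np

def ssdu_loss(y_loss, y_pred, eps=1e-12):
    """Normalized L1-L2 loss between the held-out k-space samples
    y_loss (the loss set) and the network prediction y_pred mapped
    to the same k-space locations:

        ||d||_2 / ||y||_2  +  ||d||_1 / ||y||_1,   d = y_loss - y_pred
    """
    diff = y_loss - y_pred
    l2 = np.linalg.norm(diff) / (np.linalg.norm(y_loss) + eps)
    l1 = np.sum(np.abs(diff)) / (np.sum(np.abs(y_loss)) + eps)
    return l2 + l1
```

Because the loss is evaluated only on samples the DC units never see, minimizing it does not simply reward copying the input, which is what makes the self-supervised split work.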
-
We consider an MRI reconstruction problem with input of k-space data at a very low undersampling rate. This can practically benefit patients through reduced MRI scan time, but it is also challenging since the quality of reconstruction may be compromised. Currently, deep learning based methods dominate MRI reconstruction over traditional approaches such as compressed sensing, but they rarely show satisfactory performance in the case of low undersampled k-space data. One explanation is that these methods treat channel-wise features equally, which degrades the representation ability of the neural network. To solve this problem, we propose a new model called MRI Cascaded Channel-wise Attention Network (MICCAN), highlighted by three components: (i) a variant of U-net with a Channel-wise Attention (UCA) module, (ii) a long skip connection, and (iii) a combined loss. Our model is able to attend to salient information by filtering out irrelevant features and also to concentrate on high-frequency information by enforcing low-frequency information to bypass to the final output. We conduct both quantitative evaluation and qualitative analysis of our method on a cardiac dataset. The experiments show that our method achieves very promising results in terms of three common metrics on MRI reconstruction with low undersampled k-space data. Code is publicly available.
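The channel-wise attention at the heart of a UCA-style module follows the familiar squeeze-and-excitation pattern, which can be sketched as follows; the weight shapes, the reduction ratio, and the function name are illustrative assumptions rather than MICCAN's exact layer definitions.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation style channel attention: global average
    pooling over the spatial dims, a two-layer bottleneck, then a
    sigmoid gate that rescales each channel, letting the network
    emphasize informative channels and suppress irrelevant ones.

    features : (C, H, W) feature map
    w1       : (C // r, C) reduction weights (r = reduction ratio)
    w2       : (C, C // r) expansion weights
    """
    squeeze = features.mean(axis=(1, 2))            # (C,) global pooling
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid per channel
    return features * gate[:, None, None]           # rescale channels
```

Gating per channel is what lets a network of this kind weight feature maps unequally, addressing the "all channels treated equally" limitation the abstract points out.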

