Self-supervised learning has shown great promise because of its ability to train deep learning (DL) magnetic resonance imaging (MRI) reconstruction methods without fully sampled data. Current self-supervised learning methods for physics-guided reconstruction networks split the acquired undersampled data into two disjoint sets, where one is used for data consistency (DC) in the unrolled network while the other is used to define the training loss. In this study, we propose an improved self-supervised learning strategy that uses the acquired data more efficiently to train a physics-guided reconstruction network without a database of fully sampled data. The proposed multi-mask self-supervised learning via data undersampling (SSDU) applies a holdout masking operation to the acquired measurements to split them into multiple pairs of disjoint sets for each training sample, using one set in each pair for the DC units and the other to define the loss, thereby making more efficient use of the undersampled data. Multi-mask SSDU is applied to fully sampled 3D knee and prospectively undersampled 3D brain MRI datasets at various acceleration rates and patterns, and compared with the parallel imaging method CG-SENSE and single-mask SSDU DL-MRI, as well as with supervised DL-MRI when fully sampled data are available. Results on knee MRI show that the proposed multi-mask SSDU outperforms SSDU and performs as well as supervised DL-MRI; a clinical reader study further ranks multi-mask SSDU higher than supervised DL-MRI in terms of signal-to-noise ratio and aliasing artifacts. Results on brain MRI show that multi-mask SSDU achieves better reconstruction quality than SSDU. The reader study demonstrates that multi-mask SSDU at R = 8 significantly improves reconstruction compared with single-mask SSDU at R = 8, as well as with CG-SENSE at R = 2.
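To make the holdout masking concrete, here is a minimal NumPy sketch of splitting one undersampled acquisition into several disjoint (DC, loss) mask pairs. The function name, the number of masks, and the 40% loss fraction are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def multi_mask_split(acquired_mask, num_masks=4, loss_fraction=0.4, seed=0):
    """Split an undersampling mask into `num_masks` pairs of disjoint sets.

    For each pair, one set (Theta) is kept for data consistency in the
    unrolled network and the other (Lambda) is held out to define the loss.
    """
    rng = np.random.default_rng(seed)
    acquired_idx = np.flatnonzero(acquired_mask)           # acquired k-space locations
    n_loss = int(loss_fraction * acquired_idx.size)

    pairs = []
    for _ in range(num_masks):
        loss_idx = rng.choice(acquired_idx, size=n_loss, replace=False)
        loss_mask = np.zeros_like(acquired_mask)
        loss_mask.flat[loss_idx] = 1                        # Lambda: held out for the loss
        dc_mask = acquired_mask * (1 - loss_mask)           # Theta: used in the DC units
        pairs.append((dc_mask, loss_mask))
    return pairs

# toy example: a random 2D undersampling pattern
mask = (np.random.default_rng(1).random((128, 128)) < 0.25).astype(np.int8)
for dc, loss in multi_mask_split(mask):
    # each pair is disjoint and together covers exactly the acquired samples
    assert np.all(dc * loss == 0) and np.all(dc + loss == mask)
```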
                            Self‐supervised learning of physics‐guided reconstruction neural networks without fully sampled reference data
                        
                    
    
Purpose: To develop a strategy for training a physics-guided MRI reconstruction neural network without a database of fully sampled data sets.
Methods: Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions the available measurements into two disjoint sets, one of which is used in the data consistency (DC) units in the unrolled network while the other is used to define the loss for training. The proposed training without fully sampled data is compared with fully supervised training with ground-truth data, as well as with conventional compressed-sensing and parallel imaging methods, using the publicly available fastMRI knee database. The same physics-guided neural network is used for both the proposed SSDU and supervised training. SSDU training is also applied to prospectively two-fold accelerated high-resolution brain data sets at different acceleration rates and compared with parallel imaging.
Results: Results on five different knee sequences at an acceleration rate of 4 show that the proposed self-supervised approach performs comparably to supervised learning, while significantly outperforming conventional compressed-sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. Results on prospectively subsampled brain data sets, for which supervised learning cannot be used due to the lack of a ground-truth reference, show that the proposed self-supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at the acquisition acceleration rate.
Conclusion: The proposed SSDU approach allows training of physics-guided deep learning MRI reconstruction without fully sampled data, while achieving results comparable to supervised deep learning MRI reconstruction trained on fully sampled data.
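As an illustration of this two-set training signal, the following minimal sketch computes a self-supervised loss only on the held-out k-space subset. The function names are hypothetical, a zero-filled inverse FFT stands in for the unrolled network, and a plain squared-error loss stands in for whatever loss the network is actually trained with:

```python
import numpy as np

def ssdu_loss(kspace, acquired_mask, reconstruct, loss_fraction=0.4, seed=0):
    """Reconstruct from one disjoint subset of the acquired k-space (used for
    data consistency) and score the result against the held-out subset."""
    rng = np.random.default_rng(seed)
    acquired_idx = np.flatnonzero(acquired_mask)
    n_loss = int(loss_fraction * acquired_idx.size)
    loss_idx = rng.choice(acquired_idx, size=n_loss, replace=False)

    loss_mask = np.zeros_like(acquired_mask)
    loss_mask.flat[loss_idx] = 1                        # Lambda: held out for the loss
    dc_mask = acquired_mask * (1 - loss_mask)           # Theta: fed to the network / DC units

    image = reconstruct(kspace * dc_mask, dc_mask)      # the network only ever sees Theta
    predicted = np.fft.fft2(image, norm="ortho")        # map the output back to k-space
    residual = (predicted - kspace) * loss_mask         # error measured only on Lambda
    return np.sum(np.abs(residual) ** 2) / max(np.sum(loss_mask), 1)

# toy usage: a zero-filled inverse FFT stands in for the unrolled network
rng = np.random.default_rng(1)
image_true = rng.standard_normal((64, 64))
full_kspace = np.fft.fft2(image_true, norm="ortho")
mask = (rng.random((64, 64)) < 0.3).astype(float)
zero_filled = lambda ksp, m: np.fft.ifft2(ksp, norm="ortho")
print(ssdu_loss(full_kspace * mask, mask, zero_filled))
```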
- Award ID(s): 1651825
- PAR ID: 10454136
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Magnetic Resonance in Medicine
- Volume: 84
- Issue: 6
- ISSN: 0740-3194
- Page Range / eLocation ID: p. 3172-3191
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Purpose: To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. Methods: We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We evaluate two DL-based reconstruction methods: Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL). We evaluate the impact of denoising on the performance of these DL-based methods in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. Results: We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE) and higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data, and 24, 14, and 4 dB for fat-suppressed knee data. Conclusion: We showed that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
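  As a rough illustration of scoring a denoiser without clean references, the sketch below uses plain Monte-Carlo SURE for i.i.d. Gaussian noise with a known standard deviation. This is a simplified stand-in for the GSURE criterion used in the work above, and the toy soft-threshold denoiser and all names are illustrative:

  ```python
  import numpy as np

  def mc_sure(y, denoise, sigma, eps=1e-3, seed=0):
      """Monte-Carlo SURE: unbiased estimate of the MSE of denoise(y) against the
      unknown clean signal, for i.i.d. Gaussian noise of standard deviation sigma."""
      rng = np.random.default_rng(seed)
      n = y.size
      fy = denoise(y)
      b = rng.standard_normal(y.shape)                       # random probe for the divergence
      div = np.sum(b * (denoise(y + eps * b) - fy)) / eps    # Monte-Carlo divergence estimate
      return (np.sum((fy - y) ** 2) - n * sigma**2 + 2 * sigma**2 * div) / n

  # toy check: SURE of a soft-threshold denoiser tracks the true MSE on a sparse signal
  rng = np.random.default_rng(1)
  x = np.zeros(4096)
  x[:200] = rng.standard_normal(200) * 5.0
  sigma = 1.0
  y = x + sigma * rng.standard_normal(x.shape)
  soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - 1.5 * sigma, 0.0)
  print("SURE:", mc_sure(y, soft, sigma), " true MSE:", np.mean((soft(y) - x) ** 2))
  ```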
- The application of compressed sensing (CS)-enabled data reconstruction for accelerating magnetic resonance imaging (MRI) remains a challenging problem, because the information lost in k-space under the acceleration mask makes it difficult to reconstruct an image whose quality matches that of a fully sampled one. Multiple deep learning-based structures have been proposed for MRI reconstruction using CS, in both the k-space and image domains, and using unrolled optimization methods. However, these structures do not fully exploit the information available in both domains (k-space and image). Herein, we propose a deep learning-based attention hybrid variational network that performs learning in both the k-space and image domains. We evaluate our method on a well-known open-source MRI dataset (652 brain cases and 1172 knee cases) and a clinical MRI dataset of 243 patients diagnosed with strokes from our institution to demonstrate the performance of our network. Our model achieves an overall peak signal-to-noise ratio/structural similarity of 40.92 ± 0.29/0.9577 ± 0.0025 (fourfold) and 37.03 ± 0.25/0.9365 ± 0.0029 (eightfold) for the brain dataset, 31.09 ± 0.25/0.6901 ± 0.0094 (fourfold) and 29.49 ± 0.22/0.6197 ± 0.0106 (eightfold) for the knee dataset, and 36.32 ± 0.16/0.9199 ± 0.0029 (20-fold) and 33.70 ± 0.15/0.8882 ± 0.0035 (30-fold) for the stroke dataset. In addition to the quantitative evaluation, we conducted a blinded comparison of image quality across networks, performed by a subspecialty-trained radiologist. Overall, we demonstrate that our network achieves superior performance across multiple reconstruction tasks.
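  The attention hybrid architecture itself is not reproduced here, but a building block that k-space/image-domain unrolled networks generally share is a data-consistency step that re-inserts the acquired samples after each refinement. A generic sketch follows (illustrative names; hard replacement of acquired k-space entries is only one possible choice):

  ```python
  import numpy as np

  def data_consistency(image_estimate, kspace_acquired, mask):
      """Hard data consistency: keep the network's k-space prediction where nothing
      was acquired, and overwrite it with the measured samples where it was."""
      k_pred = np.fft.fft2(image_estimate, norm="ortho")
      k_dc = np.where(mask > 0, kspace_acquired, k_pred)    # acquired entries win
      return np.fft.ifft2(k_dc, norm="ortho")

  # toy usage: one DC step applied to a (trivial) all-zeros image estimate
  rng = np.random.default_rng(0)
  image = rng.standard_normal((64, 64))
  mask = (rng.random((64, 64)) < 0.25).astype(float)
  kspace = np.fft.fft2(image, norm="ortho") * mask
  refined = data_consistency(np.zeros_like(image), kspace, mask)
  # at the acquired locations, the k-space of `refined` matches the measurements
  assert np.allclose(np.fft.fft2(refined, norm="ortho") * mask, kspace)
  ```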
- Parallel magnetic resonance imaging (MRI) is a widely used technique that accelerates data collection by making use of the spatial encoding provided by multiple receiver coils. A key issue in parallel MRI is the estimation of the coil sensitivity maps (CSMs) used to reconstruct a single high-quality image. This paper addresses this issue by developing SS-JIRCS, a new self-supervised model-based deep-learning (DL) method for image reconstruction that is equipped with automated CSM calibration. Our deep network consists of three types of modules: data consistency, regularization, and CSM calibration. Unlike traditional supervised DL methods, these modules are trained directly on undersampled and noisy k-space data rather than on fully sampled high-quality ground truth. We present empirical results on simulated data that show the potential of the proposed method to achieve better performance than several baseline methods.
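  For context on what a data-consistency module enforces once CSMs are available, here is a minimal sketch of the multi-coil (SENSE-style) forward model and its adjoint. Shapes and names are illustrative and not the SS-JIRCS implementation:

  ```python
  import numpy as np

  def sense_forward(image, csms, mask):
      """Multi-coil forward model: weight the image by each coil sensitivity map,
      transform to k-space, and keep only the sampled locations."""
      coil_images = csms * image[None, ...]                             # (ncoil, ny, nx)
      return np.fft.fft2(coil_images, axes=(-2, -1), norm="ortho") * mask[None, ...]

  def sense_adjoint(kspace, csms, mask):
      """Adjoint: coil-by-coil inverse FFT, then combine with conjugate sensitivities."""
      coil_images = np.fft.ifft2(kspace * mask[None, ...], axes=(-2, -1), norm="ortho")
      return np.sum(np.conj(csms) * coil_images, axis=0)

  # toy shapes: 8 coils, 64x64 image, random sampling mask
  rng = np.random.default_rng(0)
  image = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
  csms = rng.standard_normal((8, 64, 64)) + 1j * rng.standard_normal((8, 64, 64))
  mask = (rng.random((64, 64)) < 0.3).astype(float)
  kspace = sense_forward(image, csms, mask)
  backprojected = sense_adjoint(kspace, csms, mask)   # E^H E x, the input to a CG/DC update
  ```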
- Purpose: To introduce a novel deep model-based architecture (DMBA), SPICER, that uses pairs of noisy and undersampled k-space measurements of the same object to jointly train a model for MRI reconstruction and automatic coil sensitivity estimation. Methods: SPICER consists of two modules that simultaneously reconstruct accurate MR images and estimate high-quality coil sensitivity maps (CSMs). The first module, the CSM estimation module, uses a convolutional neural network (CNN) to estimate CSMs from the raw measurements. The second module, the DMBA-based MRI reconstruction module, forms reconstructed images from the input measurements and the estimated CSMs using both the physical measurement model and a learned CNN prior. With the benefit of our self-supervised learning strategy, SPICER can be trained efficiently without any fully sampled reference data. Results: We validate SPICER on both open-access datasets and experimentally collected data, showing that it can achieve state-of-the-art performance in highly accelerated data acquisition settings (up to ). Our results also highlight the importance of the different modules of SPICER, including the DMBA, the CSM estimation, and the SPICER training loss, for the final performance of the method. Moreover, SPICER can estimate better CSMs than pre-estimation methods, especially when the ACS data are limited. Conclusion: Despite being trained on noisy undersampled data, SPICER can reconstruct high-quality images and CSMs in highly undersampled settings, outperforming other self-supervised learning methods and matching the performance of the well-known E2E-VarNet trained on fully sampled ground-truth data.
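  To illustrate the kind of pre-estimation baseline mentioned above, the sketch below derives CSMs from a central autocalibration (ACS) band by normalizing low-resolution coil images with their root-sum-of-squares combination. It assumes centered Cartesian k-space, and all names and sizes are illustrative:

  ```python
  import numpy as np

  def csm_from_acs(multicoil_kspace, acs_lines=24):
      """Crude coil-sensitivity pre-estimation: keep only a central ACS band of
      k-space, reconstruct low-resolution coil images, and normalize them by
      their root-sum-of-squares combination (assumes DC at the array center)."""
      ncoil, ny, nx = multicoil_kspace.shape
      acs = np.zeros_like(multicoil_kspace)
      lo, hi = ny // 2 - acs_lines // 2, ny // 2 + acs_lines // 2
      acs[:, lo:hi, :] = multicoil_kspace[:, lo:hi, :]           # central (ACS) band only
      low_res = np.fft.ifft2(np.fft.ifftshift(acs, axes=(-2, -1)), norm="ortho")
      rss = np.sqrt(np.sum(np.abs(low_res) ** 2, axis=0)) + 1e-8
      return low_res / rss                                        # (ncoil, ny, nx)

  # toy usage on random data, just to show shapes
  rng = np.random.default_rng(0)
  kspace = rng.standard_normal((8, 128, 128)) + 1j * rng.standard_normal((8, 128, 128))
  csms = csm_from_acs(kspace)
  print(csms.shape)   # (8, 128, 128)
  ```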