Search for: All records

Award ID contains: 2239687


  1. Abstract. Purpose: To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. Methods: We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We evaluate two DL-based reconstruction methods, Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL), and assess the impact of denoising on their performance in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. Results: We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE) and higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data, and 24, 14, and 4 dB for fat-suppressed knee data. Conclusion: We showed that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans. (See the denoising-loss sketch after this entry.)
    Free, publicly-accessible full text available June 2, 2026
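A minimal sketch of the kind of self-supervised denoising objective described above, assuming i.i.d. real-valued Gaussian noise with known standard deviation sigma and a PyTorch denoiser module. The Generalized SURE formulation used in the paper additionally handles correlated multi-coil k-space noise, so treat this as an illustration rather than the authors' implementation; the function name mc_sure_loss is hypothetical.

```python
import torch

def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    # Hypothetical helper (not the paper's code): Monte-Carlo SURE loss for
    # training a denoiser on noisy images y = x + n, with n ~ N(0, sigma^2 I).
    # GSURE generalizes this idea to the correlated noise of multi-coil k-space.
    f_y = denoiser(y)
    b = torch.randn_like(y)                                  # random probe vector
    div = (b * (denoiser(y + eps * b) - f_y)).sum() / eps    # divergence estimate
    n = y.numel()
    # residual term plus divergence penalty; the constant -sigma^2 term is dropped
    return ((f_y - y) ** 2).sum() / n + 2.0 * sigma ** 2 * div / n
```

The random-probe divergence estimate avoids forming the network Jacobian explicitly, which is what makes this family of losses practical for large images.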
  2. Abstract. Purpose: The aim of this work is to develop a method to solve the ill-posed inverse problem of accelerated image reconstruction while correcting forward-model imperfections in the context of subject motion during MRI examinations. Methods: The proposed solution uses a Bayesian framework based on deep generative diffusion models to jointly estimate a motion-free image and rigid motion estimates from subsampled and motion-corrupt two-dimensional (2D) k-space data. Results: We demonstrate the ability to reconstruct motion-free images from accelerated 2D Cartesian and non-Cartesian scans without any external reference signal. We show that our method improves over existing correction techniques on both simulated and prospectively accelerated data. Conclusion: We propose a flexible framework for retrospective motion correction of accelerated MRI based on deep generative diffusion models, with potential application to other forward-model corruptions. (See the forward-model sketch after this entry.)
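One ingredient of a rigid-motion forward model can be sketched concretely: an in-plane translation of the object appears as a linear phase ramp across its 2D k-space. The snippet below illustrates only that standard Fourier identity; the shot-wise rotations and the joint diffusion-based posterior sampling described in the abstract are not shown, and the function name is hypothetical.

```python
import torch

def translate_in_kspace(kspace, dx, dy):
    # Illustrative piece of a rigid-motion forward model: translating the image
    # by (dx, dy) pixels multiplies its 2D k-space by a linear phase ramp.
    ny, nx = kspace.shape
    ky = torch.fft.fftfreq(ny).reshape(-1, 1)   # cycles per pixel along y
    kx = torch.fft.fftfreq(nx).reshape(1, -1)   # cycles per pixel along x
    phase = torch.exp(-2j * torch.pi * (kx * dx + ky * dy))
    return kspace * phase
```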
  3. Abstract. MRI acquisition and reconstruction research has transformed into a computation-driven field. As methods become more sophisticated, compute-heavy, and data-hungry, efforts to reproduce them become more difficult. While the computational MRI research community has made great leaps toward reproducible computational science, there are few tailored guidelines or standards for users to follow. In this review article, we develop a cookbook to facilitate reproducible research for MRI acquisition and reconstruction. Like any good cookbook, we list several recipes, each providing a basic standard on how to make computational MRI research reproducible. And like cooking, we show example flavours where reproducibility may fail due to under-specification. We structure the article so that the cookbook itself serves as an example of reproducible research, providing sequence and reconstruction definitions as well as data to reproduce the experimental results in the figures. We also propose a community-driven effort to compile an evolving list of best practices for making computational MRI research reproducible. (See the sketch after this entry.)
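Not one of the article's recipes, just a generic illustration of the kind of under-specification it warns about: if random seeds and package versions are never recorded, a computational experiment cannot be re-run exactly. A minimal Python sketch:

```python
import platform
import random

import numpy as np
import torch

def set_reproducible(seed: int = 0):
    # Generic illustration: fix random seeds and record the software
    # environment so a training or reconstruction run can be repeated exactly.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(True, warn_only=True)
    print(f"python={platform.python_version()} numpy={np.__version__} "
          f"torch={torch.__version__} seed={seed}")
```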
  4. Free, publicly-accessible full text available December 1, 2026
  5. Purpose: Magnetic resonance imaging (MRI) enables non-invasive assessment of brain abnormalities during early life development. Permanent magnet scanners operating in the neonatal intensive care unit (NICU) facilitate MRI of sick infants, but have long scan times due to lower signal-to-noise ratios (SNR) and limited receive coils. This work accelerates in-NICU MRI with diffusion probabilistic generative models by developing a training pipeline that accounts for these challenges. Methods: We establish a novel training dataset of clinical, 1 Tesla neonatal MR images in collaboration with Aspect Imaging and Sha'are Zedek Medical Center. We propose a pipeline to handle the low quantity and SNR of our real-world dataset by (1) modifying existing network architectures to support varying resolutions; (2) training a single model on all data with learned class embedding vectors; (3) applying self-supervised denoising before training; and (4) reconstructing by averaging posterior samples. Retrospective under-sampling experiments, accounting for signal decay, evaluated each item of our proposed methodology. A clinical reader study with practicing pediatric neuroradiologists evaluated the proposed images reconstructed from under-sampled data. Results: Combining all data, denoising pre-training, and averaging posterior samples yields quantitative improvements in reconstruction. The generative model decouples the learned prior from the measurement model and functions at two acceleration rates without re-training. The reader study suggests that the proposed images reconstructed from under-sampled data are adequate for clinical use. Conclusion: Diffusion probabilistic generative models applied with the proposed pipeline to handle challenging real-world datasets could reduce the scan time of in-NICU neonatal MRI. (See the posterior-averaging sketch after this entry.)
    Free, publicly-accessible full text available June 17, 2026
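Step (4) above, reconstructing by averaging posterior samples, can be sketched in a few lines. The `sampler` callable is a hypothetical stand-in for drawing one reconstruction from the trained diffusion posterior given the under-sampled measurements; it is not part of the paper's code.

```python
import torch

def average_posterior_samples(sampler, measurements, n_samples: int = 8):
    # Hypothetical wrapper: `sampler(measurements)` is assumed to draw one image
    # from the trained diffusion posterior; averaging several draws reduces
    # sample-to-sample variance on low-SNR data.
    samples = torch.stack([sampler(measurements) for _ in range(n_samples)])
    return samples.mean(dim=0)
```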
  6. Free, publicly-accessible full text available May 10, 2026
  7. Free, publicly-accessible full text available May 10, 2026
  8. We provide a framework for solving inverse problems with diffusion models learned from linearly corrupted data. First, we extend the Ambient Diffusion framework to enable training directly from measurements corrupted in the Fourier domain, and we train diffusion models for MRI with access only to Fourier-subsampled multi-coil measurements at acceleration factors R = 2, 4, 6, 8. Second, we propose Ambient Diffusion Posterior Sampling (A-DPS), a reconstruction algorithm that leverages generative models pre-trained on one type of corruption (e.g., image inpainting) to perform posterior sampling on measurements from a different forward process (e.g., image blurring). For MRI reconstruction in high acceleration regimes, we observe that A-DPS models trained on subsampled data are better suited to solving inverse problems than models trained on fully sampled data. We also test the efficacy of A-DPS on natural image datasets (CelebA, FFHQ, and AFHQ) and show that A-DPS can sometimes outperform models trained on clean data for several image restoration tasks in both speed and performance. (See the undersampling-mask sketch after this entry.)
    Free, publicly-accessible full text available April 24, 2026
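Training from Fourier-subsampled measurements presupposes an undersampling operator. Below is a minimal sketch of one common choice, a random Cartesian column mask at acceleration R with a fully sampled center; the paper's exact sampling distributions are an assumption here, and the function name is illustrative.

```python
import numpy as np

def cartesian_mask(ny, nx, R=4, center_frac=0.08, seed=0):
    # Illustrative retrospective undersampling: keep a fully sampled center
    # block of phase-encode columns plus random columns, so roughly 1/R of
    # the columns are retained. The paper's exact patterns may differ.
    rng = np.random.default_rng(seed)
    cols = np.zeros(nx, dtype=bool)
    n_center = int(center_frac * nx)
    c0 = (nx - n_center) // 2
    cols[c0:c0 + n_center] = True                      # calibration region
    n_random = max(nx // R - n_center, 0)
    cols[rng.choice(np.flatnonzero(~cols), size=n_random, replace=False)] = True
    return np.broadcast_to(cols, (ny, nx))             # same columns on every row
```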
  9. Motivation: We explore the “Implicit Data Crime” of datasets whose subsampled k-space is filled in using parallel imaging. These datasets are treated as fully sampled, but their points derive from (1) prospective sampling and (2) reconstruction of un-sampled points, creating artificial data correlations at low SNR or high acceleration. Goal(s): How will downstream tasks, including reconstruction algorithm comparison and optimal trajectory design, be biased by the effects of parallel imaging on a prospectively undersampled dataset? Approach: We compare reconstruction performance using data that are fully sampled against data that are completed using the SENSE algorithm. Results: Using parallel-imaging-filled k-space results in a biased downstream perception of algorithm performance. Impact: This study demonstrates evidence of overly optimistic bias resulting from the use of k-space filled in with parallel imaging as ground-truth data. Researchers should be aware of this possibility and carefully examine the computational pipeline behind the datasets they use. (See the evaluation sketch after this entry.)
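The bias can be probed with a simple check: score the same reconstruction against a truly fully sampled reference and against a parallel-imaging-completed one, and compare. The helper below and the two reference arrays named in the comment are hypothetical; only the metric itself is standard.

```python
import numpy as np

def psnr(reference, reconstruction):
    # Peak signal-to-noise ratio of a reconstruction against a chosen reference.
    mse = np.mean(np.abs(reference - reconstruction) ** 2)
    return 10.0 * np.log10(np.max(np.abs(reference)) ** 2 / mse)

# Hypothetical check of the bias described above: score the same reconstruction
# against a truly fully sampled reference and against a SENSE-completed one;
# a systematically higher score in the second case signals optimistic bias.
#   psnr(fully_sampled_ref, recon)  vs  psnr(sense_filled_ref, recon)
```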