-
Purpose: To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. Methods: We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We evaluate two DL-based reconstruction methods: Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL). We evaluate the impact of denoising on the performance of these DL-based methods in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. Results: We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE), higher structural similarity index measure (SSIM), and higher peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data, and 24, 14, and 4 dB for fat-suppressed knee data. Conclusion: We showed that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
Free, publicly-accessible full text available June 2, 2026
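The abstract does not spell out the GSURE criterion it relies on, so as a rough illustration of the idea, here is a minimal sketch of the simpler (non-generalized) Monte Carlo SURE loss for i.i.d. Gaussian noise, which lets a denoiser be trained from noisy images alone. All function and variable names are illustrative assumptions, not the paper's code.

```python
import torch

def sure_loss(denoiser, y, sigma, probe_eps=1e-3):
    """Monte Carlo SURE for a denoiser f applied to y = x + n, with n ~ N(0, sigma^2 I).

    SURE(f, y) = ||f(y) - y||^2 - n*sigma^2 + 2*sigma^2 * div_y f(y),
    an unbiased estimate of the MSE to the unseen clean image x,
    so no noise-free reference is needed.
    """
    x_hat = denoiser(y)
    fidelity = torch.sum((x_hat - y) ** 2)
    b = torch.randn_like(y)                       # random probe vector
    jvp = denoiser(y + probe_eps * b) - x_hat     # finite-difference Jacobian-vector product
    div = torch.sum(b * jvp) / probe_eps          # Monte Carlo divergence estimate
    n = y.numel()
    return (fidelity - n * sigma**2 + 2 * sigma**2 * div) / n
```

In the pipeline described above, a denoiser trained this way would be applied to the noisy training data, and the denoised images would then be used to train the DPM or MoDL reconstruction networks.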
-
We provide a framework for solving inverse problems with diffusion models learned from linearly corrupted data. First, we extend the Ambient Diffusion framework to enable training directly from measurements corrupted in the Fourier domain, and we train diffusion models for MRI with access only to Fourier-subsampled multi-coil measurements at acceleration factors R = 2, 4, 6, 8. Second, we propose Ambient Diffusion Posterior Sampling (A-DPS), a reconstruction algorithm that leverages generative models pre-trained on one type of corruption (e.g., image inpainting) to perform posterior sampling on measurements from a different forward process (e.g., image blurring). For MRI reconstruction in high-acceleration regimes, we observe that A-DPS models trained on subsampled data are better suited to solving inverse problems than models trained on fully sampled data. We also test the efficacy of A-DPS on natural image datasets (CelebA, FFHQ, and AFHQ) and show that A-DPS can sometimes outperform models trained on clean data on several image restoration tasks in both speed and performance.
Free, publicly-accessible full text available April 24, 2026
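To give a flavor of what posterior sampling with a pre-trained diffusion prior looks like, below is a minimal sketch of a DPS-style guided reverse loop; in the A-DPS setting the prior may itself have been trained on corrupted (e.g., subsampled) data, while the operator A describes the inference-time measurements. The function names, noise-schedule interface, and update rule are assumed simplifications, not the authors' implementation.

```python
import torch

def dps_sample(score_model, A, y, x_init, sigmas, guidance_scale=1.0):
    """Simplified DPS-style sampler.

    score_model(x, sigma) -> predicted noise in x at noise level sigma
    A(x)                  -> forward operator for the observed measurements y
                             (e.g. a masked multi-coil Fourier transform for MRI)
    sigmas                -> decreasing noise schedule ending near zero
    """
    x = x_init.clone()
    for i in range(len(sigmas) - 1):
        sigma, sigma_next = sigmas[i], sigmas[i + 1]
        x = x.detach().requires_grad_(True)
        eps = score_model(x, sigma)
        x0_hat = x - sigma * eps                     # Tweedie-style estimate of the clean image
        residual = torch.sum((A(x0_hat) - y).abs() ** 2)
        grad = torch.autograd.grad(residual, x)[0]   # data-consistency gradient
        with torch.no_grad():
            x = x0_hat + sigma_next * eps            # deterministic step to the next noise level
            x = x - guidance_scale * grad            # pull toward measurement consistency
    return x.detach()
```

A call would look like dps_sample(model, A, y, x_init=sigmas[0] * torch.randn(image_shape), sigmas=torch.linspace(80.0, 0.01, 1000)), with the operator A and schedule chosen to match the measurement setup.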
-
Modern Datalog engines (e.g., LogicBlox, Soufflé, ddlog) enable their users to write declarative queries which compute recursive deductions over extensional facts, leaving high-performance operationalization (query planning, semi-naïve evaluation, and parallelization) to the engine. Such engines form the backbone of modern high-throughput applications in static analysis, network monitoring, and social-media mining. In this paper, we present a methodology for implementing a modern in-memory Datalog engine on data center GPUs, allowing us to achieve significant (up to 45×) gains compared to Soufflé (a modern CPU-based engine) on context-sensitive points-to analysis of PostgreSQL. We present GPUlog, a Datalog engine backend that implements iterated relational algebra kernels over a novel range-indexed data structure we call the hash-indexed sorted array (HISA). HISA combines the algorithmic benefits of incremental range-indexed relations with the raw computation throughput of operations over dense data structures. Our experiments show that GPUlog is significantly faster than CPU-based Datalog engines while achieving a favorable memory footprint compared to contemporary GPU-based joins.
Free, publicly-accessible full text available March 30, 2026
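To give a flavor of the hash-indexed sorted array idea, here is a minimal CPU-only Python sketch (not GPUlog's CUDA implementation): tuples live in one dense, lexicographically sorted array, and a hash index maps each join-key value to the contiguous range of that array holding its tuples, so a hash join can stream dense slices. All names and representation details are illustrative assumptions.

```python
# Hypothetical CPU sketch of a HISA-like structure for binary relations r(a, b).
class HISA:
    def __init__(self, tuples):
        # Dense sorted storage: contiguous, cache-friendly, easy to scan in bulk.
        self.data = sorted(set(tuples))
        # Hash index: join-key (first column) -> [start, end) range in `data`.
        self.index = {}
        for i, t in enumerate(self.data):
            key = t[0]
            if key not in self.index:
                self.index[key] = [i, i + 1]
            else:
                self.index[key][1] = i + 1

    def range(self, key):
        """All tuples whose first column equals `key`, as one contiguous slice."""
        start, end = self.index.get(key, (0, 0))
        return self.data[start:end]

def hash_join(r, s):
    """Join r(a, b) with s(a, c) on the first column, producing (a, b, c) tuples."""
    out = []
    for a, b in r.data:
        for _, c in s.range(a):
            out.append((a, b, c))
    return out
```

The dense sorted storage is what makes such a layout amenable to bulk, throughput-oriented GPU kernels, while the hash index keeps per-key range lookups constant time for the iterated joins of semi-naïve evaluation.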