Reconstructing images from multi-view projections is a crucial task in both the computer vision and medical imaging communities, and dynamic positron emission tomography (PET) is no exception. Unfortunately, image quality is inevitably degraded by the limited photon counts and the trade-off between temporal and spatial resolution. In this paper, we develop a novel tensor-based nonlocal low-rank framework for dynamic PET reconstruction. Spatial structures are effectively enhanced not only by nonlocal and sparse features, but also by tensor-formed low-rank approximations in the temporal domain. Moreover, a total-variation term provides complementary regularization for denoising. These regularizations are efficiently combined into a Poisson PET model and jointly solved by distributed optimization. The experiments presented in this paper validate the excellent performance of the proposed method on dynamic PET.
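The temporal low-rank regularization described above can be illustrated with a truncated SVD applied to a matrix whose columns are vectorized patches from successive time frames. This is a toy sketch, not the paper's tensor algorithm; the patch size, frame count, and rank are invented for illustration:

```python
import numpy as np

def lowrank_approx(patch_stack, rank):
    """Truncated-SVD low-rank approximation of a matrix whose columns
    are vectorized patches from successive time frames."""
    U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
    s[rank:] = 0.0  # keep only the leading singular values
    return (U * s) @ Vt

# Toy example: 5 nearly identical temporal patches plus noise.
# The temporal redundancy makes the stack approximately rank-1,
# so the truncation suppresses the noise.
rng = np.random.default_rng(0)
base = rng.standard_normal(64)  # one vectorized 8x8 patch
stack = np.outer(base, np.ones(5)) + 0.01 * rng.standard_normal((64, 5))
approx = lowrank_approx(stack, rank=1)
```

Because the clean signal is exactly rank-1 here, the rank-1 truncation recovers the stack up to the (small) noise level, which is the intuition behind using low-rank penalties on temporally redundant PET frames.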
Dynamic low-count PET image reconstruction using spatio-temporal primal dual network
Objective. Dynamic positron emission tomography (PET) imaging, which can provide information on dynamic changes in physiological metabolism, is now widely used in clinical diagnosis and cancer treatment. However, reconstruction from dynamic data is extremely challenging due to the limited counts received in individual frames, especially in ultra-short frames. Recently, unrolled model-based deep learning methods have shown inspiring results for low-count PET image reconstruction with good interpretability. Nevertheless, existing model-based deep learning methods mainly focus on spatial correlations while ignoring the temporal domain. Approach. In this paper, inspired by the learned primal dual (LPD) algorithm, we propose the spatio-temporal primal dual network (STPDnet) for dynamic low-count PET image reconstruction. Both spatial and temporal correlations are encoded by 3D convolution operators. The physical projection of PET is embedded in the iterative learning process of the network, which provides physical constraints and enhances interpretability. Main results. Experiments on both simulated data and real rat scan data show that the proposed method achieves substantial noise reduction in both the temporal and spatial domains and outperforms maximum likelihood expectation maximization (MLEM), the spatio-temporal kernel method, LPD, and FBPnet. Significance. Experimental results show that STPDnet achieves better reconstruction performance in low-count situations, which makes the proposed method particularly suitable for whole-body dynamic imaging and parametric PET imaging, which require extremely short frames and usually suffer from high levels of noise.
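The classical baseline mentioned in the abstract, MLEM, shows how the physical projection operator enters each update; STPDnet replaces such fixed updates with learned primal/dual blocks while keeping the projector in the loop. A minimal NumPy sketch on an invented 3-bin, 2-pixel toy system (the matrix and counts are illustrative, not a real scanner geometry):

```python
import numpy as np

def mlem(A, y, n_iters=200):
    """Classical MLEM update for Poisson data y ~ Poisson(A x):
    x <- x / (A^T 1) * A^T (y / (A x))."""
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    x = np.ones(A.shape[1])                   # uniform initial estimate
    for _ in range(n_iters):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / estimated counts
        x = x / sens * (A.T @ ratio)
    return x

# Toy "scanner": 3 detector bins viewing 2 pixels, noiseless counts.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 2.0])
x_rec = mlem(A, A @ x_true)
```

With consistent noiseless data the iteration converges to the true activity; with real low-count data it converges to a noisy maximum-likelihood estimate, which is what motivates the learned spatio-temporal regularization above.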
- Award ID(s):
- 2152961
- PAR ID:
- 10484719
- Publisher / Repository:
- IPEM
- Date Published:
- Journal Name:
- Physics in Medicine and Biology
- ISSN:
- 0031-9155
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Deep learning based PET image reconstruction methods have achieved promising results recently. However, most of these methods follow a supervised learning paradigm, which relies heavily on the availability of high-quality training labels. In particular, the long scanning time and high radiation exposure associated with PET scans make obtaining these labels impractical. In this paper, we propose a dual-domain unsupervised PET image reconstruction method based on a learned descent algorithm, which reconstructs high-quality PET images from sinograms without the need for image labels. Specifically, we unroll the proximal gradient method with a learnable norm for the PET image reconstruction problem. The training is unsupervised, using a measurement-domain loss based on the deep image prior as well as an image-domain loss based on the rotation-equivariance property. The experimental results demonstrate the superior performance of the proposed method compared with maximum-likelihood expectation maximization (MLEM), total-variation regularized EM (EM-TV), and a deep image prior based method (DIP).
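The unrolled proximal gradient scheme can be illustrated with its classical, non-learned ancestor ISTA, where the learnable norm is fixed to the l1 norm. This is a toy sketch with an invented identity system matrix so the solution is known in closed form, not the paper's learned method:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 -- the handcrafted "norm" term that
    # a learned descent method would replace with a trainable operator.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, n_iters=50):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

# Toy problem where the answer is known in closed form (A = identity):
# the minimizer is simply soft_threshold(y, lam).
A = np.eye(3)
y = np.array([2.0, -0.05, 1.0])
x_rec = ista(A, y, lam=0.1)
```

Small entries below the threshold are driven exactly to zero, which is the sparsity-promoting behavior that the learned norm generalizes.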
-
We investigate a primal-dual (PD) method for the saddle point problem (SPP) that uses a linear approximation of the primal function instead of the standard proximal step, resulting in a linearized PD (LPD) method. For convex-strongly concave SPP, we observe that the LPD method has a suboptimal dependence on the Lipschitz constant of the primal function. To fix this issue, we combine features of Accelerated Gradient Descent with the LPD method, resulting in a single-loop Accelerated Linearized Primal-Dual (ALPD) method. The ALPD method achieves the optimal gradient complexity when the SPP has a semi-linear coupling function. We also present an inexact ALPD method for SPPs with a general nonlinear coupling function that maintains the optimal gradient evaluations of the primal parts and significantly improves the gradient evaluations of the coupling term compared to the ALPD method. We verify our findings with numerical experiments.
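For the bilinear-coupling special case, a primal-dual iteration of this family fits in a few lines. The sketch below is a standard (non-linearized, non-accelerated) primal-dual iteration in the style of Chambolle-Pock; the specific toy problem, step sizes, and iteration count are invented for illustration:

```python
import numpy as np

def primal_dual(K, b, tau=0.5, sigma=0.5, n_iters=300):
    """Primal-dual iteration for min_x max_y 0.5*||x||^2 + <Kx - b, y>,
    i.e. the minimum-norm solution of Kx = b.  Requires
    tau * sigma * ||K||^2 <= 1 for convergence."""
    x = np.zeros(K.shape[1])
    y = np.zeros(K.shape[0])
    x_bar = x.copy()
    for _ in range(n_iters):
        y = y + sigma * (K @ x_bar - b)            # ascent on the dual
        x_new = (x - tau * (K.T @ y)) / (1 + tau)  # prox of 0.5*||x||^2
        x_bar = 2 * x_new - x                      # extrapolation step
        x = x_new
    return x, y

# Saddle point: x* = (1, 0), y* = -1 for K = [[1, 0]], b = [1].
K = np.array([[1.0, 0.0]])
b = np.array([1.0])
x_star, y_star = primal_dual(K, b)
```

The extrapolated point x_bar is what stabilizes the alternating ascent/descent; the linearized and accelerated variants discussed above modify the primal step while keeping this overall structure.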
-
Reconstruction of high-resolution extreme dynamic range images from a small number of low dynamic range (LDR) images is crucial for many computer vision applications. Current high dynamic range (HDR) cameras based on CMOS image sensor technology rely on multi-exposure bracketing, which suffers from motion artifacts and signal-to-noise ratio (SNR) dip artifacts in extreme dynamic range scenes. Recently, single-photon cameras (SPCs) have been shown to achieve orders of magnitude higher dynamic range for passive imaging than conventional CMOS sensors. SPCs are becoming increasingly available commercially, even in some consumer devices. Unfortunately, current SPCs suffer from low spatial resolution. To overcome the limitations of CMOS and SPC sensors, we propose a learning-based CMOS-SPC fusion method to recover high-resolution extreme dynamic range images. We compare the performance of our method against various traditional and state-of-the-art baselines using both synthetic and experimental data. Our method outperforms these baselines, both in terms of visual quality and quantitative metrics.
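The extreme dynamic range of SPCs stems from the logarithmic response of a binary single-photon pixel, which saturates only asymptotically. A minimal sketch of the standard per-pixel flux estimate for such a pixel; the flux, exposure, and frame count below are invented for illustration:

```python
import math

def spc_flux_mle(k, n_frames, exposure):
    """Maximum-likelihood flux estimate for a binary single-photon pixel:
    each frame registers a detection with probability 1 - exp(-phi*exposure),
    so inverting this forward model compresses huge flux ranges
    logarithmically instead of clipping like a linear CMOS pixel."""
    p = k / n_frames
    assert p < 1.0, "fully saturated pixel: flux not identifiable"
    return -math.log(1.0 - p) / exposure

# Sanity check: invert the forward model at the expected detection count.
phi_true, exposure, n_frames = 50.0, 0.01, 1000
p = 1.0 - math.exp(-phi_true * exposure)  # per-frame detection probability
k = round(p * n_frames)                   # ~expected detected frames
phi_hat = spc_flux_mle(k, n_frames, exposure)
```

Because the estimate diverges only as the detection fraction approaches 1, a single fixed exposure covers a far wider brightness range than a linear sensor, which is the property the fusion method exploits.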
-
Deep neural networks have been shown to be effective adaptive beamformers for ultrasound imaging. However, when training with traditional Lp norm loss functions, model selection is difficult because lower loss values are not always associated with higher image quality. This ultimately limits the maximum achievable image quality with this approach and raises concerns about the optimization objective. In an effort to align the optimization objective with the image quality metrics of interest, we implemented a novel ultrasound-specific loss function based on the spatial lag-one coherence and signal-to-noise ratio of the delayed channel data in the short-time Fourier domain. We employed the R-Adam optimizer with lookahead and a cyclical learning rate to make the training more robust to initialization and local minima, leading to better model performance and more reliable convergence. With our custom loss function and optimization scheme, we achieved higher contrast-to-noise ratio, higher speckle signal-to-noise ratio, and more accurate contrast ratio reconstruction than with previous deep learning and delay-and-sum beamforming approaches.
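The spatial lag-one coherence underlying that loss can be sketched as the mean normalized correlation between adjacent channel signals. This is a simplified real-valued version (the abstract's loss operates on short-time Fourier transformed channel data, and the array shapes here are invented):

```python
import numpy as np

def lag_one_coherence(channels):
    """Mean normalized correlation between adjacent channels.
    channels: (n_channels, n_samples) delayed channel data for one pixel.
    Returns ~1 for coherent echoes, ~0 for incoherent noise."""
    coh = []
    for a, b in zip(channels[:-1], channels[1:]):
        coh.append(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))
    return float(np.mean(coh))

# Identical signal on every channel -> coherence 1;
# independent noise on every channel -> coherence near 0.
rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
coherent = np.tile(sig, (8, 1))
noisy = rng.standard_normal((8, 256))
```

Because signal from the imaging target is coherent across the aperture while clutter and noise are not, maximizing this statistic rewards exactly the behavior that image quality metrics measure, unlike a generic Lp loss.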