A body of studies has sought to obtain high-quality images from low-dose, noisy Computed Tomography (CT) scans in order to reduce radiation exposure. However, these studies are designed for population-level data and do not account for variation across CT devices and individuals, which limits the performance of current approaches, especially for ultra-low-dose CT imaging. Here, we propose PIMA-CT, which integrates a physical anthropomorphic phantom model with an unsupervised learning framework, using a novel deep learning technique called Cyclic Simulation and Denoising (CSD), to address these limitations. We first acquired paired low-dose and standard-dose CT scans of the phantom and then developed two generative neural networks: a noise simulator and a denoiser. The simulator extracts real low-dose noise and tissue features from two separate image spaces (i.e., low-dose phantom scans and standard-dose patient scans) into a unified feature space. Meanwhile, the denoiser provides feedback to the simulator on the quality of the generated noise. In this way, the simulator and denoiser interact cyclically to optimize network learning, enabling the denoiser to simultaneously remove noise and restore tissue features. We thoroughly evaluate our method on removing both real low-dose noise and Gaussian-simulated low-dose noise. The results show that CSD outperforms one of the state-of-the-art denoising algorithms without using any…
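As a rough illustration of the cyclic simulator-denoiser interaction described above, the toy sketch below stands in for both networks with simple numpy operations: additive Gaussian noise for the simulator and a moving-average filter for the denoiser. The function names, the noise model, and the feedback rule are all hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(clean, sigma):
    # Stand-in for the noise simulator: additive Gaussian noise
    # (the real simulator is a learned generative network)
    return clean + rng.normal(0.0, sigma, clean.shape)

def denoiser(noisy, k=5):
    # Stand-in for the denoiser: a simple moving-average filter
    kernel = np.ones(k) / k
    return np.convolve(noisy, kernel, mode="same")

clean = np.sin(np.linspace(0, 4 * np.pi, 256))  # toy 1-D "scan"
sigma = 0.3
for step in range(3):
    # cyclic interaction: simulate noise, denoise, measure residual,
    # then feed the residual back to adjust the simulator
    noisy = simulator(clean, sigma)
    denoised = denoiser(noisy)
    residual = float(np.mean((denoised - clean) ** 2))
    sigma *= 0.9  # toy feedback rule, not the paper's update
```

The point of the loop is only the interaction pattern: each cycle the denoiser's output quality informs how the simulator generates the next batch of noise.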
Generalizability test of a deep learning-based CT image denoising method
Deep learning (DL) has been increasingly explored for low-dose CT image denoising, and DL products have been submitted to the FDA for premarket clearance. While DL has the potential to improve image quality over the filtered back projection (FBP) method and to produce images quickly, the generalizability of DL approaches is a major concern, because the performance of a DL network can depend heavily on its training data. In this work, we take a residual encoder-decoder convolutional neural network (REDCNN)-based CT denoising method as an example. We investigate the effect of the scan parameters associated with the training data on the performance of this DL-based CT denoising method and identify the scan parameters that may significantly impact its generalizability. This abstract examines three such parameters: reconstruction kernel, dose level, and slice thickness. Our preliminary results indicate that the DL network may not generalize well across FBP reconstruction kernels but is insensitive to slice thickness for slice-wise denoising. The results also suggest that training with mixed dose levels improves denoising performance.
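The mixed-dose finding can be illustrated with a much simpler stand-in for a trained denoiser: a Wiener-style scalar shrinkage fitted to one noise (dose) level and then evaluated at a mismatched level. All names, noise variances, and the shrinkage model below are illustrative assumptions, not the REDCNN setup.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, 10_000)  # zero-mean "image" values, unit variance

def train_shrinkage(noise_var, signal_var=1.0):
    # Wiener-style scalar shrinkage fitted to one dose (noise) level;
    # stands in for a DL denoiser trained on a single-dose dataset
    return signal_var / (signal_var + noise_var)

def mse_at(a, noise_var):
    # evaluate the "denoiser" a at a (possibly mismatched) noise level
    noisy = signal + rng.normal(0.0, np.sqrt(noise_var), signal.shape)
    return float(np.mean((a * noisy - signal) ** 2))

a_low = train_shrinkage(0.1)  # fitted to low noise ("high dose") only
a_mix = train_shrinkage(0.5)  # fitted to an averaged/mixed noise level
# evaluating both at a high-noise ("low dose") level of 1.0 shows the
# mixed-level fit degrades less than the single-low-dose fit
```

The toy model captures only the qualitative effect: a denoiser matched to one dose level can lose accuracy when the test dose differs, while a mixed-level fit sits closer to the test condition.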
- Journal Name: The 6th International Conference on Image Formation in X-Ray Computed Tomography
- Sponsoring Org: National Science Foundation
More Like this
Yap, Pew-Thian (Ed.)
Diffusion weighted imaging (DWI) with multiple, high b-values is critical for extracting tissue microstructure measurements; however, high b-value DWI images contain high noise levels that can overwhelm the signal of interest and bias microstructural measurements. Here, we propose a simple denoising method that can be applied to any dataset, provided a low-noise, single-subject dataset is acquired using the same DWI sequence. The method uses a one-dimensional convolutional neural network (1D-CNN) to learn from a low-noise dataset, voxel by voxel. The trained model can then be applied to high-noise datasets from other subjects. We first validated the 1D-CNN denoising method on simulated DWI data, demonstrating that it produced DWI images more similar to the noise-free ground truth than comparable denoising methods, e.g., MP-PCA. Using the same DWI acquisition reconstructed with two common methods, i.e., SENSE1 and sum-of-squares, to generate a pair of low-noise and high-noise datasets, we then demonstrated that 1D-CNN denoising of high-noise DWI data collected from human subjects showed promising results in three domains: DWI images, diffusion metrics, and tractography. In particular, the denoised images were very similar to a low-noise reference image of that subject, more than the similarity…
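A minimal sketch of the transfer idea, training a voxel-wise denoiser on one low-noise subject and applying it to a noisier subject, using a linear least-squares map across the b-value dimension as a stand-in for the 1D-CNN. The b-values, noise levels, and mono-exponential signal model are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
bvals = np.linspace(0, 3000, 16)  # hypothetical b-values (s/mm^2)

def voxels(n, noise_sd):
    # synthesize n voxel signal vectors over the b-values
    D = rng.uniform(0.5e-3, 2.5e-3, (n, 1))   # assumed diffusivity range
    sig = np.exp(-bvals[None, :] * D)         # mono-exponential decay model
    return sig + rng.normal(0.0, noise_sd, sig.shape), sig

# "train" a linear voxel-wise denoiser (stand-in for the 1D-CNN) on one
# low-noise subject: least-squares map from noisy to clean signal vectors
x_tr, y_tr = voxels(2000, 0.02)
W, *_ = np.linalg.lstsq(x_tr, y_tr, rcond=None)

# apply the trained map to a different, high-noise subject
x_te, y_te = voxels(500, 0.2)
den = x_te @ W
den_mse = float(np.mean((den - y_te) ** 2))
raw_mse = float(np.mean((x_te - y_te) ** 2))
```

Because the clean signals lie on a low-dimensional manifold across b-values, a map learned from low-noise data can suppress noise in other subjects acquired with the same sequence, which is the transfer property the abstract exploits.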
Introduction: Multi-series CT (MSCT) scans, including non-contrast CT (NCCT), CT Perfusion (CTP), and CT Angiography (CTA), are widely used in acute stroke imaging. While each scan has its own advantages for diagnosis, the varying image resolution across series hinders the radiologist's ability to discern subtle suspicious findings. Moreover, higher image quality requires higher radiation doses, increasing health risks such as cataract formation and cancer induction. It is therefore crucial to develop an approach that improves MSCT resolution while lowering radiation exposure. Hypothesis: Because MSCT images of the same patient are highly correlated in structural features, transferring and integrating the shared and complementary information across series is beneficial for achieving high image quality. Methods: We propose TL-GAN, a learning-based method that uses Transfer Learning (TL) and a Generative Adversarial Network (GAN) to reconstruct high-quality diagnostic images. TL-GAN is evaluated on 4,382 images collected from nine patients' MSCT scans, including 415 NCCT slices, 3,696 CTP slices, and 271 CTA slices. We randomly split the nine patients into a training set (4 patients), a validation set (2 patients), and a testing set (3 patients). In preprocessing, we remove the background and skull…
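One detail worth noting in the evaluation is that the split is done at the patient level, not the slice level, which prevents slices from the same patient from leaking between training and test sets. A minimal sketch of such a 4/2/3 split (patient names and seed are hypothetical):

```python
import random

# nine hypothetical patient identifiers, as in the abstract's cohort size
patients = [f"patient_{i}" for i in range(1, 10)]

rng = random.Random(0)          # fixed seed so the split is reproducible
rng.shuffle(patients)

# patient-level split: all slices of a patient go to exactly one set
train, val, test = patients[:4], patients[4:6], patients[6:]
```

Splitting on patients rather than on the 4,382 individual slices is the standard way to keep highly correlated slices of one subject out of both sides of the evaluation.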
Static coded aperture x-ray tomography was introduced recently, in which a static illumination pattern is used to interrogate an object with a low radiation dose and an accurate 3D reconstruction of the object is attained computationally. Rather than continuously switching the illumination pattern with each view angle, as is traditionally done, static-code computed tomography (CT) places a single pattern for all views. The advantages are many, including the feasibility of practical implementation. This paper generalizes this framework to develop single-scan dual-energy coded aperture spectral tomography, which enables material characterization at a significantly reduced exposure level. Two sensing strategies are explored: rapid kV switching with a single static block/unblock coded aperture, and coded apertures with non-uniform thickness. Both systems rely on coded illumination with a plurality of x-ray spectra created by kV switching or by 3D coded apertures. The structured x-ray illumination is projected through the objects of interest and measured with standard x-ray energy-integrating detectors. Then, based on a tensor representation of the projection data, we develop an algorithm that estimates a full set of synthesized measurements, which can be used with standard reconstruction algorithms to accurately recover the object in each energy channel. Simulation and experimental results demonstrate…
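The core of the static-code idea, a single block/unblock pattern shared by every view angle, can be sketched on a stand-in sinogram. The array sizes and the random binary mask below are illustrative assumptions, not the paper's system geometry.

```python
import numpy as np

rng = np.random.default_rng(3)
n_views, n_dets = 180, 64
projections = rng.random((n_views, n_dets))  # stand-in full-dose sinogram

# static coded aperture: ONE block/unblock pattern shared by all views,
# in contrast to traditional schemes that switch the pattern per view
static_code = rng.integers(0, 2, n_dets).astype(float)
measured = projections * static_code[None, :]  # same mask masks every view

# fraction of rays that pass the aperture, i.e., the dose reduction factor
dose_fraction = static_code.mean()
```

Reconstruction then has to fill in the permanently blocked detector columns computationally, which is what the tensor-based synthesized-measurement algorithm in the abstract addresses.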
In this research, image denoising and kidney segmentation are addressed jointly via one multitask deep convolutional network. This multitasking scheme yields better results for both tasks than separate single-task methods. To the best of our knowledge, this is also the first attempt to address these joint tasks in low-dose CT (LDCT) scans. The new network is a conditional generative adversarial network (C-GAN) and extends the image-to-image translation network. To investigate the generalizability of the network, two conventional single-task networks are also evaluated: the well-known 2D UNet for segmentation and the more recently proposed WGAN for LDCT denoising. Experimental results show that the proposed method outperforms UNet and WGAN on both tasks.
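One way to read the multitask scheme is as a single joint objective combining a reconstruction term and a segmentation term, so that one shared network is driven by both tasks at once. The sketch below is a toy stand-in (plain L2 plus binary cross-entropy with an assumed weight), not the paper's C-GAN adversarial loss.

```python
import numpy as np

def multitask_loss(denoised, clean, seg_logits, mask, w_seg=0.5):
    # joint objective: L2 denoising term + binary cross-entropy
    # segmentation term, weighted by an assumed factor w_seg
    rec = np.mean((denoised - clean) ** 2)
    p = 1.0 / (1.0 + np.exp(-seg_logits))          # sigmoid over logits
    ce = -np.mean(mask * np.log(p + 1e-9)
                  + (1 - mask) * np.log(1 - p + 1e-9))
    return rec + w_seg * ce

# toy usage: a perfect prediction drives the joint loss to ~0
clean = np.zeros((4, 4))
mask = np.eye(4)
perfect = multitask_loss(clean, clean, np.where(mask > 0, 20.0, -20.0), mask)
```

Because both terms backpropagate through shared layers, features useful for locating the kidney also regularize the denoising path, which is one plausible reading of why the joint network beats the single-task baselines.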