Title: A Look-Up Table-Based Ray Integration Framework for 2-D/3-D Forward and Back Projection in X-Ray CT
Iterative algorithms have become increasingly popular in computed tomography (CT) image reconstruction because they better handle the adverse image artifacts that arise from low-radiation-dose image acquisition. However, iterative methods remain computationally expensive. The main cost lies in the projection and backprojection operations, where accurate CT system modeling can greatly improve the quality of the reconstructed image. We present a framework that improves upon one particular aspect: the accurate projection of the image basis functions. It differs from current methods in that it replaces the high computational complexity associated with accurate voxel projection with a small number of memory operations. Coefficients are computed in advance and stored in look-up tables parameterized by the CT system's projection geometry. The look-up tables require only a few kilobytes of storage and can be efficiently accelerated on the GPU. We demonstrate our framework with both numerical and clinical experiments and compare its performance with the current state-of-the-art scheme, the separable footprint method.
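To make the table-driven idea concrete, below is a minimal, hypothetical sketch of a look-up-table-based forward projector for a 2-D parallel-beam geometry. It is not the paper's implementation: the table layout, the footprint_coeff function, the uniformly spaced detector bins, and the single-bin accumulation are simplifying assumptions (a real voxel footprint spans several detector bins), but it shows how per-ray integration reduces to an offline precomputation plus table lookups.

```python
import numpy as np

def build_footprint_lut(angles, rel_offsets, footprint_coeff):
    """Tabulate voxel-footprint coefficients over view angle and sub-bin offset.
    The expensive footprint integral is evaluated once, offline."""
    lut = np.empty((len(angles), len(rel_offsets)), dtype=np.float32)
    for i, theta in enumerate(angles):
        for k, dt in enumerate(rel_offsets):
            lut[i, k] = footprint_coeff(theta, dt)
    return lut

def forward_project(values, centers, angles, det_bins, rel_offsets, lut):
    """Parallel-beam forward projection where each voxel contribution is a LUT lookup.

    values      : (n_voxels,) image coefficients
    centers     : (n_voxels, 2) voxel center coordinates
    det_bins    : uniformly spaced detector bin edges
    rel_offsets : offsets within one bin width covered by the table
    """
    sino = np.zeros((len(angles), len(det_bins)), dtype=np.float32)
    bin_w = det_bins[1] - det_bins[0]
    off_w = rel_offsets[1] - rel_offsets[0]
    for i, theta in enumerate(angles):
        direction = np.array([np.cos(theta), np.sin(theta)])
        u = centers @ direction                             # detector coordinate of each voxel center
        j = np.floor((u - det_bins[0]) / bin_w).astype(int)  # bin hit by the voxel center
        dt = u - (det_bins[0] + j * bin_w)                   # sub-bin offset
        k = np.clip(np.round((dt - rel_offsets[0]) / off_w).astype(int),
                    0, len(rel_offsets) - 1)
        inside = (j >= 0) & (j < len(det_bins))
        # accumulate only the central bin for brevity; a real footprint would
        # spread the tabulated weights over the few bins it covers
        np.add.at(sino[i], j[inside], values[inside] * lut[i, k[inside]])
    return sino
```

Backprojection follows the same pattern with the roles of image and sinogram swapped, and a table of this size is a natural fit for GPU constant or texture memory.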
Award ID(s): 1650499
NSF-PAR ID: 10054447
Author(s) / Creator(s):
Date Published:
Journal Name: IEEE Transactions on Medical Imaging
Volume: 1
Issue: 1
ISSN: 0278-0062
Page Range / eLocation ID: pp. 99
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Abstract

    Purpose

    Forward and backprojections are the basis of all model‐based iterative reconstruction (MBIR) methods. However, computing these accurately is time‐consuming. In this paper, we present a method for MBIR in parallel X‐ray beam geometry that utilizes a Gram filter to efficiently implement forward and backprojection.

    Methods

    We propose using the voxel basis and modeling its footprint in a box spline framework to calculate the Gram filter exactly and improve the performance of backprojection. In the special case of parallel X-ray beam geometry, the forward and backprojection can be implemented efficiently by an estimated Gram filter if the sinogram signal is bandlimited (a minimal sketch of this filtering step appears after this abstract). In this paper, a specialized sinogram interpolation method is proposed to eliminate the bandlimited prerequisite and thus improve the reconstruction accuracy. We build on this idea by utilizing the continuity of the voxel basis' footprint, which provides a more accurate sinogram interpolation and further improves the efficiency and quality of backprojection. In addition, the detector blur effect can be efficiently accounted for in our method to better handle realistic scenarios.

    Results

    The proposed method is tested on both phantom and real computed tomography (CT) images under different resolutions, sinogram sampling steps, and noise levels. The proposed method consistently outperforms other state‐of‐the‐art projection models in terms of speed and accuracy for both backprojection and reconstruction.

    Conclusions

    We proposed an iterative reconstruction methodology for 3D parallel-beam X-ray CT. Our experimental results demonstrate that the proposed methodology is accurate, fast, and reproducible, and that it significantly outperforms alternative state-of-the-art projection models in both backprojection and reconstruction.

     
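The sketch referenced in the abstract above: a minimal version of the Gram-filter step under simplifying assumptions, namely a 2-D image, a precomputed kernel gram_kernel that approximates A^T A as a shift-invariant convolution, and A^T y backprojected once up front. The exact box-spline derivation of the kernel is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def gram_apply(image, gram_kernel):
    """Apply A^T A to an image as a single 2-D convolution with the Gram kernel."""
    return fftconvolve(image, gram_kernel, mode="same")

def gradient_step(image, atb, gram_kernel, step=1.0):
    """One gradient step for 0.5 * ||A x - y||^2 without explicit projection:
    grad = A^T A x - A^T y, where atb = A^T y was backprojected once beforehand."""
    grad = gram_apply(image, gram_kernel) - atb
    return image - step * grad
```

In an MBIR loop, this replaces an explicit forward projection and backprojection per iteration with a single image-domain filtering operation.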
  2. Abstract

    Purpose

    The constrained one‐step spectral CT image reconstruction (cOSSCIR) algorithm with a nonconvex alternating direction method of multipliers optimizer is proposed for addressing computed tomography (CT) metal artifacts caused by beam hardening, noise, and photon starvation. The quantitative performance of cOSSCIR is investigated through a series of photon‐counting CT simulations.

    Methods

    cOSSCIR directly estimates basis material maps from photon-counting data using a physics-based forward model that accounts for beam hardening (a toy sketch of such a polychromatic forward model appears after this abstract). The cOSSCIR optimization framework places constraints on the basis maps, which we hypothesize will stabilize the decomposition and reduce streaks caused by noise and photon starvation. Another advantage of cOSSCIR is that the spectral data need not be registered, so a ray can be used even if some energy window measurements are unavailable. Photon-counting CT acquisitions of a virtual pelvic phantom with low-contrast soft tissue texture and bilateral hip prostheses were simulated. Bone and water basis maps were estimated using the cOSSCIR algorithm and combined to form a virtual monoenergetic image for the evaluation of metal artifacts. The cOSSCIR images were compared to a “two-step” decomposition approach that first estimated basis sinograms using a maximum likelihood algorithm and then reconstructed basis maps using an iterative total variation constrained least-squares optimization (MLE+TV). Images were also compared to a nonspectral TV reconstruction of the total number of counts detected for each ray, with and without normalized metal artifact reduction (NMAR) applied. The simulated metal density was increased to investigate the effects of increasing photon starvation. The quantitative error and standard deviation in regions of the phantom were compared across the investigated algorithms. The ability of cOSSCIR to reproduce the soft-tissue texture, while reducing metal artifacts, was quantitatively evaluated.

    Results

    Noiseless simulations demonstrated the convergence of the cOSSCIR and MLE+TV algorithms to the correct basis maps in the presence of beam‐hardening effects. When noise was simulated, cOSSCIR demonstrated a quantitative error of −1 HU, compared to 2 HU error for the MLE+TV algorithm and −154 HU error for the nonspectral TV+NMAR algorithm. For the cOSSCIR algorithm, the standard deviation in the central iodine region of interest was 20 HU, compared to 299 HU for the MLE+TV algorithm, 41 HU for the MLE+TV+Mask algorithm that excluded rays through metal, and 55 HU for the nonspectral TV+NMAR algorithm. Increasing levels of photon starvation did not impact the bias or standard deviation of the cOSSCIR images. cOSSCIR was able to reproduce the soft‐tissue texture when an appropriate regularization constraint value was selected.

    Conclusions

    By directly inverting photon‐counting CT data into basis maps using an accurate physics‐based forward model and a constrained optimization algorithm, cOSSCIR avoids metal artifacts due to beam hardening, noise, and photon starvation. The cOSSCIR algorithm demonstrated improved stability and accuracy compared to a two‐step method of decomposition followed by reconstruction.

     
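As referenced in the Methods above, here is a toy, vectorized version of the kind of polychromatic basis-material forward model cOSSCIR inverts. The array names, the energy grid, and the use of simple energy-window spectra are assumptions, not the authors' code.

```python
import numpy as np

def expected_counts(p, spectrum, mass_atten, dE):
    """Expected photon counts per energy window from basis-material line integrals.

    p          : (n_materials, n_rays)     basis sinograms (e.g., bone and water)
    spectrum   : (n_windows, n_energies)   incident spectrum times detector response
    mass_atten : (n_materials, n_energies) mass attenuation coefficients
    dE         : energy-bin width
    """
    # Beer-Lambert with beam hardening: exponent is sum_m mu_m(E) * p_m for each ray
    exponent = np.einsum("me,mr->re", mass_atten, p)   # (n_rays, n_energies)
    transmission = np.exp(-exponent)
    return transmission @ (spectrum.T * dE)            # (n_rays, n_windows)
```

A one-step method in this spirit would fit the basis sinograms (and ultimately the constrained basis maps) by matching these expected counts to the measured photon counts.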
  3. Abstract

    The goal of this study is to develop a new computed tomography (CT) image reconstruction method, aiming to improve the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise constant approximations of the integral equation that describes the CT projection data acquisition process. Using these approximations imposes a bottleneck model error and results in a discrete system of a large size. We propose to develop a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG for the resulting ill-posed linear system, which may lead to a sparse linear representation for the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem whose objective function consists of a weighted least-squares fidelity term, a regularization term, and a constraint term. Here, the corresponding weight matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem. Convergence analysis is provided for the resulting iterative algorithm. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational costs. These methods include SART without regularization and with quadratic regularization, the traditional total variation (TV) regularized reconstruction method, and the TV-superiorized conjugate gradient method on the pixel grid.
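A minimal sketch of a SART-type preconditioned proximal iteration for the weighted least-squares model described above. The sparse system matrix A, the generic prox_reg proximity operator, and the diagonal row/column-sum weights are assumptions; the paper's content-adaptive unstructured grid and exact operator splitting are not reproduced.

```python
import numpy as np
import scipy.sparse as sp

def sart_weights(A):
    """Diagonal SART-style weights: inverse row sums (fidelity weighting)
    and inverse column sums (preconditioner)."""
    row_sums = np.asarray(A.sum(axis=1)).ravel()
    col_sums = np.asarray(A.sum(axis=0)).ravel()
    W = sp.diags(1.0 / np.maximum(row_sums, 1e-12))
    P = sp.diags(1.0 / np.maximum(col_sums, 1e-12))
    return W, P

def reconstruct(A, b, prox_reg, n_iter=50, step=1.0):
    """Fixed-point proximity iteration for 0.5 * ||A x - b||_W^2 + R(x)."""
    W, P = sart_weights(A)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (W @ (A @ x - b))             # gradient of the weighted fidelity term
        x = prox_reg(x - step * (P @ grad), step)  # preconditioned step, then proximity map
    return x
```

A concrete prox_reg could be soft-thresholding for an l1-type regularizer; a constraint term would enter as a projection composed with this map.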
  4. Static coded aperture x-ray tomography was recently introduced, in which a static illumination pattern is used to interrogate an object with a low radiation dose and an accurate 3D reconstruction of the object is attained computationally. Rather than continuously switching the illumination pattern with each view angle, as traditionally done, static coded-aperture computed tomography (CT) uses a single pattern for all views. The advantages are many, including the feasibility of practical implementation. This paper generalizes this powerful framework to develop single-scan dual-energy coded aperture spectral tomography that enables material characterization at a significantly reduced exposure level. Two sensing strategies are explored: rapid kV switching with a single static block/unblock coded aperture, and coded apertures with non-uniform thickness. Both systems rely on coded illumination with a plurality of x-ray spectra created by kV switching or 3D coded apertures. The structured x-ray illumination is projected through the object of interest and measured with standard x-ray energy-integrating detectors. Then, based on a tensor representation of the projection data, we develop an algorithm to estimate a full set of synthesized measurements that can be used with standard reconstruction algorithms to accurately recover the object in each energy channel. Simulation and experimental results demonstrate the effectiveness of the proposed cost-effective solution for attaining material characterization in low-dose dual-energy CT.
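A minimal sketch of the single-scan measurement model described above, under assumptions: forward_project returns a (n_views, n_bins) attenuation sinogram for one energy channel, code is a static binary block/unblock pattern over detector bins, and odd/even views alternate kV as in rapid kV switching. The tensor-based estimation of the full synthesized measurements is not shown.

```python
import numpy as np

def coded_dual_energy_measurements(x_low, x_high, forward_project, code):
    """Simulate single-scan dual-energy data: one static aperture code, alternating spectra."""
    sino_low = forward_project(x_low)      # sinogram of the object at the low-kV spectrum
    sino_high = forward_project(x_high)    # sinogram of the object at the high-kV spectrum
    n_views, n_bins = sino_low.shape
    y = np.zeros((n_views, n_bins))
    for v in range(n_views):
        sino = sino_low if v % 2 == 0 else sino_high   # rapid kV switching per view
        y[v] = code * sino[v]                          # static block/unblock coded aperture
    return y
```

From such view-multiplexed data, a tensor-based completion step would estimate the full low- and high-kV sinograms before standard per-channel reconstruction.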
  5. Abstract

    Background

    Lung cancer is the deadliest and second most common cancer in the United States, in part because the disease often lacks symptoms that would allow early diagnosis. Pulmonary nodules are small abnormal regions that can be potentially correlated with the occurrence of lung cancer. Early detection of these nodules is critical because it can significantly improve the patient's survival rate. Thoracic thin-sliced computed tomography (CT) scanning has emerged as a widely used method for the diagnosis and prognosis of lung abnormalities.

    Purpose

    The standard clinical workflow for detecting pulmonary nodules relies on radiologists analyzing CT images to assess the risk factors of cancerous nodules. However, this approach can be error-prone due to the varied causes of nodule formation, such as pollutants and infections. Deep learning (DL) algorithms have recently demonstrated remarkable success in medical image classification and segmentation. As DL becomes an ever more important assistant to radiologists in nodule detection, it is imperative to ensure that the DL algorithm and the radiologist can understand each other's decisions. This study aims to develop a framework that integrates explainable AI methods to achieve accurate pulmonary nodule detection.

    Methods

    A robust and explainable detection (RXD) framework is proposed, focusing on reducing false positives in pulmonary nodule detection. Its implementation is based on an explanation supervision method, which uses radiologists' nodule contours as supervision signals to force the model to learn nodule morphologies, enabling improved learning on small datasets (a toy sketch of such an explanation-supervision loss appears after this abstract). In addition, two imputation methods are applied to the nodule region annotations to reduce the noise within the human annotations and allow the model to produce robust attributions that meet human expectations. Sets of 480, 265, and 265 CT images from the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset are used for training, validation, and testing, respectively.

    Results

    Using only 10, 30, 50, and 100 training samples sequentially, our method consistently improves the classification performance and explanation quality of the baseline in terms of Area Under the Curve (AUC) and Intersection over Union (IoU). In particular, our framework with a learnable imputation kernel improves IoU over the baseline by 24.0% to 80.0%. A pre-defined Gaussian imputation kernel achieves an even greater improvement, from 38.4% to 118.8% over the baseline. Compared to the baseline trained on 100 samples, our method shows a smaller drop in AUC when trained on fewer samples. A comprehensive comparison of interpretability shows that our method aligns better with expert opinions.

    Conclusions

    A pulmonary nodule detection framework was demonstrated using public thoracic CT image datasets. The framework integrates the robust explanation supervision (RES) technique to ensure the performance of nodule classification and morphology learning. The method can reduce the workload of radiologists and enable them to focus on the diagnosis and prognosis of potentially cancerous pulmonary nodules at an early stage, improving outcomes for lung cancer patients.

     
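The sketch referenced in the Methods above: a toy PyTorch loss that augments classification with an explanation-supervision term aligning an input-gradient attribution map with the radiologist's contour mask. The names (model, images, labels, contour_masks) and the simple MSE alignment are assumptions; the RES imputation kernels described in the abstract are not reproduced.

```python
import torch
import torch.nn.functional as F

def explanation_supervised_loss(model, images, labels, contour_masks, lam=0.1):
    """Classification loss plus a term pushing the model's attribution toward
    the annotated nodule contour (contour_masks: (B, 1, H, W) in [0, 1])."""
    images = images.requires_grad_(True)
    logits = model(images)
    cls_loss = F.cross_entropy(logits, labels)

    # simple input-gradient attribution for the scores of the true classes
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    attribution = torch.autograd.grad(score, images, create_graph=True)[0].abs()
    attribution = attribution.mean(dim=1, keepdim=True)   # average over image channels
    attribution = attribution / (attribution.amax(dim=(-2, -1), keepdim=True) + 1e-8)

    # explanation-supervision term: align attributions with radiologist contours
    exp_loss = F.mse_loss(attribution, contour_masks)
    return cls_loss + lam * exp_loss
```

In training, this loss would simply replace a plain cross-entropy objective; imputation of the contour annotations, as described in the abstract, would refine contour_masks before the alignment term is computed.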