Title: Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement-model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is then expanded into B-spline basis functions of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density-map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density-map image and an active-set step for estimating the incident-spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
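As a concrete illustration of this parameterization, the following minimal sketch (our own construction, not the authors' code; the grid sizes, knot placement, and coefficient values are illustrative assumptions) builds an order-one B-spline (hat-function) expansion of the mass-attenuation spectrum and evaluates the noiseless measurement as its Laplace integral:

```python
# Minimal sketch of the mass-attenuation measurement model (our assumptions):
# the noiseless measurement is the Laplace transform of the mass-attenuation
# spectrum iota(kappa), expanded in order-one B-splines (hat functions) with
# nonnegative coefficients.
import numpy as np

def hat_basis(kappa, knots):
    """Order-one B-spline (hat) functions evaluated on the kappa grid."""
    B = np.zeros((len(knots) - 2, len(kappa)))
    for j in range(1, len(knots) - 1):
        rise = (kappa - knots[j - 1]) / (knots[j] - knots[j - 1])
        fall = (knots[j + 1] - kappa) / (knots[j + 1] - knots[j])
        B[j - 1] = np.clip(np.minimum(rise, fall), 0.0, None)
    return B

kappa = np.linspace(0.01, 1.0, 2000)          # mass-attenuation grid (assumed)
dk = kappa[1] - kappa[0]
knots = np.linspace(0.0, 1.05, 12)            # spline knots (assumed placement)
B = hat_basis(kappa, knots)
coeffs = np.abs(np.random.randn(B.shape[0]))  # nonnegative spline coefficients

def noiseless_measurement(s):
    """Laplace integral I(s) = ∫ iota(kappa) exp(-kappa * s) dkappa."""
    iota = coeffs @ B                         # mass-attenuation spectrum
    return np.sum(iota * np.exp(-kappa * s)) * dk

print(noiseless_measurement(2.0))             # s: density-map line integral
```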
Award ID(s):
1421480
NSF-PAR ID:
10012984
Author(s) / Creator(s):
;
Date Published:
Journal Name:
AIP Conference Proceedings
Volume:
1650
ISSN:
0094-243X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements for single-material objects and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density-map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step for estimating the density-map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. We establish conditions for biconvexity of the penalized NLL objective function, which, if satisfied, ensures monotonicity of the NPG-BFGS iteration. We also show that the penalized NLL objective satisfies the Kurdyka-Łojasiewicz property, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems. Simulation examples demonstrate the performance of the proposed scheme. 
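    The NPG-BFGS alternation above can be skeletonized as follows. This is a sketch under our own assumptions, not the authors' implementation: nll, grad_x, and prox_penalty are hypothetical placeholders for the Poisson NLL, its gradient with respect to the image, and the proximal mapping of the sparsity penalty, and SciPy's L-BFGS-B routine stands in for the spectrum update.

```python
# Sketch of the NPG-BFGS block coordinate descent skeleton (our assumptions).
import numpy as np
from scipy.optimize import minimize

def npg_bfgs(x, c, nll, grad_x, prox_penalty, step, n_iter=50):
    """Alternate an NPG image step with an L-BFGS-B spectrum step."""
    x_old, t_old = x.copy(), 1.0
    for _ in range(n_iter):
        # NPG step for the density-map image: Nesterov extrapolation,
        # gradient step on the NLL, then the penalty's proximal mapping.
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_old ** 2))
        y = x + ((t_old - 1.0) / t) * (x - x_old)
        x_old, t_old = x.copy(), t
        x = prox_penalty(y - step * grad_x(y, c), step)
        x = np.maximum(x, 0.0)  # sketch: nonnegativity by projection
        # L-BFGS-B step for the spline coefficients, box constraint c >= 0.
        res = minimize(lambda cc: nll(x, cc), c, method="L-BFGS-B",
                       bounds=[(0.0, None)] * len(c))
        c = res.x
    return x, c
```

    Under the biconvexity conditions established in the paper, each of these block updates does not increase the penalized NLL, which is the source of the monotonicity claim above.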
  2. We develop a framework for reconstructing images that are sparse in an appropriate transform domain from polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and incident-energy spectrum are unknown. Assuming that the object that we wish to reconstruct consists of a single material, we obtain a parsimonious measurement-model parameterization by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace-integral form. The mass-attenuation spectrum is then expanded into basis functions using B splines of order one. We consider a Poisson noise model and establish conditions for biconvexity of the corresponding negative log-likelihood (NLL) function with respect to the density-map and mass-attenuation spectrum parameters. We derive a block-coordinate descent algorithm for constrained minimization of a penalized NLL objective function, where penalty terms ensure nonnegativity of the mass-attenuation spline coefficients and nonnegativity and gradient-map sparsity of the density-map image, imposed using a convex total-variation (TV) norm; the resulting objective function is biconvex. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) iteration for updating the image and mass-attenuation spectrum parameters, respectively. We prove the Kurdyka-Łojasiewicz property of the objective function, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems. Our framework applies to other NLLs and signal-sparsity penalties, such as lognormal NLL and ℓ₁ norm of 2D discrete wavelet transform (DWT) image coefficients. Numerical experiments with simulated and real X-ray CT data demonstrate the performance of the proposed scheme. 
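    The change of integration variable is easy to verify numerically. The sketch below (our own toy construction; the incident spectrum and the mass-attenuation curve are made up) computes the same polychromatic measurement once in the energy domain and once as a Laplace integral over mass attenuation; the Jacobian of the substitution folds the unknown incident spectrum and the attenuation curve into a single mass-attenuation spectrum, exactly as described above.

```python
# Numerical check of the energy -> mass-attenuation change of variables
# (toy spectra of our own construction, not from the paper).
import numpy as np

eps = np.linspace(20.0, 120.0, 4000)                  # photon energies (keV)
iota_eps = np.exp(-0.5 * ((eps - 60.0) / 20.0) ** 2)  # toy incident spectrum
kappa = 5.0 * eps ** -1.5                             # toy monotone kappa(eps)
s = 3.0                                               # density line integral

# Energy-domain measurement: ∫ iota(eps) * exp(-kappa(eps) * s) deps
I_energy = np.sum(iota_eps * np.exp(-kappa * s)) * (eps[1] - eps[0])

# Substitute kappa for eps: the Jacobian |deps/dkappa| folds spectrum and
# attenuation curve into one mass-attenuation spectrum iota(kappa), and the
# measurement becomes its Laplace transform.
iota_kappa = iota_eps / np.abs(np.gradient(kappa, eps))
order = np.argsort(kappa)
kap_grid = np.linspace(kappa.min(), kappa.max(), 4000)
iota_grid = np.interp(kap_grid, kappa[order], iota_kappa[order])
I_laplace = np.sum(iota_grid * np.exp(-kap_grid * s)) * (kap_grid[1] - kap_grid[0])

print(I_energy, I_laplace)  # the two integrals agree up to discretization
```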
  3. Background

    Spectral CT material decomposition provides quantitative information but is challenged by the instability of the inversion into basis materials. We have previously proposed the constrained One‐Step Spectral CT Image Reconstruction (cOSSCIR) algorithm to stabilize the material decomposition inversion by directly estimating basis material images from spectral CT data. cOSSCIR was previously investigated on phantom data.

    Purpose

    This study investigates the performance of cOSSCIR using head CT datasets acquired on a clinical photon‐counting CT (PCCT) prototype. This is the first investigation of cOSSCIR for large‐scale, anatomically complex, clinical PCCT data. The cOSSCIR decomposition is preceded by a spectrum estimation and nonlinear counts correction calibration step to address nonideal detector effects.

    Methods

    Head CT data were acquired on an early prototype clinical PCCT system using an edge‐on silicon detector with eight energy bins. Calibration data of a step wedge phantom were also acquired and used to train a spectral model to account for the source spectrum and detector spectral response, and also to train a nonlinear counts correction model to account for pulse pileup effects. The cOSSCIR algorithm optimized the bone and adipose basis images directly from the photon counts data, while placing a grouped total variation (TV) constraint on the basis images. For comparison, basis images were also reconstructed by a two‐step projection‐domain approach of Maximum Likelihood Estimation (MLE) for decomposing basis sinograms, followed by filtered backprojection (MLE + FBP) or a TV minimization algorithm (MLE + TVmin) to reconstruct basis images. We hypothesize that the cOSSCIR approach will provide a more stable inversion into basis images compared to two‐step approaches. To investigate this hypothesis, the noise standard deviation in bone and soft‐tissue regions of interest (ROIs) in the reconstructed images was compared between cOSSCIR and the two‐step methods for a range of regularization constraint settings.
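    For orientation, the following toy sketch (our own assumptions: the bin spectra and attenuation curves are made up, and it illustrates only the per-ray MLE step of the two-step baseline, not cOSSCIR's one-step reconstruction) decomposes simulated eight-bin photon counts into bone and adipose line integrals by minimizing the Poisson negative log-likelihood:

```python
# Toy per-ray Poisson MLE material decomposition for binned photon-counting
# data (our assumptions; not code from the study).
import numpy as np
from scipy.optimize import minimize

E = np.linspace(20.0, 120.0, 200)                     # energy grid (keV)
S = np.stack([1e3 * np.exp(-0.5 * ((E - c) / 8.0) ** 2)
              for c in np.linspace(35.0, 105.0, 8)])  # 8 toy bin spectra
mu = np.stack([3e3 * E ** -2.7,                       # toy bone attenuation
               4e2 * E ** -2.6])                      # toy adipose attenuation

def expected_counts(A):
    """lambda_b(A) = ∫ S_b(E) exp(-mu(E)·A) dE, A = (bone, adipose) integrals."""
    return S @ np.exp(-mu.T @ A) * (E[1] - E[0])

def poisson_nll(A, y):
    lam = expected_counts(A)
    return np.sum(lam - y * np.log(lam))

A_true = np.array([0.8, 2.0])                         # true line integrals
y = np.random.poisson(expected_counts(A_true))        # simulated bin counts
A_hat = minimize(poisson_nll, x0=np.array([0.5, 1.0]), args=(y,),
                 method="L-BFGS-B", bounds=[(0.0, None)] * 2).x
print(A_true, A_hat)
```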

    Results

    cOSSCIR reduced the noise standard deviation in the basis images by a factor of two to six compared to that of MLE + TVmin, when both algorithms were constrained to produce images with the same TV. The cOSSCIR images demonstrated qualitatively improved spatial resolution and depiction of fine anatomical detail. The MLE + TVmin algorithm resulted in lower noise standard deviation than cOSSCIR for the virtual monoenergetic images (VMIs) at higher energy levels and constraint settings, while the cOSSCIR VMIs resulted in lower noise standard deviation at lower energy levels and qualitatively higher spatial resolution overall. There were no statistically significant differences in the mean values within the bone region of images reconstructed by the studied algorithms. There were statistically significant differences in the mean values within the soft‐tissue region of the reconstructed images, with cOSSCIR producing mean values closer to the expected values.

    Conclusions

    The cOSSCIR algorithm, combined with our previously proposed spectral model estimation and nonlinear counts correction method, successfully estimated bone and adipose basis images from high-resolution, large‐scale patient data from a clinical PCCT prototype. The cOSSCIR basis images were able to depict fine anatomical details with a factor of two to six reduction in noise standard deviation compared to that of the MLE + TVmin two‐step approach.

  4. Coded spectral X-ray computed tomography (CT) based on K-edge filtered illumination is a cost-effective approach to acquire both 3-dimensional structure of objects and their material composition. By acquiring incomplete sets of rays, from sparse views or sparse rays with both spatial and spectral encoding, this approach effectively reduces inspection duration or radiation dose, which is significant in biological imaging and medical diagnostics. However, reconstruction of spectral CT images from compressed measurements is a nonlinear and ill-posed problem. This paper proposes a material-decomposition-based approach to directly solve the reconstruction problem, without estimating the energy-binned sinograms. This approach assumes that the linear attenuation coefficient map of objects can be decomposed into a few basis materials that are separable in the spectral and space domains. The nonlinear problem is then converted to the reconstruction of the mass density maps of the basis materials. The dimensionality of the optimization variables is thus effectively reduced to overcome the ill-posedness. An alternating minimization scheme is used to solve the reconstruction with regularizations of weighted nuclear norm and total variation. Compared to the state-of-the-art reconstruction method for coded spectral CT, the proposed method can significantly improve the reconstruction quality. It is also capable of reconstructing the spectral CT images at two additional energy bins from the same set of measurements, thus providing more spectral information about the object.

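    A key ingredient of the alternating minimization described above is the proximal operator of the weighted nuclear norm, which amounts to weighted soft-thresholding of singular values. The sketch below (our own assumption of the standard operator, not code from the paper; the matrix shape and weights are illustrative) shows that step in isolation:

```python
# Proximal operator of the weighted nuclear norm via weighted singular-value
# soft-thresholding (standard construction; our assumptions).
import numpy as np

def prox_weighted_nuclear_norm(X, w, tau):
    """argmin_Z 0.5*||Z - X||_F^2 + tau * sum_i w_i * sigma_i(Z)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau * w, 0.0)  # weighted soft-thresholding
    return (U * s_shrunk) @ Vt

X = np.random.randn(64, 3)                   # e.g., pixels x basis materials
sigma = np.linalg.svd(X, compute_uv=False)
w = 1.0 / (sigma + 1e-3)                     # larger weights on smaller sigmas
print(prox_weighted_nuclear_norm(X, w, 0.5).shape)
```

    Independent thresholding solves the weighted problem exactly only when the weights are nondecreasing with respect to the singular-value ordering, which the inverse-magnitude reweighting above satisfies.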
  5. BACKGROUND
    Optical sensing devices measure the rich physical properties of an incident light beam, such as its power, polarization state, spectrum, and intensity distribution. Most conventional sensors, such as power meters, polarimeters, spectrometers, and cameras, are monofunctional and bulky. For example, classical Fourier-transform infrared spectrometers and polarimeters, which characterize the optical spectrum in the infrared and the polarization state of light, respectively, can occupy a considerable portion of an optical table. Over the past decade, the development of integrated sensing solutions by using miniaturized devices together with advanced machine-learning algorithms has accelerated rapidly, and optical sensing research has evolved into a highly interdisciplinary field that encompasses devices and materials engineering, condensed matter physics, and machine learning. To this end, future optical sensing technologies will benefit from innovations in device architecture, discoveries of new quantum materials, demonstrations of previously uncharacterized optical and optoelectronic phenomena, and rapid advances in the development of tailored machine-learning algorithms.
    ADVANCES
    Recently, a number of sensing and imaging demonstrations have emerged that differ substantially from conventional sensing schemes in the way that optical information is detected. A typical example is computational spectroscopy. In this new paradigm, a compact spectrometer first collectively captures the comprehensive spectral information of an incident light beam using multiple elements or a single element under different operational states and generates a high-dimensional photoresponse vector. An advanced algorithm then interprets the vector to achieve reconstruction of the spectrum. This scheme shifts the physical complexity of conventional grating- or interference-based spectrometers to computation. Moreover, many of the recent developments go well beyond optical spectroscopy, and we discuss them within a common framework, dubbed “geometric deep optical sensing.” The term “geometric” is intended to emphasize that in this sensing scheme, the physical properties of an unknown light beam and the corresponding photoresponses can be regarded as points in two respective high-dimensional vector spaces, and that the sensing process can be considered to be a mapping from one vector space to the other. The mapping can be linear, nonlinear, or highly entangled; for the latter two cases, deep artificial neural networks represent a natural choice for the encoding and/or decoding processes, from which the term “deep” is derived. In addition to this classical geometric view, the quantum geometry of Bloch electrons in Hilbert space, such as Berry curvature and quantum metrics, is essential for the determination of the polarization-dependent photoresponses in some optical sensors. In this Review, we first present a general perspective of this sensing scheme from the viewpoint of information theory, in which the photoresponse measurement and the extraction of light properties are deemed as information-encoding and -decoding processes, respectively. We then discuss demonstrations in which a reconfigurable sensor (or an array thereof), enabled by device reconfigurability and the implementation of neural networks, can detect the power, polarization state, wavelength, and spatial features of an incident light beam.
    OUTLOOK
    As increasingly more computing resources become available, optical sensing is becoming more computational, with device reconfigurability playing a key role. On the one hand, advanced algorithms, including deep neural networks, will enable effective decoding of high-dimensional photoresponse vectors, which reduces the physical complexity of sensors. Therefore, it will be important to integrate memory cells near or within sensors to enable efficient processing and interpretation of a large amount of photoresponse data. On the other hand, analog computation based on neural networks can be performed with an array of reconfigurable devices, which enables direct multiplexing of sensing and computing functions. We anticipate that these two directions will become the engineering frontier of future deep sensing research. On the scientific frontier, exploring quantum geometric and topological properties of new quantum materials in both linear and nonlinear light-matter interactions will enrich the information-encoding pathways for deep optical sensing. In addition, deep sensing schemes will continue to benefit from the latest developments in machine learning. Future highly compact, multifunctional, reconfigurable, and intelligent sensors and imagers will find applications in medical imaging, environmental monitoring, infrared astronomy, and many other areas of our daily lives, especially in the mobile domain and the internet of things.
    Figure: Schematic of deep optical sensing. The n-dimensional unknown information (w) is encoded into an m-dimensional photoresponse vector (x) by a reconfigurable sensor (or an array thereof), from which w′ is reconstructed by a trained neural network (n′ = n and w′ ≈ w). Alternatively, x may be directly deciphered to capture certain properties of w. Here, w, x, and w′ can be regarded as points in their respective high-dimensional vector spaces ℛⁿ, ℛᵐ, and ℛⁿ′.
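    A toy sketch of the linear case of this encode/decode view (our own construction; the responsivity matrix R, the toy spectrum w, and the ridge decoder are all illustrative assumptions, and a trained neural network would replace the decoder for nonlinear or entangled mappings):

```python
# Linear geometric sensing sketch: encode an unknown spectrum w in R^n into a
# photoresponse vector x = R w in R^m via m device states, then decode by
# regularized least squares (our assumptions throughout).
import numpy as np

n, m = 100, 32
rng = np.random.default_rng(0)
R = rng.random((m, n))                               # responsivities of m states
w = np.exp(-0.5 * ((np.arange(n) - 40) / 6.0) ** 2)  # unknown toy spectrum
x = R @ w + 0.01 * rng.standard_normal(m)            # noisy photoresponse vector

# Ridge (Tikhonov) decoding: w' = argmin ||R w - x||^2 + lam * ||w||^2
lam = 1e-2
w_hat = np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ x)
print(np.linalg.norm(w - w_hat) / np.linalg.norm(w))  # relative error
```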