
Title: Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion
We develop a sparse image reconstruction method for polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement-model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded in a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density-map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density-map image and an active-set step for estimating the incident-spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
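The parameterization described above can be sketched numerically. Below is a minimal illustration, with hypothetical knot placement and coefficients (not the paper's values), of expanding a mass-attenuation spectrum in order-one B-splines (hat functions) and evaluating the Laplace-integral forward model for a few density-map line integrals:

```python
import numpy as np

def hat_basis(kappa, knots):
    """Order-one B-splines (hat functions) centred at the interior knots.
    Returns B with B[i, j] = b_j(kappa[i]), each b_j taking values in [0, 1]."""
    B = np.zeros((kappa.size, knots.size - 2))
    for j in range(1, knots.size - 1):
        left, mid, right = knots[j - 1], knots[j], knots[j + 1]
        rise = (kappa - left) / (mid - left)    # ramp up on [left, mid]
        fall = (right - kappa) / (right - mid)  # ramp down on [mid, right]
        B[:, j - 1] = np.clip(np.minimum(rise, fall), 0.0, None)
    return B

# Hypothetical mass-attenuation grid and knot placement (illustrative only).
kappa = np.linspace(0.0, 5.0, 501)
knots = np.linspace(0.0, 5.0, 12)
B = hat_basis(kappa, knots)

# Nonnegative spline coefficients yield a nonnegative mass-attenuation spectrum.
c = np.abs(np.random.default_rng(0).normal(size=B.shape[1]))
iota = B @ c

# Laplace-integral forward model: the noiseless measurement for a ray with
# density-map line integral a is  y(a) = ∫ iota(kappa) exp(-kappa a) dkappa.
dk = kappa[1] - kappa[0]
def forward(a):
    return np.sum(iota * np.exp(-kappa * a)) * dk

y = np.array([forward(a) for a in (0.0, 0.5, 1.0, 2.0)])
```

Because the spectrum is nonnegative, the modeled measurements decrease monotonically as the line integral grows, which is the physical beam-hardening behavior the parameterization captures.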
Authors:
Award ID(s):
1421480
Publication Date:
NSF-PAR ID:
10012984
Journal Name:
AIP conference proceedings
Volume:
34 1650
ISSN:
0094-243X
Sponsoring Org:
National Science Foundation
More Like this
  1. We develop a framework for reconstructing images that are sparse in an appropriate transform domain from polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and incident-energy spectrum are unknown. Assuming that the object that we wish to reconstruct consists of a single material, we obtain a parsimonious measurement-model parameterization by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace-integral form. The mass-attenuation spectrum is then expanded into basis functions using B splines of order one. We consider a Poisson noise model and establish conditions for biconvexity of the corresponding negative log-likelihood (NLL) function with respect to the density-map and mass-attenuation spectrum parameters. We derive a block-coordinate descent algorithm for constrained minimization of a penalized NLL objective function, where penalty terms ensure nonnegativity of the mass-attenuation spline coefficients and nonnegativity and gradient-map sparsity of the density-map image, imposed using a convex total-variation (TV) norm; the resulting objective function is biconvex. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) iteration for updating the image and mass-attenuation spectrum parameters, respectively. We prove the Kurdyka-Łojasiewicz property of the objective function, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems.
Our framework applies to other NLLs and signal-sparsity penalties, such as the lognormal NLL and the ℓ₁ norm of 2D discrete wavelet transform (DWT) image coefficients. Numerical experiments with simulated and real X-ray CT data demonstrate the performance of the proposed scheme.
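The NPG image step mentioned above follows the standard accelerated proximal-gradient pattern. Below is a minimal FISTA-style sketch on a toy least-squares-plus-ℓ1 problem, standing in for the NLL and the ℓ1 signal-sparsity penalty; all sizes, values, and the function name `npg_l1` are illustrative, not the paper's implementation:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def npg_l1(A, y, lam, steps=200):
    """FISTA-style accelerated proximal gradient for
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(steps):
        grad = A.T @ (A @ z - y)           # gradient at the extrapolated point
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + (t - 1.0) / t_new * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x

# Toy problem: recover a sparse vector from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = npg_l1(A, y, lam=0.1)
```

The soft-threshold step is exactly the proximal map of the ℓ1 penalty, which is what makes each iterate sparse without requiring an explicit constraint.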
  2. We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements for single-material objects and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density-map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step for estimating the density-map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. We establish conditions for biconvexity of the penalized NLL objective function, which, if satisfied, ensure monotonicity of the NPG-BFGS iteration. We also show that the penalized NLL objective satisfies the Kurdyka-Łojasiewicz property, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems. Simulation examples demonstrate the performance of the proposed scheme.
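The block-coordinate alternation described in these abstracts can be illustrated on a toy biconvex problem: a projected-gradient image step enforcing nonnegativity, alternating with a box-constrained L-BFGS-B step for a scalar spectrum parameter. All quantities below are hypothetical; the paper's Poisson NLL, TV penalty, and Nesterov momentum are omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
x_true = np.maximum(rng.normal(size=20), 0.0)   # nonnegative "density map"
theta_true = 2.0                                # stand-in spectrum parameter
y = theta_true * (A @ x_true)                   # noiseless toy measurements

def cost(x, theta):
    """Biconvex quadratic cost: convex in x for fixed theta, and vice versa."""
    r = theta * (A @ x) - y
    return 0.5 * r @ r

L_A = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant for theta = 1
x = np.zeros(20)
theta = 1.0
for _ in range(50):
    # Image step: one projected-gradient update enforcing nonnegativity.
    g = theta * (A.T @ (theta * (A @ x) - y))
    x = np.maximum(x - g / (theta ** 2 * L_A), 0.0)
    # Spectrum step: L-BFGS-B over the box-constrained scalar parameter.
    res = minimize(lambda t: cost(x, t[0]), [theta],
                   method="L-BFGS-B", bounds=[(0.1, 10.0)])
    theta = float(res.x[0])
```

Because each half-step minimizes (or at least decreases) the cost with the other block fixed, the alternation is monotone, mirroring the monotonicity property the abstract establishes for the NPG-BFGS iteration under biconvexity.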
  3. Abstract. The lower-order moments of the drop size distribution (DSD) have generally been considered difficult to retrieve accurately from polarimetric radar data because these data are related to higher-order moments: for example, the 4.6th moment is associated with specific differential phase, the 6th moment with reflectivity, and a ratio of high-order moments with differential reflectivity. Thus, conventionally, the emphasis has been to estimate rain rate (the 3.67th moment) or parameters of the exponential or gamma distribution for the DSD. Many double-moment “bulk” microphysical schemes predict the total number concentration (the 0th moment of the DSD, or M0) and the mixing ratio (or, equivalently, the 3rd moment M3). Thus, it is difficult to compare the model outputs directly with polarimetric radar observations or, given the model outputs, forward model the radar observables. This article describes the use of double-moment normalization of DSDs and the resulting stable intrinsic shape that can be fitted by the generalized gamma (G-G) distribution. The two reference moments are M3 and M6, which are shown to be retrievable using the X-band radar reflectivity, differential reflectivity, and specific attenuation (from the iterative correction of measured reflectivity Zh using the total Φdp constraint, i.e., the iterative ZPHI method). Along with the climatological shape parameters of the G-G fit to the scaled/normalized DSDs, the lower-order moments are then retrieved more accurately than hitherto possible. The importance of measuring the complete DSD from 0.1 mm onwards is emphasized using, in our case, an optical array probe with 50 µm resolution collocated with a two-dimensional video disdrometer with about 170 µm resolution. This avoids small-drop truncation and hence enables accurate calculation of the lower-order moments.
A case study of a complex multi-cell storm which traversed an instrumented site near the CSU-CHILL radar is described, for which the moments were retrieved from radar and compared with moments computed directly from the complete spectrum measurements of the aforementioned two disdrometers. Our detailed validation analysis of the radar-retrieved moments showed that the relative bias of the moments M0 through M2 was <15 % in magnitude, with Pearson’s correlation coefficient >0.9. Both radar measurement and parameterization errors were estimated rigorously. We show that the temporal variation of the radar-retrieved mass-weighted mean diameter with M0 resulted in coherent “time tracks” that can potentially lead to studies of precipitation evolution that have not been possible so far.
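The moment calculations underlying this kind of retrieval are straightforward to sketch. Below, using a gamma DSD with hypothetical parameters (not values from the paper), a numerically integrated n-th moment is checked against its closed form, and the mass-weighted mean diameter Dm = M4/M3 = (μ+4)/Λ is recovered:

```python
import numpy as np
from math import gamma

def dsd_moment(n, N0, mu, Lam, D, dD):
    """n-th DSD moment M_n = ∫ N(D) D^n dD for N(D) = N0 D^mu exp(-Lam D),
    computed by the midpoint rule on the diameter grid D (spacing dD, in mm)."""
    N = N0 * D**mu * np.exp(-Lam * D)
    return np.sum(N * D**n) * dD

# Hypothetical gamma-DSD parameters (Lam in mm^-1; N0 sets the concentration scale).
N0, mu, Lam = 8000.0, 2.0, 2.0
dD = 0.002
D = np.arange(dD / 2, 10.0, dD)            # midpoint grid of drop diameters, mm

M3 = dsd_moment(3, N0, mu, Lam, D, dD)     # proportional to liquid water content
M4 = dsd_moment(4, N0, mu, Lam, D, dD)
Dm = M4 / M3                               # mass-weighted mean diameter

# Closed form for cross-checking: M_n = N0 * Gamma(mu + n + 1) / Lam**(mu + n + 1)
M3_exact = N0 * gamma(mu + 3 + 1) / Lam ** (mu + 3 + 1)
```

Starting the integration grid near zero diameter matters for exactly the reason the abstract emphasizes: truncating small drops biases the lower-order moments far more than the radar-weighted higher-order ones.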
  4. We consider the high-dimensional linear regression problem, where the algorithmic goal is to efficiently infer an unknown feature vector $\beta^*\in\mathbb{R}^p$ from its linear measurements, using a small number $n$ of samples. Unlike most of the literature, we make no sparsity assumption on $\beta^*$, but instead adopt a different regularization: In the noiseless setting, we assume $\beta^*$ consists of entries that are either rational numbers with a common denominator $Q\in\mathbb{Z}^+$ (referred to as $Q$-rationality) or irrational numbers taking values in a rationally independent set of bounded cardinality known to the learner; collectively, we call this the mixed-range assumption. Using a novel combination of the PSLQ integer relation detection and Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithms, we propose a polynomial-time algorithm which provably recovers a $\beta^*\in\mathbb{R}^p$ enjoying the mixed-range assumption from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$ for a large class of distributions for the random entries of $X$, even with one measurement ($n=1$). In the noisy setting, we propose a polynomial-time, lattice-based algorithm which recovers a $\beta^*\in\mathbb{R}^p$ enjoying the $Q$-rationality property from its noisy measurements $Y=X\beta^*+W\in\mathbb{R}^n$, even from a single sample ($n=1$). We further establish that for large $Q$ and normal noise, this algorithm tolerates an information-theoretically optimal level of noise. We then apply these ideas to develop a polynomial-time, single-sample algorithm for the phase retrieval problem. Our methods address the single-sample ($n=1$) regime, where sparsity-based methods such as the Least Absolute Shrinkage and Selection Operator (LASSO) and Basis Pursuit are known to fail.
Furthermore, our results also reveal algorithmic connections between the high-dimensional linear regression problem and the integer relation detection, randomized subset-sum, and shortest vector problems.
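The $Q$-rationality assumption is easy to illustrate in isolation: once any estimator gets within $1/(2Q)$ of $\beta^*$, snapping to the nearest multiple of $1/Q$ recovers it exactly. The toy sketch below shows only this rounding step with made-up values; the paper's actual contribution, the PSLQ/LLL-based single-sample estimator, is not implemented here:

```python
import numpy as np

Q = 10                                              # common denominator (toy choice)
rng = np.random.default_rng(2)
beta_true = rng.integers(-5, 6, size=8) / Q         # Q-rational ground truth
noisy = beta_true + rng.normal(scale=0.01, size=8)  # estimate well within 1/(2Q)
beta_hat = np.round(noisy * Q) / Q                  # snap to the Q-rational lattice
```

This is why $Q$-rationality can substitute for sparsity: the set of admissible vectors is discrete, so a sufficiently accurate continuous estimate determines $\beta^*$ exactly.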
  5. Abstract. In this paper we quantify the temporal variability and image morphology of the horizon-scale emission from Sgr A*, as observed by the EHT in 2017 April at a wavelength of 1.3 mm. We find that the Sgr A* data exhibit variability that exceeds what can be explained by the uncertainties in the data or by the effects of interstellar scattering. The magnitude of this variability can be a substantial fraction of the correlated flux density, reaching ∼100% on some baselines. Through an exploration of simple geometric source models, we demonstrate that ring-like morphologies provide better fits to the Sgr A* data than do other morphologies with comparable complexity. We develop two strategies for fitting static geometric ring models to the time-variable Sgr A* data; one strategy fits models to short segments of data over which the source is static and averages these independent fits, while the other fits models to the full data set using a parametric model for the structural variability power spectrum around the average source structure. Both geometric modeling and image-domain feature extraction techniques determine the ring diameter to be 51.8 ± 2.3 μas (68% credible intervals), with the ring thickness constrained to have an FWHM between ∼30% and 50% of the ring diameter. To bring the diameter measurements to a common physical scale, we calibrate them using synthetic data generated from GRMHD simulations. This calibration constrains the angular size of the gravitational radius to be 4.8 (+1.4/−0.7) μas, which we combine with an independent distance measurement from maser parallaxes to determine the mass of Sgr A* to be (4.0 +1.1/−0.6) × 10^6 M⊙.
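The final step of this abstract is a unit conversion from the angular gravitational radius θ_g = GM/(c²D) to a mass. A minimal sketch, assuming a maser-parallax distance of roughly 8.15 kpc (a representative value, not quoted in the abstract):

```python
import numpy as np

G = 6.674e-11                        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                          # speed of light, m/s
M_sun = 1.989e30                     # solar mass, kg
pc = 3.0857e16                       # parsec, m
muas = np.pi / (180 * 3600) * 1e-6   # one microarcsecond in radians

theta_g = 4.8 * muas                 # angular gravitational radius (central value)
D = 8.15e3 * pc                      # Sgr A* distance, assumed ~8.15 kpc

R_g = theta_g * D                    # gravitational radius GM/c^2 in metres
M = R_g * c**2 / G / M_sun           # implied black-hole mass in solar masses
```

With these inputs M comes out near 4 × 10^6 solar masses, consistent with the value quoted above; the asymmetric error bars propagate from the θ_g and distance uncertainties, which this sketch ignores.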