Abstract The goal of this study is to develop a new computed tomography (CT) image reconstruction method that improves the quality of reconstructed images over existing methods while reducing computational cost. Existing CT reconstruction is modeled by pixel-based piecewise constant approximations of the integral equation that describes the CT projection data acquisition process. These approximations impose a model-error bottleneck and result in a discrete system of large size. We propose a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG to the resulting ill-posed linear system, which may lead to a sparse linear representation of the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem whose objective function consists of a weighted least-squares fidelity term, a regularization term and a constraint term; the weight matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem, and provide a convergence analysis for the resulting iteration. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational cost. The comparison methods include SART without regularization and with quadratic regularization, the traditional total variation (TV) regularized reconstruction method, and the TV-superiorized conjugate gradient method on the pixel grid.
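To make the optimization step concrete, here is a minimal sketch, not the authors' implementation, of a SART-preconditioned proximal (fixed-point proximity) iteration. It assumes a nonnegative sparse system matrix `A`, and it substitutes a simple l1 regularizer, whose proximity operator is soft-thresholding, for the CAUG-based regularizer; the diagonal weights `W` and `Vinv` are the standard SART inverse row sums and inverse column sums.

```python
import numpy as np
import scipy.sparse as sp

def sart_preconditioned_prox(A, b, lam, n_iter=200):
    """Sketch: minimize ||Ax - b||_W^2 + lam * ||x||_1 with a SART-type
    preconditioned proximal-gradient (fixed-point proximity) iteration.
    Hypothetical simplification: l1 stands in for the CAUG regularizer."""
    row_sums = np.asarray(A.sum(axis=1)).ravel()
    col_sums = np.asarray(A.sum(axis=0)).ravel()
    W = 1.0 / np.maximum(row_sums, 1e-12)      # weights the data residual
    Vinv = 1.0 / np.maximum(col_sums, 1e-12)   # diagonal SART preconditioner
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # SART-type update: preconditioned gradient step on the fidelity term.
        z = x + Vinv * (A.T @ (W * (b - A @ x)))
        # Proximity operator of lam*||.||_1 in the V-weighted metric:
        # per-coordinate soft-thresholding with thresholds lam / v_j.
        x = np.sign(z) * np.maximum(np.abs(z) - lam * Vinv, 0.0)
    return x
```

Taking the proximity operator in the metric induced by the SART preconditioner is what turns the threshold into a per-coordinate quantity, so each step is exactly one SART sweep followed by a cheap shrinkage.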
Stable recovery of planar regions with algebraic boundaries in Bernstein form
Abstract We present a new method for the stable reconstruction of a class of binary images from a small number of measurements. The images we consider are characteristic functions of algebraic domains, that is, domains defined as zero loci of bivariate polynomials, and we assume that only a finite set of uniform samples of each image is known. The solution to such a problem can be set up in terms of linear equations associated with a set of image moments. However, the sensitivity of the moments to noise makes the numerical solution highly unstable. To derive a robust image recovery algorithm, we represent the algebraic polynomials and the corresponding image moments in terms of bivariate Bernstein polynomials, and apply polynomial-generating, refinable sampling kernels. This approach is robust to noise, computationally fast and simple to implement. We illustrate the performance of our reconstruction algorithm on noisy samples through extensive numerical experiments. Our code is released as open source and freely available.
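As a hedged illustration of the quantities involved (not the authors' released code), the sketch below evaluates the bivariate Bernstein basis on the unit square and approximates the Bernstein-form image moments of a uniformly sampled binary image. The function names and the plain Riemann-sum quadrature are our own simplifications, standing in for the refinable sampling kernels of the paper.

```python
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """Evaluate the n+1 univariate Bernstein polynomials of degree n at t."""
    t = np.asarray(t, dtype=float)
    return np.stack([comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)])

def bernstein_moments(image, n):
    """Approximate the (n+1) x (n+1) Bernstein moments of a binary image
    sampled uniformly on [0,1]^2, via a Riemann sum over the sample grid."""
    h, w = image.shape
    x = (np.arange(w) + 0.5) / w               # pixel-center abscissae
    y = (np.arange(h) + 0.5) / h
    Bx = bernstein_basis(n, x)                 # shape (n+1, w)
    By = bernstein_basis(n, y)                 # shape (n+1, h)
    # moments[j, k] ~ integral of 1_Omega(x, y) * B_j(y) * B_k(x) dx dy
    return (By @ image @ Bx.T) / (h * w)
```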
- PAR ID: 10276919
- Date Published:
- Journal Name: Advances in Computational Mathematics
- Volume: 47
- Issue: 2
- ISSN: 1019-7168
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Positron emission tomography (PET) is traditionally modeled by discrete systems. Such models may be viewed as piecewise constant approximations of the underlying continuous model for the physical processes and geometry of PET imaging. Owing to the low accuracy of piecewise constant approximations, discrete models introduce an irreducible modeling error that fundamentally limits the quality of reconstructed images. To address this bottleneck, we propose an integral equation model for PET imaging, based on physical and geometrical considerations, which accurately describes the true coincidences. We show that the proposed integral equation model is equivalent to the existing idealized model in terms of line integrals, which is accurate but not suitable for numerical approximation. The proposed model can be discretized using higher-accuracy approximation methods; in particular, we discretize the integral equation by using the collocation principle with piecewise linear polynomials. The discretization leads to new ill-conditioned discrete systems for PET reconstruction, which are further regularized by a novel wavelet-based regularizer. The resulting non-smooth optimization problem is then solved by a preconditioned fixed-point proximity algorithm, whose convergence is established for a range of the parameters involved. The integral equation model, combined with the discretization, regularization and optimization algorithm, yields a new PET image reconstruction method. Numerical results show that the proposed model substantially outperforms the conventional discrete model in terms of consistency with simulated projection data and reconstructed image quality, indicating that the integral equation model with an appropriate discretization and regularizer can significantly reduce modeling error and suppress noise, leading to improved image quality and projection data estimation.
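To illustrate the collocation principle in its simplest form, the following toy sketch, a hypothetical one-dimensional analogue rather than the PET system itself, discretizes a Fredholm integral equation of the first kind, int_0^1 k(s,t) f(t) dt = g(s), with piecewise linear hat basis functions and collocation at the grid nodes.

```python
import numpy as np

def collocation_matrix(kernel, nodes, n_quad=400):
    """Discretize int_0^1 k(s,t) f(t) dt = g(s) by collocation: expand f in
    piecewise linear hat functions phi_j on `nodes` (increasing, in [0,1])
    and enforce the equation at the same nodes. Integrals are approximated
    by a simple uniform quadrature rule."""
    t = np.linspace(0.0, 1.0, n_quad)          # quadrature points
    w = np.full(n_quad, 1.0 / n_quad)          # simple uniform weights
    # hat[j, q] = phi_j(t_q): interpolating the j-th unit vector gives a hat.
    hat = np.array([np.interp(t, nodes, np.eye(len(nodes))[j])
                    for j in range(len(nodes))])
    K = kernel(nodes[:, None], t[None, :])     # k(s_i, t_q), vectorized kernel
    # A[i, j] = sum_q w_q * k(s_i, t_q) * phi_j(t_q)
    return K @ (w[:, None] * hat.T)
```

Solving the resulting (typically ill-conditioned) system A f = g(nodes), with regularization, recovers nodal values of f; the PET model follows the same principle with piecewise linear elements on the image domain, e.g. `collocation_matrix(lambda s, t: np.exp(-50 * (s - t)**2), np.linspace(0, 1, 33))`.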
We present a computationally efficient and numerically robust algorithm for finding the real roots of polynomials. It begins by determining the intervals on which the given polynomial is monotonic. It then performs a robust variant of Newton iterations to find the real root within each interval, providing fast and guaranteed convergence that satisfies the given error bound, as permitted by the numerical precision used. For cubic polynomials, the algorithm is both more accurate and faster than the analytical solution and than directly applying Newton iterations. It extends trivially to polynomials of arbitrary degree, but it finds only the real roots and has quadratic worst-case complexity in the polynomial's degree. We show that our method outperforms the alternative polynomial solvers we tested up to degree 20. We also present an example rendering application with a known efficient numerical solution, and show that our method provides faster, more accurate and more robust solutions by solving polynomials of degree 10.
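A simplified reading of the described approach can be implemented compactly; the sketch below is ours, not the paper's code. It recursively computes the real roots of the derivative to split the search range into monotonic intervals, then runs bisection-safeguarded Newton iterations in every interval whose endpoint values change sign.

```python
import numpy as np

def deriv(c):
    """Coefficients (highest degree first) of the derivative polynomial."""
    return c[:-1] * np.arange(len(c) - 1, 0, -1)

def real_roots(c, lo=-1e6, hi=1e6, tol=1e-12):
    """Real roots in [lo, hi] of the polynomial with coefficients c (highest
    degree first): split the range into monotonic intervals at the roots of
    the derivative, then safeguarded Newton in each sign-change interval."""
    c = np.trim_zeros(np.asarray(c, dtype=float), 'f')
    if len(c) < 2:
        return []
    if len(c) == 2:                                    # linear base case
        r = -c[1] / c[0]
        return [r] if lo <= r <= hi else []
    dc = deriv(c)
    edges = [lo] + real_roots(dc, lo, hi, tol) + [hi]  # monotonic pieces
    roots = [e for e in edges if np.polyval(c, e) == 0.0]
    for a, b in zip(edges[:-1], edges[1:]):
        fa, fb = np.polyval(c, a), np.polyval(c, b)
        if fa * fb >= 0.0:            # exact edge roots were collected above
            continue
        x = 0.5 * (a + b)
        for _ in range(100):          # Newton step with bisection fallback
            fx = np.polyval(c, x)
            if fx == 0.0 or b - a < tol:
                break
            a, b = (x, b) if fx * fa > 0.0 else (a, x)
            d = np.polyval(dc, x)
            x_new = x - fx / d if d != 0.0 else 0.5 * (a + b)
            x = x_new if a < x_new < b else 0.5 * (a + b)
        roots.append(x)
    return sorted(roots)
```

For example, `real_roots([1.0, 0.0, -2.0, 0.0])`, i.e. the roots of x^3 - 2x, returns approximately -1.4142, 0 and 1.4142. Restricting Newton to a monotonic bracket is what gives the guaranteed convergence the abstract refers to.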
We give efficient algorithms for finding a power-sum decomposition of an input polynomial, that is, a decomposition into a sum of d-th powers of component polynomials p_i. The case of linear p_i's is equivalent to the well-studied tensor decomposition problem, while the quadratic case occurs naturally in studying the identifiability of non-spherical Gaussian mixtures from low-order moments. Unlike tensor decomposition, both unique identifiability and algorithms for this problem are not well understood. For the simplest setting of quadratic p_i's, the prior work of [GHK15] yields an algorithm only when the number of components is small relative to the dimension. On the other hand, the more general recent result of [GKS20] builds an algebraic approach that handles a larger number of components, but only when the power d is large enough (yielding no bounds for small d) and only under inverse-exponential noise. Our results obtain a substantial quantitative improvement on both of the prior works above, even in the base case of quadratic p_i's. Specifically, our algorithm succeeds in decomposing a sum of generic quadratic p_i's, and more generally the d-th power-sum of generic polynomials for any d, for a substantially larger number of components. Our algorithm relies only on basic numerical linear-algebraic primitives, is exact (i.e., it obtains arbitrarily small error up to numerical precision), and handles inverse-polynomial noise when the p_i's have random Gaussian coefficients.
Deep neural networks have provided state-of-the-art solutions for problems such as image denoising, and implicitly rely on a prior probability model of natural images. Two recent lines of work, Denoising Score Matching and Plug-and-Play, propose methodologies for drawing samples from this implicit prior and for using it to solve inverse problems, respectively. Here, we develop a parsimonious and robust generalization of these ideas. We rely on a classical statistical result showing that the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this to derive a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any deterministic linear inverse problem, with no additional training, thus extending the power of supervised learning for denoising to a much broader set of problems. The algorithm relies on minimal assumptions and exhibits robust convergence over a wide range of parameter choices. To demonstrate the generality of our method, we use it to obtain state-of-the-art levels of unsupervised performance for deblurring, super-resolution and compressive sensing.
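The ascent procedure can be sketched schematically as follows; this is our hedged reading of the described algorithm, with `denoiser` standing in for any blind Gaussian denoiser and `h` and `beta` hypothetical step-size and noise-injection parameters. The residual `denoiser(y) - y` serves as the estimate of sigma^2 * grad log p(y) supplied by the classical least-squares identity.

```python
import numpy as np

def sample_from_implicit_prior(denoiser, shape, h=0.05, beta=0.5,
                               sigma_stop=0.01, rng=np.random.default_rng()):
    """Stochastic coarse-to-fine gradient ascent on the prior implicit in a
    blind denoiser. `denoiser` maps a noisy array to a denoised one; its
    residual approximates sigma^2 * grad log p of the noisy density."""
    y = rng.normal(0.5, 1.0, shape)            # start from a high-noise point
    for _ in range(1000):
        d = denoiser(y) - y                    # ~ sigma^2 * grad log p(y)
        sigma = np.sqrt(np.mean(d ** 2))       # effective noise level estimate
        if sigma < sigma_stop:                 # fine scale reached: stop
            break
        # Inject a controlled amount of fresh noise so the effective noise
        # level decays gradually (coarse-to-fine), rather than collapsing.
        gamma = np.sqrt(max((1 - beta * h)**2 - (1 - h)**2, 0.0)) * sigma
        y = y + h * d + gamma * rng.normal(size=y.shape)
    return y
```

The noise-injection term is what distinguishes sampling from plain denoising: with `beta = 1` no noise is added and the iteration reduces to deterministic ascent toward a local probability maximum.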