
Title: Stable recovery of planar regions with algebraic boundaries in Bernstein form
Abstract: We present a new method for the stable reconstruction of a class of binary images from a small number of measurements. The images we consider are characteristic functions of algebraic domains, that is, domains defined as zero loci of bivariate polynomials, and we assume that only a finite set of uniform samples of each image is known. The solution to such a problem can be set up in terms of linear equations associated with a set of image moments. However, the sensitivity of the moments to noise makes the numerical solution highly unstable. To derive a robust image recovery algorithm, we represent the algebraic polynomials and the corresponding image moments in terms of bivariate Bernstein polynomials and apply polynomial-generating, refinable sampling kernels. This approach is robust to noise, computationally fast, and simple to implement. We illustrate the performance of our reconstruction algorithm on noisy samples through extensive numerical experiments. Our code is released as open source and is freely available.
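Two of the ingredients named in this abstract, bivariate polynomials in Bernstein form and image moments computed from uniform samples, can be illustrated with a short sketch. The snippet below is not the paper's recovery algorithm; the degree, grid size, and example coefficients are assumptions chosen only to produce a nonempty domain.

```python
# Minimal sketch (not the paper's algorithm): evaluate a bivariate polynomial in
# the Bernstein basis and approximate moments of the resulting binary image from
# uniform samples.  Degree, grid size N, and coefficients C are assumptions.
import numpy as np
from scipy.special import comb

def bernstein_basis(n, t):
    """All Bernstein basis polynomials B_{k,n}(t), k = 0..n, for t in [0, 1]."""
    k = np.arange(n + 1)
    return comb(n, k) * t[..., None] ** k * (1 - t[..., None]) ** (n - k)

def eval_bernstein_2d(C, x, y):
    """p(x, y) = sum_{i,j} C[i, j] * B_{i,n}(x) * B_{j,m}(y)."""
    n, m = C.shape[0] - 1, C.shape[1] - 1
    return np.einsum("...i,ij,...j->...", bernstein_basis(n, x), C, bernstein_basis(m, y))

# Binary image: characteristic function of the algebraic domain {p <= 0}.
N = 256
xs = ys = (np.arange(N) + 0.5) / N                      # uniform samples of [0, 1]^2
X, Y = np.meshgrid(xs, ys, indexing="ij")
C = np.array([[ 0.15, -0.10,  0.15],                    # assumed example coefficients
              [-0.10, -0.35, -0.10],
              [ 0.15, -0.10,  0.15]])
image = (eval_bernstein_2d(C, X, Y) <= 0).astype(float)

# Moments m_{a,b} = integral of x^a y^b over the domain, via a Riemann sum.
def moment(img, a, b):
    return np.sum(img * X**a * Y**b) / N**2

print(moment(image, 0, 0), moment(image, 1, 0), moment(image, 0, 1))  # area, first moments
```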
Award ID(s): 1720452, 1720487
Journal Name: Advances in Computational Mathematics
Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract: The goal of this study is to develop a new computed tomography (CT) image reconstruction method, aiming to improve the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise constant approximations of the integral equation that describes the CT projection data acquisition process. Using these approximations imposes a bottleneck model error and results in a discrete system of large size. We propose a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG to the resulting ill-posed linear system, which may lead to a sparse linear representation of the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem whose objective function consists of a weighted least-squares fidelity term, a regularization term, and a constraint term. Here, the weight matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem. Convergence analysis is provided for the resulting iterative algorithm. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational costs. These methods include SART without regularization and with quadratic regularization, the traditional total variation (TV) regularized reconstruction method, and the TV-superiorized conjugate gradient method on the pixel grid.
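The optimization structure described here (a weighted least-squares fidelity term plus a regularizer, solved by a SART-type preconditioned proximal iteration) can be sketched on a toy problem. The sketch below is not the paper's CAUG method: a random nonnegative matrix stands in for the CT projection operator, plain soft-thresholding (the l1 prox) stands in for the CAUG regularizer, and the step count and lambda are assumptions.

```python
# Sketch of a SART-type preconditioned proximal iteration for
#   min_x  (1/2) ||A x - b||_W^2 + lambda * ||x||_1,
# with W, V the SART diagonal scalings (inverse row sums / inverse column sums).
import numpy as np

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((120, 80)))       # toy nonnegative "projection" matrix
x_true = np.zeros(80)
x_true[::10] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(120)

W = 1.0 / np.maximum(A.sum(axis=1), 1e-12)       # inverse row sums
V = 1.0 / np.maximum(A.sum(axis=0), 1e-12)       # inverse column sums
lam = 1e-3

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(80)
for _ in range(200):
    grad = A.T @ (W * (A @ x - b))               # weighted least-squares gradient
    x = soft_threshold(x - V * grad, lam * V)    # preconditioned proximal step
print(np.linalg.norm(x - x_true))                # recovery error on the toy problem
```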
  2. Positron emission tomography (PET) is traditionally modeled by discrete systems. Such models may be viewed as piecewise constant approximations of the underlying continuous model for the physical processes and geometry of PET imaging. Due to the low accuracy of piecewise constant approximations, discrete models introduce an irreducible modeling error that fundamentally limits the quality of reconstructed images. To address this bottleneck, we propose an integral equation model for PET imaging based on physical and geometrical considerations, which accurately describes the true coincidences. We show that the proposed integral equation model is equivalent to the existing idealized model in terms of line integrals, which is accurate but not suitable for numerical approximation. The proposed model allows us to discretize it using higher-accuracy approximation methods. In particular, we discretize the integral equation by using the collocation principle with piecewise linear polynomials. The discretization leads to new ill-conditioned discrete systems for PET reconstruction, which are further regularized by a novel wavelet-based regularizer. The resulting non-smooth optimization problem is then solved by a preconditioned proximity fixed-point algorithm. Convergence of the algorithm is established for a range of parameters involved in the algorithm. The proposed integral equation model, combined with the discretization, regularization, and optimization algorithm, provides a new PET image reconstruction method. Numerical results reveal that the proposed model substantially outperforms the conventional discrete model in terms of consistency with simulated projection data and reconstructed image quality. This indicates that the proposed integral equation model, with appropriate discretization and regularization, can significantly reduce modeling errors and suppress noise, leading to improved image quality and projection data estimation.
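To illustrate what collocation with piecewise linear polynomials means in this setting, here is a minimal sketch on a generic one-dimensional Fredholm integral equation of the first kind. The kernel, the "true" profile, the grid size, and the Tikhonov parameter are assumptions and bear no relation to the PET system of the abstract; the tiny Tikhonov term stands in for the wavelet-based regularizer.

```python
# Collocation sketch for  \int_0^1 k(s, t) f(t) dt = g(s):
# expand f in piecewise-linear "hat" functions and enforce the equation at the nodes.
import numpy as np

n = 64
t = np.linspace(0.0, 1.0, n)                      # nodes, also used as collocation points
h = t[1] - t[0]

def hat(j, x):
    """Piecewise-linear basis function centered at node j."""
    return np.clip(1.0 - np.abs(x - t[j]) / h, 0.0, None)

k = lambda s, x: np.exp(-np.abs(s - x))           # assumed smooth kernel
f_true = lambda x: np.sin(np.pi * x)              # assumed "true" profile

# Quadrature grid for the integrals defining the collocation matrix and data.
xq = (np.arange(2000) + 0.5) / 2000.0
dxq = 1.0 / 2000.0
M = np.array([[np.sum(k(t[i], xq) * hat(j, xq)) * dxq for j in range(n)]
              for i in range(n)])
g = np.array([np.sum(k(t[i], xq) * f_true(xq)) * dxq for i in range(n)])

# The collocation system is ill-conditioned (as the abstract notes), so a tiny
# Tikhonov term stands in here for the wavelet-based regularizer.
alpha = 1e-8
coeffs = np.linalg.solve(M.T @ M + alpha * np.eye(n), M.T @ g)
print(np.linalg.cond(M), np.linalg.norm(M @ coeffs - g))
```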
  3. Deep neural networks have provided state-of-the-art solutions for problems such as image denoising, which implicitly rely on a prior probability model of natural images. Two recent lines of work – Denoising Score Matching and Plug-and-Play – propose methodologies for drawing samples from this implicit prior and using it to solve inverse problems, respectively. Here, we develop a parsimonious and robust generalization of these ideas. We rely on a classic statistical result that shows the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this to derive a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any deterministic linear inverse problem, with no additional training, thus extending the power of supervised learning for denoising to a much broader set of problems. The algorithm relies on minimal assumptions and exhibits robust convergence over a wide range of parameter choices. To demonstrate the generality of our method, we use it to obtain state-of-the-art levels of unsupervised performance for deblurring, super-resolution, and compressive sensing. 
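The classic result referred to here is the Miyasawa/Tweedie identity: for additive Gaussian noise of variance sigma^2, the least-squares denoiser satisfies D(y) = y + sigma^2 * grad_y log p(y), so the denoiser residual is proportional to the gradient of the log of the noisy density. The sketch below loosely follows the coarse-to-fine stochastic ascent described in the abstract; the toy shrinkage denoiser stands in for a trained blind-denoising CNN, and the step size h, noise-injection factor beta, and stopping threshold are assumptions.

```python
# Sketch: ascend the denoiser residual (a proxy for the score) while injecting a
# controlled amount of fresh noise, stopping when the estimated noise level is small.
import numpy as np

rng = np.random.default_rng(0)
target = rng.uniform(0.0, 1.0, size=(16, 16))         # toy "clean image"

def toy_denoiser(y):
    # Stand-in for a trained denoiser: shrink toward a fixed image, i.e. the
    # least-squares denoiser for a prior concentrated around `target`.
    return y + 0.5 * (target - y)

def sample_from_implicit_prior(denoiser, shape, h=0.2, beta=0.5, sigma_stop=1e-3):
    y = rng.uniform(0.0, 1.0, size=shape)             # start from pure noise
    while True:
        d = denoiser(y) - y                           # ~ sigma^2 * grad_y log p(y)
        sigma = np.linalg.norm(d) / np.sqrt(d.size)   # remaining effective noise level
        if sigma <= sigma_stop:
            return y
        gamma = np.sqrt(max((1 - beta * h) ** 2 - (1 - h) ** 2, 0.0)) * sigma
        y = y + h * d + gamma * rng.standard_normal(size=shape)   # partial step + noise

sample = sample_from_implicit_prior(toy_denoiser, target.shape)
print(np.max(np.abs(sample - target)))                # the toy sampler lands near `target`
```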
  4. We give efficient algorithms for finding a power-sum decomposition of an input polynomial into a sum of powers of lower-degree components. The case of linear components is equivalent to the well-studied tensor decomposition problem, while the quadratic case occurs naturally in studying identifiability of non-spherical Gaussian mixtures from low-order moments. Unlike tensor decomposition, neither the unique identifiability nor algorithms for this problem are well understood. Even for the simplest setting of quadratic components, prior work of [GHK15] yields an algorithm only in a restricted parameter regime. The more general recent result of [GKS20] builds an algebraic approach that can handle any number of components, but only when the relevant parameters are sufficiently large, yielding no bounds in several basic cases, and it tolerates only inverse-exponential noise. Our results obtain a substantial quantitative improvement on both of these prior works, even in the base case of quadratic components. Specifically, our algorithm succeeds in decomposing a power-sum of generic quadratics, and more generally a power-sum of generic polynomials of any fixed degree, for a wider range of the number of components. Our algorithm relies only on basic numerical linear algebraic primitives, is exact (i.e., it attains arbitrarily small error up to numerical precision), and handles inverse-polynomial noise when the components have random Gaussian coefficients.
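As a reminder of the problem statement only (not of the algorithm described above), the snippet below forms a power-sum of random quadratics and evaluates it; the dimension, the number of components, and the power are assumptions.

```python
# Toy illustration of a power-sum of quadratics  p = sum_i q_i^d :
# the decomposition problem asks to recover the q_i, up to natural symmetries,
# given access to p alone.
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 4, 3, 3                                  # variables, components, power (assumed)

# Each quadratic q_i(x) = x^T A_i x + b_i^T x + c_i with random Gaussian coefficients.
A = rng.standard_normal((m, n, n))
b = rng.standard_normal((m, n))
c = rng.standard_normal(m)

def q(i, x):
    return x @ A[i] @ x + b[i] @ x + c[i]

def p(x):
    """The d-th power-sum of the quadratics."""
    return sum(q(i, x) ** d for i in range(m))

x = rng.standard_normal(n)
print(p(x))                                        # evaluate the power-sum at one point
```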
  5. Abstract: Background

    Lung cancer is the deadliest and the second most common cancer in the United States, largely because early-stage disease often produces no symptoms that would prompt diagnosis. Pulmonary nodules are small abnormal regions that may be correlated with the occurrence of lung cancer. Early detection of these nodules is critical because it can significantly improve patients' survival rates. Thoracic thin‐sliced computed tomography (CT) scanning has emerged as a widely used method for the diagnosis and prognosis of lung abnormalities.


    The standard clinical workflow for detecting pulmonary nodules relies on radiologists analyzing CT images to assess the risk factors of cancerous nodules. However, this approach can be error‐prone because nodules form for a variety of reasons, such as pollutants and infections. Deep learning (DL) algorithms have recently demonstrated remarkable success in medical image classification and segmentation. As DL becomes an ever more important assistant to radiologists in nodule detection, it is imperative to ensure that the algorithm and the radiologist can understand each other's decisions. This study aims to develop a framework integrating explainable AI methods to achieve accurate pulmonary nodule detection.


    A robust and explainable detection (RXD) framework is proposed, focusing on reducing false positives in pulmonary nodule detection. Its implementation is based on an explanation supervision method, which uses radiologists' nodule contours as supervision signals to force the model to learn nodule morphologies, improving its ability to learn from small datasets. In addition, two imputation methods are applied to the nodule region annotations to reduce the noise in the human annotations and give the model robust attributions that meet human expectations. Sets of 480, 265, and 265 CT images from the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC‐IDRI) dataset are used for training, validation, and testing, respectively.


    Using only 10, 30, 50, and 100 training samples in turn, our method consistently improves the classification performance and explanation quality of the baseline in terms of Area Under the Curve (AUC) and Intersection over Union (IoU). In particular, our framework with a learnable imputation kernel improves IoU over the baseline by 24.0% to 80.0%. A pre‐defined Gaussian imputation kernel achieves an even greater improvement, from 38.4% to 118.8% over the baseline. Compared to the baseline trained on 100 samples, our method shows a smaller drop in AUC when trained on fewer samples. A comprehensive comparison of interpretability shows that our method aligns better with expert opinions.


    A pulmonary nodule detection framework was demonstrated using public thoracic CT image datasets. The framework integrates the robust explanation supervision (RES) technique to ensure strong performance in both nodule classification and morphology learning. The method can reduce the workload of radiologists and enable them to focus on the diagnosis and prognosis of potentially cancerous pulmonary nodules at an early stage, improving outcomes for lung cancer patients.
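The general explanation-supervision idea described in this abstract can be sketched as a classification loss plus a term that aligns a model attribution map with the (smoothed) nodule-contour annotation. The sketch below is not the study's RES implementation: the simple input-gradient attribution, the Gaussian smoothing used as a stand-in for the imputation kernel, and the weight lam are all assumptions for illustration.

```python
# Explanation-supervision sketch:
#   total loss = BCE(classification) + lam * MSE(attribution map, smoothed contour mask)
import torch
import torch.nn.functional as F

def gaussian_kernel(size=7, sigma=2.0):
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def explanation_supervised_loss(model, x, y, mask, lam=1.0):
    """x: (B,1,H,W) CT patches, y: (B,) labels, mask: (B,1,H,W) contour annotations."""
    x = x.clone().requires_grad_(True)
    logits = model(x).squeeze(-1)                 # model assumed to output (B, 1) logits
    cls_loss = F.binary_cross_entropy_with_logits(logits, y.float())

    # Input-gradient attribution: a simple stand-in for the attribution method.
    grads, = torch.autograd.grad(logits.sum(), x, create_graph=True)
    attribution = grads.abs()
    attribution = attribution / (attribution.amax(dim=(2, 3), keepdim=True) + 1e-8)

    # Gaussian smoothing of the binary annotation, standing in for the imputation kernel.
    smoothed = F.conv2d(mask, gaussian_kernel().to(mask.device), padding=3)
    smoothed = smoothed / (smoothed.amax(dim=(2, 3), keepdim=True) + 1e-8)

    exp_loss = F.mse_loss(attribution, smoothed)
    return cls_loss + lam * exp_loss
```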
