Title: Overdispersed Photon-Limited Sparse Signal Recovery Using Nonconvex Regularization
This paper investigates the application of the ℓp quasi-norm, where 0 < p < 1, in contexts characterized by photon-limited signals such as medical imaging and night vision. In these environments, low-photon count images have typically been modeled using Poisson statistics. In related algorithms, the ℓ1 norm is commonly employed as a regularization method to promote sparsity in the reconstruction. However, recent research suggests that using the ℓp quasi-norm may yield reconstructions with lower error. In this paper, we investigate the use of negative binomial statistics, which provide a more general model than Poisson statistics, in conjunction with the ℓp quasi-norm for recovering sparse signals in low-photon count imaging settings.
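As a rough illustration of the kind of objective described above, the sketch below combines the negative binomial negative log-likelihood (mean μ = Ax, dispersion r) with an ε-smoothed ℓp quasi-norm penalty and minimizes it by projected gradient descent. This is a minimal sketch under stated assumptions, not the paper's algorithm; the parameterization, smoothing, step size, and function names are all illustrative.

```python
import numpy as np

def nb_lp_objective(x, A, y, r, tau, p, eps=1e-8):
    """Negative binomial NLL with mean mu = A @ x and dispersion r,
    plus a smoothed lp penalty tau * sum((x^2 + eps)^(p/2)).
    This parameterization is an illustrative assumption."""
    mu = A @ x
    nll = np.sum((y + r) * np.log(mu + r) - y * np.log(mu + eps))
    reg = tau * np.sum((x**2 + eps) ** (p / 2))
    return nll + reg

def nb_lp_gradient(x, A, y, r, tau, p, eps=1e-8):
    """Gradient of nb_lp_objective with respect to x."""
    mu = A @ x
    grad_nll = A.T @ ((y + r) / (mu + r) - y / (mu + eps))
    grad_reg = tau * p * x * (x**2 + eps) ** (p / 2 - 1)
    return grad_nll + grad_reg

def recover(A, y, r=5.0, tau=0.1, p=0.5, steps=2000, lr=1e-3):
    """Projected gradient descent onto the nonnegative orthant,
    since photon intensities are nonnegative."""
    x = np.full(A.shape[1], 0.1)  # strictly positive start
    for _ in range(steps):
        x = np.maximum(x - lr * nb_lp_gradient(x, A, y, r, tau, p), 0.0)
    return x
```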
Award ID(s):
1840265 1741490
PAR ID:
10505324
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Journal Name:
2023 IEEE 9th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
ISBN:
979-8-3503-4452-3
Page Range / eLocation ID:
191 to 195
Format(s):
Medium: X
Location:
Herradura, Costa Rica
Sponsoring Org:
National Science Foundation
More Like this
  1. Low-photon count imaging has typically been modeled by Poisson statistics. This discrete probability distribution assumes that the mean and variance of a signal are equal. When a dataset exhibits greater variability than the Poisson model expects, the negative binomial distribution is a suitable overdispersed alternative. In this work, we present a framework for reconstructing sparse signals in these low-count, overdispersed settings. Specifically, we describe a gradient-based sequential quadratic optimization approach that minimizes the negative log-likelihood of the negative binomial distribution coupled with a sparsity-promoting regularization term. Numerical experiments on 1D and 2D sparse/compressible signals are presented.
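The abstract does not spell out the update, but a single step of a scheme in this spirit can be sketched as a gradient (quadratic-surrogate) step on the negative binomial negative log-likelihood followed by the closed-form prox of an ℓ1 penalty, i.e., soft-thresholding. The names and the choice of ℓ1 for the sparsity term are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def nb_nll_grad(x, A, y, r, eps=1e-8):
    """Gradient of the negative binomial negative log-likelihood,
    mean mu = A @ x, dispersion r (illustrative parameterization)."""
    mu = A @ x
    return A.T @ ((y + r) / (mu + r) - y / (mu + eps))

def sequential_step(x, A, y, r, tau, alpha):
    """One surrogate step: a gradient step of size alpha on the NB NLL,
    then the separable l1 subproblem solved exactly by soft-thresholding."""
    z = x - alpha * nb_nll_grad(x, A, y, r)
    return np.sign(z) * np.maximum(np.abs(z) - alpha * tau, 0.0)
```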
  2. This paper presents a new statistical method that enables the use of systematic errors in the maximum-likelihood regression of integer-count Poisson data to a parametric model. The method is primarily aimed at characterizing the goodness-of-fit statistic in the presence of the overdispersion induced by sources of systematic error, and is based on a quasi-maximum-likelihood method that retains the Poisson distribution of the data. We show that the Poisson deviance, the usual goodness-of-fit statistic, commonly referred to in astronomy as the Cash statistic, can be easily generalized in the presence of systematic errors under rather general conditions. The method and the associated statistics are first developed theoretically, then tested with the aid of numerical simulations, and further illustrated with real-life data from astronomical observations. The statistical methods presented in this paper are intended as a simple general-purpose framework for including additional sources of uncertainty in the analysis of integer-count data in a variety of practical data-analysis situations.
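For concreteness, the baseline statistic this abstract refers to, the Poisson deviance (Cash statistic), can be computed as below. The paper's generalization to systematic errors is not reproduced here, and the function name is an illustrative assumption.

```python
import numpy as np

def cash_statistic(y, mu):
    """Poisson deviance C = 2 * sum(mu - y + y*log(y/mu)) for integer
    counts y and model means mu; the y*log(y/mu) term is defined to be
    zero wherever y == 0."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    term = np.zeros_like(mu)
    mask = y > 0
    term[mask] = y[mask] * np.log(y[mask] / mu[mask])
    return 2.0 * np.sum(mu - y + term)
```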
  3. This study addresses the challenge of reconstructing sparse signals, a frequent occurrence in the context of overdispersed photon-limited imaging. While the noise behavior in such imaging settings is typically modeled using a Poisson distribution, the negative binomial distribution is more suitable in overdispersed scenarios where the noise variance exceeds the signal mean. Knowledge of the maximum and minimum signal intensity can be effectively utilized within the computational framework to enhance the accuracy of signal reconstruction. In this paper, we use a gradient-based method for sparse signal recovery that leverages a negative binomial distribution for noise modeling, enforces bound constraints to adhere to upper and lower signal intensity thresholds, and employs a sparsity-promoting regularization term. The numerical experiments we present demonstrate that the incorporation of these features significantly improves the reconstruction of sparse signals from overdispersed measurements. 
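A minimal sketch of the bound-constrained idea described above: a projected-gradient loop that uses the negative binomial data-fit gradient plus an ℓ1 subgradient and clips each iterate to known intensity bounds [lo, hi]. The step size, penalty, and names are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def bounded_recover(A, y, lo, hi, r=5.0, tau=0.1, lr=1e-3, steps=2000):
    """Projected gradient: NB negative log-likelihood gradient plus an
    l1 subgradient, with every iterate clipped to [lo, hi]."""
    x = np.full(A.shape[1], 0.5 * (lo + hi))  # start mid-range
    for _ in range(steps):
        mu = A @ x
        g = A.T @ ((y + r) / (mu + r) - y / (mu + 1e-8)) + tau * np.sign(x)
        x = np.clip(x - lr * g, lo, hi)  # enforce the intensity bounds
    return x
```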
  4.
    Randomized smoothing, using just a simple isotropic Gaussian distribution, has been shown to produce good robustness guarantees against ℓ2-norm bounded adversaries. In this work, we show that extending the smoothing technique to defend against other attack models can be challenging, especially in the high-dimensional regime. In particular, for a vast class of i.i.d. smoothing distributions, we prove that the largest ℓp-radius that can be certified decreases as O(1/d^{1/2 − 1/p}) with dimension d for p > 2. Notably, for p ≥ 2, this dependence on d is no better than that of the ℓp-radius that can be certified using isotropic Gaussian smoothing, essentially putting a matching lower bound on the robustness radius. When restricted to generalized Gaussian smoothing, these two bounds can be shown to be within a constant factor of each other in an asymptotic sense, establishing that Gaussian smoothing provides the best possible results, up to a constant factor, when p ≥ 2. We present experimental results on CIFAR to validate our theory. For other smoothing distributions, such as a uniform distribution within an ℓ1 or an ℓ∞-norm ball, we show upper bounds of the form O(1/d) and O(1/d^{1 − 1/p}), respectively, which have an even worse dependence on d.
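For context, the baseline ℓ2 certificate this abstract compares against (Gaussian randomized smoothing in the style of Cohen et al.) looks like the sketch below: estimate the smoothed classifier's top class by Monte Carlo, lower-bound its probability, and certify radius R = σ·Φ⁻¹(pA). The classifier f, the sample count, and the confidence handling are illustrative assumptions; this is not the paper's ℓp analysis.

```python
import numpy as np
from scipy.stats import norm, binomtest

def certify_l2(f, x, sigma, n=1000, alpha=0.001, num_classes=10):
    """Monte-Carlo sketch of Gaussian randomized smoothing: count class
    votes of a (hypothetical) classifier f under N(0, sigma^2 I) noise,
    lower-bound the top-class probability, and return the certified
    l2 radius sigma * Phi^{-1}(pA)."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        counts[f(x + sigma * np.random.randn(*x.shape))] += 1
    top = int(counts.argmax())
    # conservative lower confidence bound via a two-sided exact CI
    p_lower = binomtest(int(counts[top]), n).proportion_ci(
        confidence_level=1 - alpha, method="exact").low
    if p_lower <= 0.5:
        return top, 0.0  # abstain: no certificate at this level
    return top, sigma * norm.ppf(p_lower)
```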