Title: Super-resolution limit of the ESPRIT algorithm
The problem of imaging point objects can be formulated as estimation of an unknown atomic measure from its M+1 consecutive noisy Fourier coefficients. The standard resolution of this inverse problem is 1/M, and super-resolution refers to the capability of resolving atoms at a higher resolution. When any two atoms are less than 1/M apart, this recovery problem is highly challenging: many existing algorithms either cannot deal with this situation or require restrictive assumptions on the sign of the measure. ESPRIT is an efficient method that does not depend on the sign of the measure. This paper provides an explicit error bound on the support matching distance of ESPRIT in terms of the minimum singular value of Vandermonde matrices. When the support consists of multiple well-separated clumps and the noise is sufficiently small, the support error of ESPRIT scales like SRF^{2λ−2} × Noise, where the Super-Resolution Factor (SRF) governs the difficulty of the problem and λ is the cardinality of the largest clump. Our error bound matches the minimax rate of a special model with one clump of closely spaced atoms, up to a factor of M, in the small-noise regime, and therefore establishes the near-optimality of ESPRIT. Our theory is validated by numerical experiments.
Keywords: Super-resolution, subspace methods, ESPRIT, stability, uncertainty principle.
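As a concrete illustration of the measurement model and of ESPRIT itself, the following minimal Python sketch forms a Hankel matrix from the M+1 noisy Fourier coefficients, extracts the signal subspace with an SVD, and reads the atom locations off the eigenvalues of the shift-invariance operator. The sign convention y_k = Σ_j c_j e^{2πi k x_j}, the Hankel dimensions, and the toy clump parameters are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def esprit(y, num_atoms, L=None):
    """Estimate atom locations x_j in [0, 1) from noisy trigonometric-moment data
    y_k = sum_j c_j exp(2*pi*1j*k*x_j) + noise, k = 0, ..., M (a toy convention)."""
    M = len(y) - 1
    if L is None:
        L = M // 2 + 1                               # number of Hankel rows
    # Hankel matrix H[p, q] = y[p + q], size L x (M - L + 2)
    H = np.array([[y[p + q] for q in range(M - L + 2)] for p in range(L)])
    # Signal subspace: top num_atoms left singular vectors of H
    U = np.linalg.svd(H, full_matrices=False)[0][:, :num_atoms]
    # Shift invariance: drop the last row vs. drop the first row
    Psi = np.linalg.pinv(U[:-1, :]) @ U[1:, :]
    eig = np.linalg.eigvals(Psi)                     # approximately exp(2*pi*1j*x_j)
    return np.sort(np.mod(np.angle(eig) / (2 * np.pi), 1.0))

# Toy experiment: one clump of three atoms spaced below 1/M (super-resolution regime)
rng = np.random.default_rng(0)
x_true = np.array([0.300, 0.315, 0.330])
c_true = np.array([1.0, -0.8, 1.2])
M = 64
k = np.arange(M + 1)
y = np.exp(2j * np.pi * np.outer(k, x_true)) @ c_true
y = y + 1e-4 * (rng.standard_normal(M + 1) + 1j * rng.standard_normal(M + 1))
print(esprit(y, num_atoms=3))                        # close to x_true for small noise
```

Shrinking the spacing within x_true relative to 1/M and raising the noise level is a simple way to probe the SRF^{2λ−2} × Noise scaling regime analyzed in the paper.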
Award ID(s):
1818751 1934568
PAR ID:
10148672
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Information Theory
ISSN:
0018-9448
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the problem of finding an approximate solution to ℓ1 regression while only observing a small number of labels. Given an n×d unlabeled data matrix X, we must choose a small set of m≪n rows to observe the labels of, then output an estimate β̂ whose error on the original problem is within a 1+ε factor of optimal. We show that sampling from X according to its Lewis weights and outputting the empirical minimizer succeeds with probability 1−δ for m > O((1/ε²) d log(d/(εδ))). This is analogous to the performance of sampling according to leverage scores for ℓ2 regression, but with exponentially better dependence on δ. We also give a corresponding lower bound of Ω(d/ε² + (d + 1/ε²) log(1/δ)).
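A rough Python sketch of the scheme in item 1, assuming the standard fixed-point iteration for ℓ1 Lewis weights and a linear-programming formulation of the subsampled ℓ1 regression; the function names, iteration count, and solver choice are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def lewis_weights_l1(X, iters=30):
    """Approximate l1 Lewis weights via the standard fixed-point iteration
    w_i <- (x_i^T (X^T diag(1/w) X)^{-1} x_i)^{1/2}."""
    n, d = X.shape
    w = np.ones(n)
    for _ in range(iters):
        M = X.T @ (X / w[:, None])                 # X^T diag(1/w) X
        G = np.linalg.solve(M, X.T)                # M^{-1} X^T, shape d x n
        w = np.sqrt(np.einsum("ij,ji->i", X, G))   # w_i = sqrt(x_i^T M^{-1} x_i)
    return w

def sampled_l1_regression(X, y, m, seed=0):
    """Sample m rows with probability proportional to their Lewis weights, then
    solve the importance-weighted subsampled l1 regression as a linear program."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    prob = lewis_weights_l1(X)
    prob = prob / prob.sum()
    idx = rng.choice(n, size=m, replace=True, p=prob)
    scale = 1.0 / (m * prob[idx])                  # importance-sampling reweighting
    Xs, ys = X[idx], y[idx]
    # minimize sum_i scale_i * t_i  subject to  -t_i <= x_i^T beta - y_i <= t_i
    c = np.concatenate([np.zeros(d), scale])
    A_ub = np.block([[Xs, -np.eye(m)], [-Xs, -np.eye(m)]])
    b_ub = np.concatenate([ys, -ys])
    bounds = [(None, None)] * d + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]                               # approximate l1 minimizer beta_hat
```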
  2. We study the ubiquitous super-resolution problem, in which one aims at localizing positive point sources in an image, blurred by the point spread function of the imaging device. To recover the point sources, we propose to solve a convex feasibility program, which simply finds a non-negative Borel measure that agrees with the observations collected by the imaging device. In the absence of imaging noise, we show that solving this convex program uniquely retrieves the point sources, provided that the imaging device collects enough observations. This result holds true if the point spread function of the imaging device can be decomposed into horizontal and vertical components and if the translations of these components form a Chebyshev system, i.e., a system of continuous functions that loosely behave like algebraic polynomials. Building upon recent results for one-dimensional signals, we prove that this super-resolution algorithm is stable, in the generalized Wasserstein metric, to model mismatch (i.e., when the image is not sparse) and to additive imaging noise. In particular, the recovery error depends on the noise level and on how well the image can be approximated with well-separated point sources. As an example, we verify these claims for the important case of a Gaussian point spread function. The proofs rely on the construction of novel interpolating polynomials, which are the main technical contribution of this paper, and partially resolve the question raised in Schiebinger et al. (2017, Inf. Inference, 7, 1–30) about the extension of the standard machinery to higher dimensions.
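A heavily discretized Python stand-in for the convex feasibility program in item 2, assuming a one-dimensional Gaussian point spread function and using non-negative least squares as a proxy for exact agreement with the observations; the grid, PSF width, and source configuration are made up for the example.

```python
import numpy as np
from scipy.optimize import nnls

# Discretized stand-in: find a nonnegative vector mu on a fine grid whose
# Gaussian-blurred image matches the observations (the paper works with Borel
# measures and an exact feasibility formulation).
rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 400)                    # fine source grid
sensors = np.linspace(0.0, 1.0, 60)                  # sampling locations
sigma = 0.05                                         # assumed Gaussian PSF width
Phi = np.exp(-((sensors[:, None] - grid[None, :]) ** 2) / (2 * sigma ** 2))

x_true = np.array([0.32, 0.36, 0.71])                # example source locations
a_true = np.array([1.0, 0.6, 0.9])                   # positive amplitudes
obs = sum(a * np.exp(-((sensors - x) ** 2) / (2 * sigma ** 2))
          for a, x in zip(a_true, x_true))
obs = obs + 1e-3 * rng.standard_normal(len(sensors)) # small imaging noise

mu, _ = nnls(Phi, obs)                               # nonnegative agreement with the data
support = grid[mu > 0.05 * mu.max()]                 # recovered (clustered) support
print(support)
```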
  3. Lossy compression has been employed to reduce the unprecedented amount of data produced by today's large-scale scientific simulations and high-resolution instruments. To avoid loss of critical information, state-of-the-art scientific lossy compressors provide error controls on relatively simple metrics such as the absolute error bound. However, preserving these metrics does not translate to the preservation of topological features, such as critical points in vector fields. To address this problem, we investigate how to effectively preserve the sign of the determinant in error-controlled lossy compression, as it is an important quantity of interest used for the robust detection of many topological features. Our contribution is three-fold. (1) We develop a generic theory to derive the allowable perturbation of one row of a matrix while preserving the sign of its determinant. As a practical use case, we apply this theory to preserve critical points in vector fields, because critical point detection can be reduced to a point-in-simplex test that relies purely on the sign of determinants. (2) We optimize this algorithm with a speculative compression scheme to allow for high compression ratios and efficiently parallelize it in distributed environments. (3) We perform solid experiments with real-world datasets, demonstrating that our method achieves up to 440% improvements in compression ratios over state-of-the-art lossy compressors when all critical points need to be preserved. Using the parallelization strategies, our method delivers up to 1.25X and 4.38X speedups in data writing and reading, respectively, compared with the vanilla approach without compression.
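Item 3 reduces critical point detection to a point-in-simplex test that depends only on signs of determinants. The small Python illustration below shows that reduction for a piecewise-linearly interpolated 2D vector field; the cell's vector values are invented for the example.

```python
import numpy as np

def det_sign(p, q, r):
    """Sign of the 2x2 determinant det([q - p, r - p]): orientation of triangle (p, q, r)."""
    return np.sign((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

def simplex_contains_zero(v0, v1, v2):
    """Point-in-simplex test used for critical point detection: under linear
    interpolation, a 2D cell contains a critical point iff the origin lies in
    the triangle spanned by the vector values v0, v1, v2 at its corners.
    The test relies only on signs of determinants, which is exactly what the
    compressor in item 3 must preserve."""
    origin = np.zeros(2)
    s0 = det_sign(v0, v1, origin)
    s1 = det_sign(v1, v2, origin)
    s2 = det_sign(v2, v0, origin)
    # all nonzero and equal -> origin strictly inside; a zero sign means "on an edge"
    return s0 == s1 == s2 and s0 != 0

# Example: vector values at the three corners of a cell
print(simplex_contains_zero(np.array([1.0, 0.2]),
                            np.array([-0.8, 1.0]),
                            np.array([-0.3, -1.1])))   # True: cell has a critical point
```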
  4. This paper studies stable recovery of a collection of point sources from its M+1 noisy low-frequency Fourier coefficients. We focus on the super-resolution regime where the minimum separation of the point sources is below 1/M. We propose a separated clumps model, where point sources are clustered in far-apart sets, and prove an accurate lower bound on the smallest singular value of the Fourier matrix with nodes restricted to the source locations. This estimate gives rise to a theoretical analysis of the super-resolution limit of the MUSIC algorithm.
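For comparison with the ESPRIT sketch near the top of this page, here is a minimal Python version of the MUSIC imaging function analyzed in item 4, under the same toy Fourier-data convention; the Hankel dimensions, evaluation grid, and peak-picking rule are illustrative choices.

```python
import numpy as np

def music(y, num_sources, L=None, grid_size=4096):
    """MUSIC imaging from noisy data y_k = sum_j c_j exp(2*pi*1j*k*x_j), k = 0..M
    (same toy convention as the ESPRIT sketch above)."""
    M = len(y) - 1
    if L is None:
        L = M // 2                                   # steering vectors have length L + 1
    # Hankel matrix H[p, q] = y[p + q], size (L + 1) x (M - L + 1)
    H = np.array([[y[p + q] for q in range(M - L + 1)] for p in range(L + 1)])
    U = np.linalg.svd(H, full_matrices=False)[0]
    noise_space = U[:, num_sources:]                 # orthogonal complement of the signal space
    grid = np.arange(grid_size) / grid_size
    A = np.exp(2j * np.pi * np.outer(np.arange(L + 1), grid))   # steering vectors a(x)
    # Imaging function: large where a(x) is nearly orthogonal to the noise space
    J = np.linalg.norm(A, axis=0) / np.linalg.norm(noise_space.conj().T @ A, axis=0)
    # Crude peak picking on the periodic grid: local maxima, then the largest ones
    peaks = (J > np.roll(J, 1)) & (J >= np.roll(J, -1))
    locs, vals = grid[peaks], J[peaks]
    return np.sort(locs[np.argsort(vals)[-num_sources:]])

# e.g. music(y, 3) on the toy data generated in the ESPRIT sketch above
```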
  5. Motivated by estimation of quantum noise models, we study the problem of learning a Pauli channel, or more generally the Pauli error rates of an arbitrary channel. By employing a novel reduction to the "Population Recovery" problem, we give an extremely simple algorithm that learns the Pauli error rates of an n-qubit channel to precision ϵ in ℓ∞ using just O(1/ϵ²) log(n/ϵ) applications of the channel. This is optimal up to the logarithmic factors. Our algorithm uses only unentangled state preparation and measurements, and the post-measurement classical runtime is just an O(1/ϵ) factor larger than the measurement data size. It is also impervious to a limited model of measurement noise where heralded measurement failures occur independently with probability ≤ 1/4. We then consider the case where the noise channel is close to the identity, meaning that the no-error outcome occurs with probability 1 − η. In the regime of small η we extend our algorithm to achieve multiplicative precision 1 ± ϵ (i.e., additive precision ϵη) using just O(1/(ϵ²η)) log(n/ϵ) applications of the channel.
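A toy single-qubit simulation of the quantity item 5 estimates, namely Pauli error rates recovered from channel eigenvalues. This naive estimator is not the paper's population-recovery algorithm; the error rates, shot count, and measurement model below are assumptions made for the example.

```python
import numpy as np

# A Pauli channel applies error a in {I, X, Y, Z} with rate p_a.  Preparing an
# eigenstate of Pauli b and measuring b after the channel gives outcome +1 when
# the error commutes with b and -1 when it anticommutes, so the empirical mean
# estimates the channel eigenvalue lambda_b = sum_a p_a * chi[a, b].
chi = np.array([[1,  1,  1,  1],    # chi[a, b] = +1 if a and b commute, -1 otherwise
                [1,  1, -1, -1],    # order: I, X, Y, Z
                [1, -1,  1, -1],
                [1, -1, -1,  1]])

p_true = np.array([0.90, 0.04, 0.02, 0.04])   # assumed single-qubit error rates
rng = np.random.default_rng(2)
shots = 200_000

lam_hat = np.ones(4)                          # lambda_I = 1 always
for b in (1, 2, 3):                           # estimate lambda_X, lambda_Y, lambda_Z
    errors = rng.choice(4, size=shots, p=p_true)
    lam_hat[b] = chi[errors, b].mean()

p_hat = chi @ lam_hat / 4                     # invert: chi is symmetric and chi @ chi = 4 I
print(np.round(p_hat, 3))                     # close to p_true
```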