

Title: Super-resolution limit of the ESPRIT algorithm
The problem of imaging point objects can be formulated as estimation of an unknown atomic measure from its M+1 consecutive noisy Fourier coefficients. The standard resolution of this inverse problem is 1/M, and super-resolution refers to the capability of resolving atoms at a higher resolution. When any two atoms are less than 1/M apart, this recovery problem is highly challenging, and many existing algorithms either cannot deal with this situation or require restrictive assumptions on the sign of the measure. ESPRIT is an efficient method which does not depend on the sign of the measure. This paper provides an explicit error bound on the support matching distance of ESPRIT in terms of the minimum singular value of Vandermonde matrices. When the support consists of multiple well-separated clumps and the noise is sufficiently small, the support error of ESPRIT scales like SRF^(2λ−2) × Noise, where the Super-Resolution Factor (SRF) governs the difficulty of the problem and λ is the cardinality of the largest clump. Our error bound matches the minimax rate of a special model with one clump of closely spaced atoms up to a factor of M in the small noise regime, thereby establishing the near-optimality of ESPRIT. Our theory is validated by numerical experiments. Keywords: Super-resolution, subspace methods, ESPRIT, stability, uncertainty principle.
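For readers unfamiliar with the method, the following is a minimal NumPy sketch of the standard ESPRIT recipe the paper analyzes: stack the Fourier data into a Hankel matrix, take its top singular vectors as the signal subspace, and read the atom locations off the eigenvalues of the shift-invariance map. The sign convention y[k] ≈ Σ_j a_j exp(−2πik t_j), the Hankel height, and the toy example are assumptions of this sketch, not the paper's exact setup.

    import numpy as np

    def esprit(y, S):
        # y: array of M+1 noisy Fourier coefficients,
        #    y[k] ~ sum_j a_j * exp(-2j*pi*k*t_j) + noise  (assumed convention)
        # S: assumed number of atoms; returns S location estimates in [0, 1).
        M = len(y) - 1
        L = M // 2 + 1                                 # Hankel height, about M/2
        H = np.array([y[i:i + M + 2 - L] for i in range(L)])  # H[i, j] = y[i + j]
        U, _, _ = np.linalg.svd(H, full_matrices=False)
        Us = U[:, :S]                                  # signal subspace
        # Shift invariance: the S eigenvalues of Psi are about exp(-2i*pi*t_j).
        Psi = np.linalg.pinv(Us[:-1]) @ Us[1:]
        eigs = np.linalg.eigvals(Psi)
        return np.sort((-np.angle(eigs) / (2 * np.pi)) % 1.0)

    # Two atoms 0.01 apart -- below the standard resolution 1/M for M = 50.
    rng = np.random.default_rng(0)
    t_true, a, M = np.array([0.50, 0.51]), np.array([1.0, -1.0]), 50
    k = np.arange(M + 1)
    y = (a * np.exp(-2j * np.pi * np.outer(k, t_true))).sum(axis=1)
    y = y + 1e-6 * (rng.standard_normal(M + 1) + 1j * rng.standard_normal(M + 1))
    print(esprit(y, 2))                                # close to [0.50, 0.51]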
Award ID(s):
1818751 1934568
NSF-PAR ID:
10148672
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
IEEE Transactions on Information Theory
ISSN:
0018-9448
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    We study the ubiquitous super-resolution problem, in which one aims at localizing positive point sources in an image, blurred by the point spread function of the imaging device. To recover the point sources, we propose to solve a convex feasibility program, which simply finds a non-negative Borel measure that agrees with the observations collected by the imaging device. In the absence of imaging noise, we show that solving this convex program uniquely retrieves the point sources, provided that the imaging device collects enough observations. This result holds true if the point spread function of the imaging device can be decomposed into horizontal and vertical components and if the translations of these components form a Chebyshev system, i.e., a system of continuous functions that loosely behave like algebraic polynomials. Building upon the recent results for one-dimensional signals, we prove that this super-resolution algorithm is stable, in the generalized Wasserstein metric, to model mismatch (i.e., when the image is not sparse) and to additive imaging noise. In particular, the recovery error depends on the noise level and how well the image can be approximated with well-separated point sources. As an example, we verify these claims for the important case of a Gaussian point spread function. The proofs rely on the construction of novel interpolating polynomials—which are the main technical contribution of this paper—and partially resolve the question raised in Schiebinger et al. (2017, Inf. Inference, 7, 1–30) about the extension of the standard machinery to higher dimensions.
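    The snippet below is only a rough, discretized stand-in for the paper's convex feasibility program over continuous Borel measures: sources are restricted to a fine grid, the point spread function is assumed Gaussian, and SciPy's non-negative least squares is used to find a non-negative vector consistent with noiseless observations. All sizes and parameters are illustrative assumptions.

        import numpy as np
        from scipy.optimize import nnls

        sigma = 0.05                               # assumed Gaussian PSF width
        grid = np.linspace(0.0, 1.0, 401)          # candidate source locations
        sensors = np.linspace(0.0, 1.0, 60)        # sample points of the image
        A = np.exp(-(sensors[:, None] - grid[None, :]) ** 2 / (2 * sigma ** 2))

        x_true = np.zeros_like(grid)               # two well-separated sources
        x_true[[120, 280]] = [1.0, 0.7]            # at 0.3 and 0.7
        b = A @ x_true                             # noiseless observations

        # Any x >= 0 with A x = b is feasible; non-negative least squares
        # returns one such point when the data are noiseless.
        x_hat, _ = nnls(A, b)
        print("recovered support:", grid[x_hat > 1e-6])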
  2. We examine the nature of kpc-scale clumps seen in high-redshift galaxies using a suite of cosmological simulations of galaxy formation. We identify rest-frame UV clumps in mock HST images smoothed to 500 pc resolution, and compare them with the intrinsic 3D clumps of young stars identified in the simulations at 100 pc resolution. According to this comparison for the progenitors of Milky Way-sized galaxies probed by our simulations, we expect that the stellar masses of the observed clumps are overestimated by as much as an order of magnitude, and that the sizes of these clumps are also overestimated by a factor of several, due to a combination of spatial resolution and projection. The masses of young stars contributing most of the UV emission can also be overestimated by a factor of a few. We find that most clumps of young stars present in a simulation at one time dissolve on a timescale shorter than ∼150 Myr. Some clumps with dense cores can last longer but eventually disperse. Most of the clumps are not bound structures, with virial parameter α_vir > 1. We find similar results for clumps identified in mock maps of Hα emission measure. We examine the predictions for effective clump sizes from the linear theory of gravitational perturbations and conclude that they are inconsistent with the observed clumps being formed by global disc instabilities. Instead, the observed clumps represent random projections of multiple compact star-forming regions.
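    For context, the boundedness criterion mentioned above is the standard virial parameter α_vir = 5σ²R/(GM); the snippet below simply evaluates it for a hypothetical clump (the numbers are illustrative and not taken from the simulations).

        G = 4.301e-3  # gravitational constant in pc * (km/s)^2 / Msun

        def virial_parameter(sigma_kms, R_pc, M_msun):
            # alpha_vir = 5 * sigma^2 * R / (G * M); >> 1 suggests unbound.
            return 5.0 * sigma_kms ** 2 * R_pc / (G * M_msun)

        # Hypothetical clump: 10 km/s dispersion, 500 pc radius, 1e7 Msun.
        print(virial_parameter(10.0, 500.0, 1e7))   # ~5.8 > 1: unbound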
  3. We prove the endpoint case of a conjecture of Khot and Moshkovitz related to the unique games conjecture, less a small error. Let n ≥ 2. Suppose a subset Ω of n-dimensional Euclidean space satisfies −Ω = Ω^c and Ω + v = Ω^c (up to measure-zero sets) for every standard basis vector v. For any … and for any q ≥ 1, let … and let … . For any x ∈ Ω, let N(x) denote the exterior normal vector at x such that ‖N(x)‖_2 = 1. Let B = … . Our main result shows that B has the smallest Gaussian surface area among all such subsets Ω, less a small error: … In particular, … Standard arguments extend these results to a corresponding weak inequality for noise stability. Removing the factor 6 × 10^(-9) would prove the endpoint case of the Khot–Moshkovitz conjecture. Lastly, we prove a Euclidean analogue of the Khot–Moshkovitz conjecture. The full conjecture of Khot and Moshkovitz provides strong evidence for the truth of the unique games conjecture, a central conjecture in theoretical computer science that is closely related to the P versus NP problem. So, our results also provide evidence for the truth of the unique games conjecture. Nevertheless, this paper does not prove any case of the unique games conjecture.

     
  4. We study local symmetry breaking problems in the Congest model, focusing on ruling set problems, which generalize the fundamental Maximal Independent Set (MIS) problem. The time (round) complexity of MIS (and ruling sets) has attracted much attention in the Local model. Indeed, recent results (Barenboim et al., FOCS 2012; Ghaffari, SODA 2016) for the MIS problem have tried to break the long-standing O(log n)-round "barrier" achieved by Luby's algorithm, but these yield o(log n)-round complexity only when the maximum degree Delta is somewhat small relative to n. More importantly, these results apply only in the Local model. In fact, the best known time bound in the Congest model is still O(log n) (via Luby's algorithm) even for moderately small Delta (i.e., for Delta = Omega(log n) and Delta = o(n)). Furthermore, message complexity has been largely ignored in the context of local symmetry breaking. Luby's algorithm takes O(m) messages on m-edge graphs, and this is the best known bound with respect to messages. Our work is motivated by the following central question: can we break the Theta(log n) time complexity barrier and the Theta(m) message complexity barrier in the Congest model for MIS or closely related symmetry breaking problems? This paper presents progress towards this question for the distributed ruling set problem in the Congest model. A beta-ruling set is an independent set such that every node in the graph is at most beta hops from a node in the independent set (see the sketch after this item). We present the following results:
    - Time Complexity: We show that we can break the O(log n) "barrier" for 2- and 3-ruling sets. We compute 3-ruling sets in O(log n / log log n) rounds with high probability (whp). More generally, we show that 2-ruling sets can be computed in O(log Delta · (log n)^(1/2+epsilon) + log n / log log n) rounds for any epsilon > 0, which is o(log n) for a wide range of Delta values (e.g., Delta = 2^((log n)^(1/2-epsilon))). These are the first 2- and 3-ruling set algorithms to improve over the O(log n)-round complexity of Luby's algorithm in the Congest model.
    - Message Complexity: We show an Omega(n^2) lower bound on the message complexity of computing an MIS (i.e., a 1-ruling set), which holds also for randomized algorithms, and contrast it with a randomized algorithm for 2-ruling sets that, whp, uses only O(n log^2 n) messages and runs in O(Delta log n) rounds. This is the first message-efficient algorithm known for ruling sets, with message complexity nearly linear in n (optimal up to a polylogarithmic factor).
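    To make the definitions concrete, here is a plain-Python, purely sequential simulation of a Luby-style randomized MIS (an MIS is a 1-ruling set), together with a checker for the beta-hop domination property; this is a sketch of the textbook idea only, not the Congest-model algorithms or message-efficient techniques from the paper.

        import random

        def luby_mis(adj):
            # Sequential simulation of a round-based variant of Luby's
            # randomized MIS. adj: {node: set of neighbors}.
            mis = set()
            active = {v: set(nbrs) for v, nbrs in adj.items()}
            while active:
                r = {v: random.random() for v in active}   # random priorities
                # Local maxima join the independent set this round.
                winners = {v for v in active
                           if all(r[v] > r[u] for u in active[v])}
                mis |= winners
                # Winners and their neighbors leave the graph.
                dead = winners | {u for v in winners for u in active[v]}
                active = {v: nbrs - dead
                          for v, nbrs in active.items() if v not in dead}
            return mis

        def is_beta_ruling_set(adj, S, beta):
            # Checks only the domination half: every node is within beta
            # hops of some node in S (BFS for beta levels).
            frontier, seen = set(S), set(S)
            for _ in range(beta):
                frontier = {u for v in frontier for u in adj[v]} - seen
                seen |= frontier
            return seen == set(adj)

        # Path graph on 8 nodes.
        adj = {i: {j for j in (i - 1, i + 1) if 0 <= j < 8} for i in range(8)}
        S = luby_mis(adj)
        print(sorted(S), is_beta_ruling_set(adj, S, 1))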