
Title: Wavelet invariants for statistically robust multi-reference alignment
We propose a nonlinear, wavelet-based signal representation that is translation invariant and robust to both additive noise and random dilations. Motivated by the multi-reference alignment problem and generalizations thereof, we analyze the statistical properties of this representation given a large number of independent corruptions of a target signal. We prove that the nonlinear wavelet-based representation uniquely determines the power spectrum while admitting an unbiasing procedure that cannot be applied directly to the power spectrum itself. After unbiasing the representation to remove the effects of the additive noise and random dilations, we recover an approximation of the power spectrum by solving a convex optimization problem, thereby reducing the problem to phase retrieval. Extensive numerical experiments demonstrate the statistical robustness of this approximation procedure.
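As a toy illustration of the unbiasing idea — applied here to the plain power spectrum rather than to the wavelet-based representation analyzed in the paper — the additive-noise bias in the empirical power spectrum of randomly shifted, noisy copies is a known constant and can simply be subtracted. All signals and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target signal and experiment parameters (illustrative values).
N, M, sigma = 64, 20000, 1.0          # signal length, number of copies, noise std
t = np.arange(N)
x = np.sin(2 * np.pi * 3 * t / N) + 0.5 * np.cos(2 * np.pi * 7 * t / N)

# Generate M independent copies: random cyclic shift + additive Gaussian noise.
shifts = rng.integers(0, N, size=M)
Y = np.stack([np.roll(x, s) for s in shifts]) + sigma * rng.standard_normal((M, N))

# The empirical power spectrum is shift invariant but biased by the noise:
# E|FFT(y)_k|^2 = |FFT(x)_k|^2 + N * sigma^2, so subtract the known bias.
ps_biased = np.mean(np.abs(np.fft.fft(Y, axis=1)) ** 2, axis=0)
ps_unbiased = ps_biased - N * sigma ** 2
ps_true = np.abs(np.fft.fft(x)) ** 2

rel_err = np.linalg.norm(ps_unbiased - ps_true) / np.linalg.norm(ps_true)
```

Recovering the signal itself from the (unbiased) power spectrum is then exactly the phase retrieval step mentioned in the abstract.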
Award ID(s):
1912906 2131292 1845856
Journal Name:
Information and Inference: a journal of the IMA
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the statistical connection between the quantized representation of a high-dimensional signal X using a random spherical code and the observation of X under additive white Gaussian noise (AWGN). We show that, given X, the conditional Wasserstein distance between its bitrate-R quantized version and its observation under AWGN at signal-to-noise ratio 2^{2R} − 1 is sublinear in the problem dimension. We then use this fact to connect the mean squared error (MSE) attained by an estimator based on an AWGN-corrupted version of X to the MSE attained by the same estimator when fed with its bitrate-R quantized version.
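A minimal numerical sketch of random spherical coding — not the paper's construction or analysis, just the basic object: codewords drawn uniformly on the unit sphere, nearest-codeword quantization, and empirical distortion falling as the bitrate R grows. Dimension and rates are kept artificially small so the codebook is enumerable:

```python
import numpy as np

rng = np.random.default_rng(1)

def spherical_quantize(x, codebook):
    """Map unit-norm x to the nearest codeword (maximum inner product)."""
    return codebook[np.argmax(codebook @ x)]

d = 8                      # dimension (small so the codebook fits in memory)
trials = 300
rates = [0.5, 1.0]         # bits per dimension
avg_dist = []
for R in rates:
    n_codes = int(round(2 ** (R * d)))
    errs = []
    for _ in range(trials):
        # Codewords and source drawn uniformly on the unit sphere.
        C = rng.standard_normal((n_codes, d))
        C /= np.linalg.norm(C, axis=1, keepdims=True)
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)
        q = spherical_quantize(x, C)
        errs.append(np.sum((x - q) ** 2))
    avg_dist.append(np.mean(errs))
```

The paper's point is sharper than this toy: for large d, the quantized version is close (in conditional Wasserstein distance) to an AWGN observation at SNR 2^{2R} − 1, which lets AWGN estimation theory be transferred to the quantized setting.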
  2. The number of noisy images required for molecular reconstruction in single-particle cryo-electron microscopy (cryo-EM) is governed by the autocorrelations of the observed, randomly oriented, noisy projection images. In this work, we consider the effect of imposing sparsity priors on the molecule. We use techniques from signal processing, optimization, and applied algebraic geometry to obtain theoretical and computational contributions for this challenging nonlinear inverse problem with sparsity constraints. We prove that molecular structures modeled as sums of Gaussians are uniquely determined by the second-order autocorrelation of their projection images, implying that the sample complexity is proportional to the square of the variance of the noise. This theory improves upon the nonsparse case, where the third-order autocorrelation is required for uniformly oriented particle images and the sample complexity scales with the cube of the noise variance. Furthermore, we build a computational framework to reconstruct molecular structures which are sparse in the wavelet basis. This method combines the sparse representation for the molecule with projection-based techniques used for phase retrieval in X-ray crystallography.
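The second-order autocorrelation appearing here can be computed from the power spectrum via the Wiener-Khinchin theorem. A small sketch using a toy sum-of-Gaussians "projection image" (the image size, centers, and widths are invented for illustration and are not from the paper):

```python
import numpy as np

def gaussian_image(n, centers, width):
    """Toy 2-D 'projection image': a sum of isotropic Gaussians."""
    yy, xx = np.mgrid[0:n, 0:n]
    img = np.zeros((n, n))
    for cy, cx in centers:
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * width ** 2))
    return img

def autocorrelation(img):
    """Second-order (cyclic) autocorrelation via the Wiener-Khinchin theorem:
    the inverse FFT of the power spectrum."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.abs(F) ** 2))

img = gaussian_image(64, centers=[(20, 30), (40, 25), (32, 45)], width=3.0)
ac = autocorrelation(img)
```

By Cauchy-Schwarz the autocorrelation peaks at zero shift, and for a real image it is symmetric under negating the shift — two quick sanity checks on the computation.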
  3. Deep neural networks have provided state-of-the-art solutions for problems such as image denoising, which implicitly rely on a prior probability model of natural images. Two recent lines of work, Denoising Score Matching and Plug-and-Play, propose methodologies for drawing samples from this implicit prior and for using it to solve inverse problems, respectively. Here, we develop a parsimonious and robust generalization of these ideas. We rely on a classic statistical result showing that the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this to derive a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any deterministic linear inverse problem, with no additional training, thus extending the power of supervised learning for denoising to a much broader set of problems. The algorithm relies on minimal assumptions and exhibits robust convergence over a wide range of parameter choices. To demonstrate the generality of our method, we use it to obtain state-of-the-art levels of unsupervised performance for deblurring, super-resolution, and compressive sensing.
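The classic result referred to is Miyasawa's identity, E[x|y] = y + σ² ∇ log p_σ(y), which expresses the least-squares denoiser through the score of the noisy density. A sketch verifying it numerically for a 1-D Gaussian-mixture prior, where both sides can be computed in closed form (all mixture parameters below are illustrative):

```python
import numpy as np

# Prior: two-component Gaussian mixture in 1-D (illustrative parameters).
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 1.5])
stds = np.array([0.5, 1.0])
sigma = 0.8        # noise level of the AWGN observation y = x + sigma * z

def noisy_density(y):
    """p_sigma(y): the prior convolved with N(0, sigma^2) (still a mixture)."""
    s2 = stds ** 2 + sigma ** 2
    return np.sum(weights * np.exp(-(y - means) ** 2 / (2 * s2))
                  / np.sqrt(2 * np.pi * s2))

def posterior_mean(y):
    """Least-squares denoiser E[x | y], from the exact mixture posterior."""
    s2 = stds ** 2 + sigma ** 2
    resp = weights * np.exp(-(y - means) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    resp /= resp.sum()
    comp_means = (means * sigma ** 2 + y * stds ** 2) / (stds ** 2 + sigma ** 2)
    return np.sum(resp * comp_means)

def score(y, eps=1e-5):
    """d/dy log p_sigma(y) by central finite differences."""
    return (np.log(noisy_density(y + eps)) - np.log(noisy_density(y - eps))) / (2 * eps)

# Miyasawa's identity: E[x|y] = y + sigma^2 * d/dy log p_sigma(y).
ys = np.linspace(-4, 4, 9)
gap = max(abs(posterior_mean(y) - (y + sigma ** 2 * score(y))) for y in ys)
```

In the paper's setting the analytic denoiser is replaced by a blind-denoising CNN, so the residual f(y) − y gives (scaled) access to the score without ever modeling p_σ explicitly.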
  4. Topological data analysis encompasses a broad set of techniques that investigate the shape of data. One of the predominant tools in topological data analysis is persistent homology, which is used to create topological summaries of data called persistence diagrams. Persistent homology offers a novel method for signal analysis. Herein, we aid interpretation of the sublevel set persistence diagrams of signals by 1) showing the effect of frequency and instantaneous amplitude on the persistence diagrams for a family of deterministic signals, and 2) providing a general equation for the probability density of persistence diagrams of random signals via a pushforward measure. We also provide a topologically-motivated, efficiently computable statistical descriptor analogous to the power spectral density for signals based on a generalized Bayesian framework for persistence diagrams. This Bayesian descriptor is shown to be competitive with power spectral densities and continuous wavelet transforms at distinguishing signals with different dynamics in a classification problem with autoregressive signals.
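Zero-dimensional sublevel-set persistence of a 1-D signal can be computed with a union-find sweep: components are born at local minima and die, by the elder rule, when two components merge. A self-contained sketch (the function name and output convention are my own, not from the paper):

```python
def sublevel_persistence(values):
    """0-dimensional sublevel-set persistence of a 1-D signal.

    Returns (pairs, essential_birth): finite (birth, death) pairs created
    when two components merge (elder rule), plus the birth value of the
    component that never dies (the global minimum).
    """
    n = len(values)
    parent = list(range(n))
    birth = list(values)            # birth value of each component root
    active = [False] * n
    pairs = []

    def find(i):                    # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Sweep the filtration: add samples in increasing order of value.
    for i in sorted(range(n), key=lambda i: values[i]):
        active[i] = True
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # Elder rule: the younger component dies at this value.
                    elder, younger = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                    pairs.append((birth[younger], values[i]))
                    parent[younger] = elder
    # Drop zero-persistence (diagonal) pairs from non-minimum samples.
    return sorted(p for p in pairs if p[0] < p[1]), min(values)

pairs, essential = sublevel_persistence([1.0, 3.0, 0.0, 2.0])
```

For the signal [1, 3, 0, 2] the local minimum at value 1 merges into the global-minimum component at value 3, giving the single finite pair (1, 3), while the component born at 0 persists forever.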
  5. Classical statistical mechanics has long relied on assumptions such as the equipartition theorem to understand the behavior of complicated systems of many particles. The successes of this approach are well known, but there are also many well-known issues with classical theories. For some of these, the introduction of quantum mechanics is necessary, e.g., the ultraviolet catastrophe. More recently, however, the validity of assumptions such as the equipartition of energy in classical systems has been called into question. For instance, a detailed analysis of a simplified model for blackbody radiation was apparently able to deduce the Stefan–Boltzmann law using purely classical statistical mechanics. This novel approach involved a careful analysis of a “metastable” state which greatly delays the approach to equilibrium. In this paper, we perform a broad analysis of such a metastable state in the classical Fermi–Pasta–Ulam–Tsingou (FPUT) models. We treat both the α-FPUT and β-FPUT models, exploring both quantitative and qualitative behavior. After introducing the models, we validate our methodology by reproducing the well-known FPUT recurrences in both models and confirming earlier results on how the strength of the recurrences depends on a single system parameter. We establish that the metastable state in the FPUT models can be defined by using a single degree-of-freedom measure, the spectral entropy (η), and show that this measure can quantify the distance from equipartition. For the α-FPUT model, a comparison to the integrable Toda lattice allows us to define rather clearly the lifetime of the metastable state for the standard initial conditions. We next devise a method for measuring the lifetime of the metastable state t_m in the α-FPUT model that reduces the sensitivity to the exact initial conditions. Our procedure involves averaging over random initial phases in the plane of initial conditions, the P_1-Q_1 plane.
Applying this procedure gives a power-law scaling for t_m, with the important result that the power laws for different system sizes collapse to the same exponent as Eα² → 0. We examine the energy spectrum E(k) over time in the α-FPUT model and again compare the results to those of the Toda model. This analysis tentatively supports a mechanism for irreversible energy transfer suggested by Onorato et al.: four-wave and six-wave resonances, as described by “wave turbulence” theory. We next apply a similar approach to the β-FPUT model, exploring in particular the different behavior for the two signs of β. Finally, we describe a procedure for calculating t_m in the β-FPUT model, a very different task than for the α-FPUT model because the β-FPUT model is not a truncation of an integrable nonlinear model.
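A minimal α-FPUT sketch, assuming fixed boundary conditions, unit masses, and one common normalization of the spectral entropy (η = S / ln N, so equipartition gives η ≈ 1); the paper's precise definition, parameters, and initial conditions may differ:

```python
import numpy as np

def fput_alpha_accel(q, alpha):
    """Accelerations for the alpha-FPUT chain with fixed ends q_0 = q_{N+1} = 0."""
    qe = np.concatenate(([0.0], q, [0.0]))
    lin = qe[2:] - 2 * qe[1:-1] + qe[:-2]
    nonlin = alpha * ((qe[2:] - qe[1:-1]) ** 2 - (qe[1:-1] - qe[:-2]) ** 2)
    return lin + nonlin

def mode_energies(q, p, N):
    """Harmonic normal-mode energies E_k = (P_k^2 + omega_k^2 Q_k^2) / 2."""
    k = np.arange(1, N + 1)
    S = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(k, np.arange(1, N + 1)) * np.pi / (N + 1))
    Q, P = S @ q, S @ p
    omega = 2 * np.sin(k * np.pi / (2 * (N + 1)))
    return 0.5 * (P ** 2 + (omega * Q) ** 2)

def spectral_entropy(E):
    """Shannon entropy of the normalized mode-energy spectrum."""
    e = E / E.sum()
    e = e[e > 0]
    return -np.sum(e * np.log(e))

N, alpha, dt, steps = 16, 0.25, 0.02, 5000      # illustrative parameters
i = np.arange(1, N + 1)
q = np.sin(i * np.pi / (N + 1))                 # all energy initially in mode k = 1
p = np.zeros(N)

E0 = mode_energies(q, p, N).sum()
a = fput_alpha_accel(q, alpha)
for _ in range(steps):                          # leapfrog (velocity Verlet)
    p += 0.5 * dt * a
    q += dt * p
    a = fput_alpha_accel(q, alpha)
    p += 0.5 * dt * a

E = mode_energies(q, p, N)
eta = spectral_entropy(E) / np.log(N)           # equipartition would give eta ~ 1
```

At this weak nonlinearity the energy stays concentrated in the low modes over the integration window, so η remains far below 1 — the signature of the metastable state that delays equipartition.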