NSF-PAR ID:
 10250940
 Date Published:
 Journal Name:
Information and Inference: A Journal of the IMA
 ISSN:
2049-8764
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation

We consider the statistical connection between the quantized representation of a high-dimensional signal X using a random spherical code and the observation of X under additive white Gaussian noise (AWGN). We show that given X, the conditional Wasserstein distance between its bit-rate-R quantized version and its observation under AWGN of signal-to-noise ratio 2^{2R} − 1 is sublinear in the problem dimension. We then utilize this fact to connect the mean squared error (MSE) attained by an estimator based on an AWGN-corrupted version of X to the MSE attained by the same estimator when fed with its bit-rate-R quantized version.
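The MSE transfer described in the abstract can be illustrated with a toy simulation. This is a sketch only: the tiny dimension n = 8, rate R = 1, and the simple shrinkage estimator are assumptions made here for illustration, not the paper's construction. It quantizes a signal with a random spherical codebook at rate R and feeds both the quantized version and an AWGN observation at the matched SNR 2^{2R} − 1 to the same estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

n, R = 8, 1.0                  # toy dimension and bit rate (choices made here)
M = int(2 ** (n * R))          # codebook size 2^{nR}

# Random spherical codebook: M codewords uniform on the sphere of radius sqrt(n)
C = rng.standard_normal((M, n))
C *= np.sqrt(n) / np.linalg.norm(C, axis=1, keepdims=True)

def quantize(x):
    """Bit-rate-R representation: the codeword with the largest inner product."""
    return C[np.argmax(C @ x)]

def awgn(x, snr):
    """Observation of x under AWGN with signal-to-noise ratio snr."""
    sigma2 = np.dot(x, x) / (len(x) * snr)
    return x + np.sqrt(sigma2) * rng.standard_normal(len(x))

snr = 2 ** (2 * R) - 1          # the matched SNR, 2^{2R} - 1
x = rng.standard_normal(n)
x *= np.sqrt(n) / np.linalg.norm(x)   # place the signal on the same sphere

# Feed both observations to the same (Wiener-style shrinkage) estimator
shrink = snr / (1 + snr)
mse_q = np.mean((shrink * quantize(x) - x) ** 2)
mse_awgn = np.mean((shrink * awgn(x, snr) - x) ** 2)
```

At these toy sizes the two MSEs are only loosely comparable; the sublinearity statement in the abstract concerns high dimensions.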

The number of noisy images required for molecular reconstruction in single-particle cryo-electron microscopy (cryo-EM) is governed by the autocorrelations of the observed, randomly oriented, noisy projection images. In this work, we consider the effect of imposing sparsity priors on the molecule. We use techniques from signal processing, optimization, and applied algebraic geometry to obtain theoretical and computational contributions for this challenging nonlinear inverse problem with sparsity constraints. We prove that molecular structures modeled as sums of Gaussians are uniquely determined by the second-order autocorrelation of their projection images, implying that the sample complexity is proportional to the square of the variance of the noise. This theory improves upon the non-sparse case, where the third-order autocorrelation is required for uniformly oriented particle images and the sample complexity scales with the cube of the noise variance. Furthermore, we build a computational framework to reconstruct molecular structures which are sparse in the wavelet basis. This method combines the sparse representation for the molecule with projection-based techniques used for phase retrieval in X-ray crystallography.
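A toy illustration of why second-order statistics remain estimable from very noisy copies: the sketch below replaces projection images with a fixed vector and ignores rotations entirely, so it only demonstrates the bias-correction step behind autocorrelation analysis, not the paper's method. Averaging the outer products of noisy copies yields the clean second-order autocorrelation plus a known σ²I bias:

```python
import numpy as np

rng = np.random.default_rng(1)

d, sigma, N = 16, 2.0, 200_000
x = rng.standard_normal(d)          # stand-in for a clean image (no rotations here)

Y = x + sigma * rng.standard_normal((N, d))   # N noisy copies
S = Y.T @ Y / N                               # empirical second moment E[y y^T]

# E[y y^T] = x x^T + sigma^2 I, so subtracting the known noise bias
# recovers the second-order autocorrelation of the clean signal.
A_hat = S - sigma ** 2 * np.eye(d)
err = np.linalg.norm(A_hat - np.outer(x, x)) / np.linalg.norm(np.outer(x, x))
```

The fluctuations of each entry of S scale like σ²/√N, which is the root of the sample-complexity scaling with powers of the noise variance mentioned above.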

Deep neural networks have provided state-of-the-art solutions for problems such as image denoising, which implicitly rely on a prior probability model of natural images. Two recent lines of work – Denoising Score Matching and Plug-and-Play – propose methodologies for drawing samples from this implicit prior and using it to solve inverse problems, respectively. Here, we develop a parsimonious and robust generalization of these ideas. We rely on a classic statistical result that shows the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this to derive a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any deterministic linear inverse problem, with no additional training, thus extending the power of supervised learning for denoising to a much broader set of problems. The algorithm relies on minimal assumptions and exhibits robust convergence over a wide range of parameter choices. To demonstrate the generality of our method, we use it to obtain state-of-the-art levels of unsupervised performance for deblurring, super-resolution, and compressive sensing.
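The coarse-to-fine ascent can be sketched with an oracle denoiser standing in for a trained CNN. Everything here is an assumption for illustration: the prior is Gaussian with a made-up covariance `Lam`, for which the exact least-squares denoiser is linear and its residual equals σ² times the gradient of the log density (the classic result the abstract cites); the step size `h` and noise-control parameter `beta` are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

d = 4
Lam = np.diag([4.0, 2.0, 1.0, 0.5])   # hypothetical prior covariance, x ~ N(0, Lam)

def denoise(y, sigma2):
    """Exact least-squares denoiser for the Gaussian prior: the residual
    denoise(y, s2) - y equals s2 * grad log p(y)."""
    return Lam @ np.linalg.solve(Lam + sigma2 * np.eye(d), y)

def sample(h=0.2, beta=0.5, sigma0=1.0, tol=1e-3, max_iter=10_000):
    """Stochastic coarse-to-fine gradient ascent on log p, driven by the
    denoiser residual; the residual magnitude tracks the effective noise."""
    y = sigma0 * rng.standard_normal(d)
    sigma2 = sigma0 ** 2
    for _ in range(max_iter):
        r = denoise(y, sigma2) - y                 # proportional to grad log p(y)
        gamma2 = ((1 - beta * h) ** 2 - (1 - h) ** 2) * sigma2  # injected noise
        y = y + h * r + np.sqrt(max(gamma2, 0.0)) * rng.standard_normal(d)
        sigma2 = float(np.mean(r ** 2))            # updated noise-level estimate
        if sigma2 <= tol:
            break
    return y

z = sample()
```

With the oracle denoiser the residual shrinks very quickly; with a learned blind denoiser the same loop anneals the effective noise level gradually, which is the coarse-to-fine behavior described above.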

Topological data analysis encompasses a broad set of techniques that investigate the shape of data. One of the predominant tools in topological data analysis is persistent homology, which is used to create topological summaries of data called persistence diagrams. Persistent homology offers a novel method for signal analysis. Herein, we aid interpretation of the sublevel set persistence diagrams of signals by 1) showing the effect of frequency and instantaneous amplitude on the persistence diagrams for a family of deterministic signals, and 2) providing a general equation for the probability density of persistence diagrams of random signals via a pushforward measure. We also provide a topologically motivated, efficiently computable statistical descriptor analogous to the power spectral density for signals based on a generalized Bayesian framework for persistence diagrams. This Bayesian descriptor is shown to be competitive with power spectral densities and continuous wavelet transforms at distinguishing signals with different dynamics in a classification problem with autoregressive signals.
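Degree-zero sublevel-set persistence of a sampled signal can be computed with a short union-find sweep. This is a minimal sketch of the standard elder-rule pairing (local minima are births, merges at saddles are deaths), not the Bayesian descriptors developed in this work:

```python
def sublevel_persistence(signal):
    """0-dimensional sublevel-set persistence of a 1-D signal.

    Sweeps samples in increasing order of value; a component is born at each
    local minimum and dies when it merges into an older component (elder rule).
    The global minimum's class never dies; it is reported with death = max.
    """
    n = len(signal)
    order = sorted(range(n), key=lambda i: signal[i])
    parent = [None] * n          # None marks samples not yet swept in
    diagram = []

    def find(i):                 # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        for j in (i - 1, i + 1):              # merge with already-active neighbors
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if signal[ri] < signal[rj]:   # make ri the younger root
                    ri, rj = rj, ri
                if signal[ri] < signal[i]:    # skip zero-persistence pairs
                    diagram.append((signal[ri], signal[i]))
                parent[ri] = rj               # younger component dies into elder
    diagram.append((min(signal), max(signal)))  # essential class
    return diagram
```

For example, the signal [0.0, 2.0, 1.0, 3.0, 0.5] has local minima at 0.0, 1.0, and 0.5; the sweep pairs them as (1.0, 2.0), (0.5, 3.0), and the essential class (0.0, 3.0).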

Classical statistical mechanics has long relied on assumptions such as the equipartition theorem to understand the behavior of complicated systems of many particles. The successes of this approach are well known, but there are also many well-known issues with classical theories. For some of these, the introduction of quantum mechanics is necessary, e.g., the ultraviolet catastrophe. However, more recently, the validity of assumptions such as the equipartition of energy in classical systems has been called into question. For instance, a detailed analysis of a simplified model for blackbody radiation was apparently able to deduce the Stefan–Boltzmann law using purely classical statistical mechanics. This novel approach involved a careful analysis of a "metastable" state which greatly delays the approach to equilibrium. In this paper, we perform a broad analysis of such a metastable state in the classical Fermi–Pasta–Ulam–Tsingou (FPUT) models. We treat both the α-FPUT and β-FPUT models, exploring both quantitative and qualitative behavior. After introducing the models, we validate our methodology by reproducing the well-known FPUT recurrences in both models and confirming earlier results on how the strength of the recurrences depends on a single system parameter. We establish that the metastable state in the FPUT models can be defined by using a single degree-of-freedom measure, the spectral entropy (η), and show that this measure has the power to quantify the distance from equipartition. For the α-FPUT model, a comparison to the integrable Toda lattice allows us to define rather clearly the lifetime of the metastable state for the standard initial conditions. We next devise a method to measure the lifetime of the metastable state tm in the α-FPUT model that reduces the sensitivity to the exact initial conditions. Our procedure involves averaging over random initial phases in the plane of initial conditions, the P1–Q1 plane.
Applying this procedure gives us a power-law scaling for tm, with the important result that the power laws for different system sizes collapse to the same exponent as Eα² → 0. We examine the energy spectrum E(k) over time in the α-FPUT model and again compare the results to those of the Toda model. This analysis tentatively supports a mechanism for irreversible energy dissipation suggested by Onorato et al.: four-wave and six-wave resonances, as described by "wave turbulence" theory. We next apply a similar approach to the β-FPUT model. Here, we explore in particular the different behavior for the two signs of β. Finally, we describe a procedure for calculating tm in the β-FPUT model, a very different task than for the α-FPUT model, because the β-FPUT model is not a truncation of an integrable nonlinear model.
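The spectral-entropy measure can be sketched for a fixed-end chain. The normalization below (η = 0 at exact equipartition, η = 1 when a single mode holds all the energy) is one common convention; definitions vary across the FPUT literature, and the transform assumes the standard linear normal modes of the chain:

```python
import numpy as np

def mode_energies(q, p):
    """Harmonic normal-mode energies of a fixed-end chain of N oscillators."""
    N = len(q)
    j = np.arange(1, N + 1)
    # Orthogonal sine transform onto the normal modes of the linear chain
    Smat = np.sqrt(2 / (N + 1)) * np.sin(np.pi * j[:, None] * j / (N + 1))
    Q, P = Smat @ q, Smat @ p
    omega = 2 * np.sin(np.pi * j / (2 * (N + 1)))   # linear dispersion relation
    return 0.5 * (P ** 2 + omega ** 2 * Q ** 2)

def spectral_entropy(E, eps=1e-300):
    """Normalized spectral entropy of the mode-energy distribution:
    0 at equipartition, 1 with all energy in one mode (one common convention)."""
    e = E / E.sum()
    S = -np.sum(e * np.log(e + eps))
    S_max = np.log(len(E))
    return (S_max - S) / S_max
```

Exciting only the lowest mode (the standard FPUT initial condition) gives η close to 1; tracking η along a simulated trajectory then quantifies the approach to equipartition, which is how the metastable-state lifetime is detected.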