Covariance representations are developed for the uniform distributions on Euclidean spheres in terms of spherical gradients and Hessians. They are applied to derive a number of Sobolev-type inequalities and to recover and refine the concentration of measure phenomenon, including second-order concentration inequalities. A detailed account is also given in the case of the circle, with a short overview of Hoeffding's kernels and covariance identities in the class of periodic functions.
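For orientation, the simplest first-order inequality of the kind referred to above (a standard fact about the sphere, stated here for background rather than quoted from the paper) is the Poincaré inequality for the normalized uniform measure \sigma_{n-1} on the unit sphere S^{n-1} \subset \mathbb{R}^n, whose spectral gap equals n - 1:

\[
\operatorname{Var}_{\sigma_{n-1}}(f) \;\le\; \frac{1}{n-1} \int_{S^{n-1}} |\nabla_S f|^2 \, d\sigma_{n-1},
\]

where \nabla_S denotes the spherical gradient. Second-order concentration results of the type mentioned above refine such bounds using the spherical Hessian.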
Höffding’s Kernels and Periodic Covariance Representations
We start with a brief survey of Hoeffding's kernels, their properties, and related spectral decompositions, and discuss marginal distributions of Hoeffding measures. In the second part of this note, one-dimensional covariance representations are considered over compactly supported probability distributions in the class of smooth periodic functions. Hoeffding's kernels are used in the construction of mixing measures whose marginals are multiples of given probability distributions, leading to optimal kernels in periodic covariance representations.
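As background (a classical identity due to Hoeffding, stated here for orientation rather than quoted from the paper), if (X, Y) has joint distribution function H with marginals F and G, then

\[
\operatorname{Cov}(X, Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \bigl( H(x, y) - F(x)\, G(y) \bigr) \, dx \, dy,
\]

and the integrand H(x, y) - F(x)G(y) is the Hoeffding kernel whose periodic analogues are studied above.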
- Award ID(s): 2154001
- PAR ID: 10613498
- Publisher / Repository: Springer
- Date Published:
- Journal Name: Journal of Mathematical Sciences
- Volume: 286
- Issue: 2
- ISSN: 1072-3374
- Page Range / eLocation ID: 189 to 205
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Model confidence or uncertainty is critical in autonomous systems, as it ties directly to the safety and trustworthiness of the system. The quantification of uncertainty in the output decisions of deep neural networks (DNNs) is a challenging problem. The Bayesian framework enables the estimation of predictive uncertainty by introducing probability distributions over the (unknown) network weights; however, the propagation of these high-dimensional distributions through multiple layers and non-linear transformations is mathematically intractable. In this work, we propose an extended variational inference (eVI) framework for convolutional neural networks (CNNs) based on tensor Normal distributions (TNDs) defined over the convolutional kernels. Our proposed eVI framework propagates the first two moments (mean and covariance) of these TNDs through all layers of the CNN. We employ first-order Taylor series linearization to approximate the mean and covariance passing through the non-linear activations. The uncertainty in the output decision is given by the propagated covariance of the predictive distribution. Furthermore, we show, through extensive simulations on the MNIST and CIFAR-10 datasets, that the CNN becomes more robust to Gaussian noise and adversarial attacks.
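To make the linearization step concrete, here is a minimal numpy sketch (an illustration under stated assumptions, not the authors' eVI code; propagate_relu is a hypothetical helper name) of pushing a mean vector and covariance matrix through a ReLU activation by first-order Taylor expansion:

    import numpy as np

    def propagate_relu(mean, cov):
        # Jacobian of ReLU at the mean: a diagonal 0/1 mask
        j = (mean > 0).astype(float)
        mean_out = np.maximum(mean, 0.0)
        # First-order approximation: cov_out = J cov J^T, with J diagonal
        cov_out = cov * np.outer(j, j)
        return mean_out, cov_out

    mean = np.array([0.5, -1.0, 2.0])
    cov = 0.1 * np.eye(3)
    m_out, c_out = propagate_relu(mean, cov)

The same pattern applies to any elementwise non-linearity: replace the 0/1 mask by the derivative of the activation evaluated at the mean.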
- Measuring conditional dependence is one of the important tasks in statistical inference and is fundamental in causal discovery, feature selection, dimensionality reduction, Bayesian network learning, and others. In this work, we explore the connection between conditional dependence measures induced by distances on a metric space and reproducing kernels associated with a reproducing kernel Hilbert space (RKHS). For certain distance and kernel pairs, we show the distance-based conditional dependence measures to be equivalent to kernel-based measures. On the other hand, we also show that some popular kernel conditional dependence measures based on the Hilbert-Schmidt norm of a certain cross-conditional covariance operator do not have a simple distance representation, except in some limiting cases.
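The Hilbert-Schmidt machinery referred to above builds on the unconditional Hilbert-Schmidt independence criterion (HSIC). As background, here is a minimal numpy sketch of the standard biased empirical HSIC estimator, (1/n^2) tr(K H L H), with Gaussian kernels (illustrative only; rbf_gram and hsic are hypothetical helper names, and this is not code from the paper):

    import numpy as np

    def rbf_gram(x, sigma=1.0):
        # Gaussian-kernel Gram matrix for a 1-D sample x
        d2 = (x[:, None] - x[None, :]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def hsic(x, y, sigma=1.0):
        n = len(x)
        h = np.eye(n) - np.ones((n, n)) / n   # centering matrix
        k = rbf_gram(x, sigma)
        l = rbf_gram(y, sigma)
        return np.trace(k @ h @ l @ h) / n ** 2

In the population limit, with characteristic kernels, HSIC vanishes exactly when the two variables are independent; the conditional measures discussed above replace the cross-covariance operator by its cross-conditional analogue.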
- Linearized Wasserstein Barycenters: Synthesis, Analysis, Representational Capacity, and Applications. We propose the linear barycentric coding model (LBCM) that utilizes the linear optimal transport (LOT) metric for analysis and synthesis of probability measures. We provide a closed-form solution to the variational problem characterizing the probability measures in the LBCM and establish equivalence of the LBCM to the set of Wasserstein-2 barycenters in the special case of compatible measures. Computational methods for synthesizing and analyzing measures in the LBCM are developed with finite-sample guarantees. One of our main theoretical contributions is to identify an LBCM, expressed in terms of a simple family, which is sufficient to express all probability measures on the interval [0,1]. We show that a natural analogous construction of an LBCM in ℝ² fails, and we leave it as an open problem to identify the proper extension in more than one dimension. We conclude by demonstrating the utility of the LBCM for covariance estimation and data imputation.
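On the real line, Wasserstein-2 barycenters have a closed form that underlies linear optimal transport there: the barycenter's quantile function is the weighted average of the input quantile functions. A minimal numpy sketch of this standard fact (w2_barycenter_1d is a hypothetical helper name, not code from the paper):

    import numpy as np

    def w2_barycenter_1d(samples, weights, n_grid=99):
        # Evaluate each empirical quantile function on a common grid
        grid = np.linspace(0.01, 0.99, n_grid)
        quantiles = [np.quantile(s, grid) for s in samples]
        # The barycenter's quantile function is their weighted average
        return sum(w * q for w, q in zip(weights, quantiles))

    a = np.random.normal(0.0, 1.0, size=500)
    b = np.random.normal(4.0, 2.0, size=500)
    bary_q = w2_barycenter_1d([a, b], [0.5, 0.5])

The returned array tabulates the barycenter's quantile function; the failure of such simple formulas in ℝ² is precisely what the open problem above concerns.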
- It is shown that the polynuclear growth model is a completely integrable Markov process in the sense that its transition probabilities are given by Fredholm determinants of kernels produced by a scattering transform based on the invariant measures modulo the absolute height, continuous-time simple random walks. From the linear evolution of the kernels, it is shown that the n-point distributions are determinants of n×n matrices evolving according to the two-dimensional non-Abelian Toda lattice.
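Fredholm determinants like the ones above are routinely evaluated numerically by Nystrom discretization: det(I - K) on L^2[a, b] is approximated by an ordinary n×n determinant at Gauss-Legendre quadrature nodes (Bornemann's approach). A minimal numpy sketch (fredholm_det and the sample kernel are illustrative assumptions, unrelated to the paper's kernels):

    import numpy as np

    def fredholm_det(kernel, a, b, n=80):
        # Gauss-Legendre nodes/weights on [-1, 1], mapped to [a, b]
        x, w = np.polynomial.legendre.leggauss(n)
        x = 0.5 * (b - a) * x + 0.5 * (b + a)
        w = 0.5 * (b - a) * w
        sw = np.sqrt(w)
        # Symmetrized discretization: det(I - W^{1/2} K W^{1/2})
        m = sw[:, None] * kernel(x[:, None], x[None, :]) * sw[None, :]
        return np.linalg.det(np.eye(n) - m)

    # Rank-one test kernel K(x, y) = exp(-(x + y)) on [0, 1]:
    # exact value is 1 - (1 - exp(-2)) / 2
    print(fredholm_det(lambda x, y: np.exp(-(x + y)), 0.0, 1.0))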
