This content will become publicly available on September 1, 2026

Title: Quenched large deviation principles for random projections of ℓ_p^n balls
Let (k_n)_{n∈ℕ} be a sequence of positive integers growing to infinity at a sublinear rate: k_n → ∞ and k_n/n → 0 as n → ∞. Given a sequence of n-dimensional random vectors (Y^{(n)})_{n∈ℕ} belonging to a certain class, which includes uniform distributions on suitably scaled ℓ_p^n-balls or ℓ_p^n-spheres, p ≥ 2, and product distributions with sub-Gaussian marginals, we study the large deviations behavior of the corresponding sequence of k_n-dimensional orthogonal projections. For almost every sequence of projection matrices, we establish a large deviation principle (LDP) for the corresponding sequence of projections, with a fairly explicit rate function that does not depend on the sequence of projection matrices. As corollaries, we also obtain quenched LDPs for sequences of ℓ_2-norms and ℓ_∞-norms of the coordinates of the projections. Past work on LDPs for projections with growing dimension has mainly focused on the annealed setting, where one also averages over the random projection matrix, chosen from the Haar measure, in which case the coordinates of the projection are exchangeable. The quenched setting lacks such symmetry properties and gives rise to significant new challenges in the setting of growing projection dimension. Along the way, we establish new Gaussian approximation results on the Stiefel manifold that may be of independent interest. Such LDPs are of relevance in asymptotic convex geometry, statistical physics and high-dimensional statistics.
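For reference, a quenched LDP of the kind described in the abstract takes the standard Varadhan form. Writing W^{(n)} for the suitably normalized k_n-dimensional projection and I for the rate function (the speed n and the normalization of W^{(n)} are illustrative assumptions here; the paper's precise statement may differ), the claim is that for almost every realization of the projection matrices and every Borel set A:

```latex
% Standard form of a large deviation principle with speed n and rate function I:
\[
  -\inf_{x \in A^{\circ}} I(x)
  \;\le\; \liminf_{n \to \infty} \frac{1}{n} \log \mathbb{P}\bigl(W^{(n)} \in A\bigr)
  \;\le\; \limsup_{n \to \infty} \frac{1}{n} \log \mathbb{P}\bigl(W^{(n)} \in A\bigr)
  \;\le\; -\inf_{x \in \bar{A}} I(x),
\]
% where A° and Ā denote the interior and closure of A.
```

The "quenched" qualifier means the bounds hold for almost every fixed sequence of projection matrices, rather than on average over them (the annealed setting).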
Award ID(s):
2246838
PAR ID:
10627411
Author(s) / Creator(s):
; ;
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
Journal of Functional Analysis
Volume:
289
Issue:
6
ISSN:
0022-1236
Page Range / eLocation ID:
110937
Subject(s) / Keyword(s):
Large deviations; Random matrices; Convex geometry
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Accurate estimation of tail probabilities of projections of high-dimensional probability measures is of relevance in high-dimensional statistics and asymptotic geometric analysis. Whereas large deviation principles identify the asymptotic exponential decay rate of probabilities, sharp large deviation estimates also provide the "prefactor" in front of the exponentially decaying term. For fixed p ∈ (1, ∞), consider independent sequences (X^{(n,p)})_{n∈ℕ} and (Θ_n)_{n∈ℕ} of random vectors, with Θ_n distributed according to the normalized cone measure on the unit ℓ_2^n sphere and X^{(n,p)} distributed according to the normalized cone measure on the unit ℓ_p^n sphere. For almost every realization (θ_n)_{n∈ℕ} of (Θ_n)_{n∈ℕ}, (quenched) sharp large deviation estimates are established for suitably normalized (scalar) projections of X^{(n,p)} onto θ_n that are asymptotically exact as the dimension n tends to infinity. Furthermore, the case when X^{(n,p)} is instead distributed according to the uniform (or normalized volume) measure on the unit ℓ_p^n ball is also considered. In both cases, in contrast to the (quenched) large deviation rate function, the prefactor exhibits a dependence on the projection directions (θ_n)_{n∈ℕ} that encodes additional geometric information, enabling one to distinguish between projections of balls and spheres. Moreover, comparison with numerical estimates obtained by direct computation and importance sampling shows that the obtained analytical expressions for tail probabilities provide good approximations even for moderate values of n. The results on the one hand provide more accurate quantitative estimates of tail probabilities of random projections of ℓ_p^n spheres than logarithmic asymptotics, and on the other hand generalize classical sharp large deviation estimates in the spirit of Bahadur and Ranga Rao to a geometric setting. The proofs combine Fourier analytic and probabilistic techniques.
Along the way, several results of independent interest are obtained, including a simpler representation for the quenched large deviation rate function that shows that it is strictly convex, a central limit theorem for random projections under a certain family of tilted measures, and multidimensional generalized Laplace asymptotics.
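The classical Bahadur–Ranga Rao estimate mentioned above illustrates the general shape of a sharp large deviation estimate: for the empirical mean X̄_n of i.i.d. non-lattice random variables and a threshold a above the mean (under standard Cramér-type conditions), the prefactor in front of the exponential rate is identified explicitly:

```latex
% Bahadur--Ranga Rao: sharp large deviations for the empirical mean of i.i.d.
% random variables, refining the rate I(a) = sup_t (ta - log E[e^{tX}]):
\[
  \mathbb{P}\bigl(\bar{X}_n \ge a\bigr)
  \;=\; \frac{1}{\tau_a \sigma_a \sqrt{2\pi n}}\, e^{-n I(a)} \bigl(1 + o(1)\bigr),
  \qquad n \to \infty,
\]
% where \tau_a attains the supremum defining I(a) and \sigma_a^2 is the
% variance under the corresponding exponentially tilted measure.
```

In the quenched geometric setting of the paper above, the analogous prefactor additionally depends on the projection directions (θ_n), which is what distinguishes projections of balls from projections of spheres.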
  2.
    The study of high-dimensional distributions is of interest in probability theory, statistics, and asymptotic convex geometry, where the object of interest is the uniform distribution on a convex set in high dimensions. The ℓ_p-spaces and norms are of particular interest in this setting. In this paper we establish a limit theorem for distributions on ℓ_p-spheres, conditioned on a rare event, in a high-dimensional geometric setting. As part of our proof, we establish a certain large deviation principle that is also relevant to the study of the tail behavior of random projections of ℓ_p-balls in a high-dimensional Euclidean space.
  3. Gaussian processes (GPs) provide flexible distributions over functions, with inductive biases controlled by a kernel. However, in many applications Gaussian processes can struggle with even moderate input dimensionality. Learning a low-dimensional projection can help alleviate this curse of dimensionality, but introduces many trainable hyperparameters, which can be cumbersome, especially in the small-data regime. We use additive sums of kernels for GP regression, where each kernel operates on a different random projection of its inputs. Surprisingly, we find that as the number of random projections increases, the predictive performance of this approach quickly converges to the performance of a kernel operating on the original full-dimensional inputs, over a wide range of data sets, even if we are projecting into a single dimension. As a consequence, many problems can remarkably be reduced to one-dimensional input spaces, without learning a transformation. We prove this convergence and its rate, and additionally propose a deterministic approach that converges more quickly than purely random projections. Moreover, we demonstrate our approach can achieve faster inference and improved predictive accuracy for high-dimensional inputs compared to kernels in the original input space.
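A minimal sketch of the additive random-projection kernel construction described above, assuming an RBF base kernel and unit-norm Gaussian projection directions (both illustrative choices; the paper's exact construction and hyperparameters may differ). The averaged kernel remains a valid (symmetric, positive semi-definite) covariance function because it is an average of valid kernels:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential (RBF) kernel between rows of a and rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def additive_projection_kernel(X, Z, projections):
    # Average of RBF kernels, each acting on a different 1-D random
    # projection of the inputs.
    K = np.zeros((X.shape[0], Z.shape[0]))
    for w in projections:
        K += rbf(X @ w[:, None], Z @ w[:, None])
    return K / len(projections)

rng = np.random.default_rng(0)
n, d, J = 50, 20, 100
X = rng.standard_normal((n, d))
W = rng.standard_normal((J, d))                     # J random directions
W /= np.linalg.norm(W, axis=1, keepdims=True)       # unit-norm projections
K = additive_projection_kernel(X, X, W)             # n x n covariance matrix
```

A GP regressor would then use `K` exactly as it would any other kernel matrix (e.g. solving `(K + noise * I) alpha = y` for the posterior mean).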
  4. We establish sharp tail asymptotics for componentwise extreme values of bivariate Gaussian random vectors with arbitrary correlation between the components. We consider two scaling regimes for the tail event, in which we demonstrate the existence of a restricted large deviation principle and identify the unique rate function associated with these asymptotics. Our results identify when the maxima of both coordinates are typically attained by two different indices versus the same index, and how this depends on the correlation between the coordinates of the bivariate Gaussian random vectors. Our results complement a growing body of work on the extremes of Gaussian processes. The results are also relevant for steady-state performance and simulation analysis of networks of infinite server queues.
  5.
    Random projections or sketching are widely used in many algorithmic and learning contexts. Here we study the performance of iterative Hessian sketch for least-squares problems. By leveraging and extending recent results from random matrix theory on the limiting spectrum of matrices randomly projected with the subsampled randomized Hadamard transform, and truncated Haar matrices, we can study and compare the resulting algorithms to a level of precision that has not been possible before. Our technical contributions include a novel formula for the second moment of the inverse of projected matrices. We also find simple closed-form expressions for asymptotically optimal step sizes and convergence rates. These show that the convergence rates for Haar and randomized Hadamard matrices are identical and asymptotically improve upon Gaussian random projections. These techniques may be applied to other algorithms that employ randomized dimension reduction.
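A minimal sketch of the iterative Hessian sketch iteration for least squares, using a plain Gaussian sketching matrix and unit step size purely for illustration (the paper above analyzes subsampled randomized Hadamard and truncated Haar projections and derives the asymptotically optimal step sizes, which this toy version does not use):

```python
import numpy as np

def iterative_hessian_sketch(A, b, m, iters=20, seed=0):
    """Approximately solve min_x ||Ax - b||^2.

    Each iteration draws a fresh m x n sketch S, solves the sketched
    Newton system (A^T S^T S A) u = A^T (b - A x), and updates x += u.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        S = rng.standard_normal((m, n)) / np.sqrt(m)  # fresh Gaussian sketch
        SA = S @ A                                    # m x d sketched matrix
        g = A.T @ (b - A @ x)                         # negative gradient of 0.5*||Ax - b||^2
        x = x + np.linalg.solve(SA.T @ SA, g)         # sketched Hessian solve
    return x

# Tiny demonstration against the exact least-squares solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 10))
b = A @ rng.standard_normal(10) + 0.01 * rng.standard_normal(2000)
x_ihs = iterative_hessian_sketch(A, b, m=300)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With a sketch size m comfortably larger than the column dimension d, the iteration contracts toward the exact least-squares solution; the convergence-rate comparisons across sketch types are precisely what the paper quantifies.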