Title: Rates of bootstrap approximation for eigenvalues in high-dimensional PCA
In the context of principal components analysis (PCA), the bootstrap is commonly applied to solve a variety of inference problems, such as constructing confidence intervals for the eigenvalues of the population covariance matrix Σ. However, when the data are high-dimensional, there are relatively few theoretical guarantees that quantify the performance of the bootstrap. Our aim in this paper is to analyze how well the bootstrap can approximate the joint distribution of the leading eigenvalues of the sample covariance matrix Σ̂, and we establish non-asymptotic rates of approximation with respect to the multivariate Kolmogorov metric. Under certain assumptions, we show that the bootstrap can achieve a dimension-free rate of r(Σ)/√n up to logarithmic factors, where r(Σ) is the effective rank of Σ and n is the sample size. From a methodological standpoint, we show that applying a transformation to the eigenvalues of Σ̂ before bootstrapping is an important consideration in high-dimensional settings.
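The abstract's methodological point — transform the eigenvalues before bootstrapping — can be illustrated with a minimal sketch. This is our illustration, not the paper's exact procedure; the data-generating model, the choice of a log transformation, and all parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (not the paper's exact procedure): a percentile
# bootstrap interval for the leading eigenvalue of the sample covariance
# matrix, resampling on the log scale before mapping back.
n, p = 200, 50
X = rng.standard_normal((n, p)) * np.sqrt(np.linspace(2.0, 0.5, p))

def leading_eigenvalue(data):
    # Largest eigenvalue of the (centered) sample covariance matrix.
    return np.linalg.eigvalsh(np.cov(data, rowvar=False))[-1]

lam_hat = leading_eigenvalue(X)
B = 500
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)      # resample rows with replacement
    boot[b] = np.log(leading_eigenvalue(X[idx]))

# Percentile interval on the log scale, mapped back with exp().
lo, hi = np.exp(np.quantile(boot, [0.025, 0.975]))
```

Working on the log scale keeps the interval positive and tends to stabilize the skewness of the eigenvalue distribution, which is one motivation for transforming before bootstrapping.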
Award ID(s): 1915786
PAR ID: 10469104
Author(s) / Creator(s): ;
Publisher / Repository: Academia Sinica
Date Published:
Journal Name: Statistica Sinica
ISSN: 1996-8507
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract In this paper, we study the largest eigenvalues of sample covariance matrices with elliptically distributed data. We consider the sample covariance matrix $$Q=YY^{*},$$ where the data matrix $$Y \in \mathbb{R}^{p \times n}$$ contains i.i.d. $$p$$-dimensional observations $$\textbf{y}_{i}=\xi _{i}T\textbf{u}_{i},\;i=1,\dots ,n.$$ Here $$\textbf{u}_{i}$$ is distributed on the unit sphere, $$\xi _{i} \sim \xi $$ is some random variable that is independent of $$\textbf{u}_{i}$$ and $$T^{*}T=\varSigma $$ is some deterministic positive definite matrix. Under some mild regularity assumptions on $$\varSigma ,$$ assuming $$\xi ^{2}$$ has bounded support and certain decay behaviour near its edge so that the limiting spectral distribution of $$Q$$ has a square root decay behaviour near the spectral edge, we prove that the Tracy–Widom law holds for the largest eigenvalues of $$Q$$ when $$p$$ and $$n$$ are comparably large. Based on our results, we further construct some useful statistics to detect the signals when they are corrupted by high dimensional elliptically distributed noise. 
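The elliptical data model in this abstract is easy to simulate. The sketch below generates observations of the form y_i = ξ_i T u_i, with u_i uniform on the unit sphere and ξ_i bounded and independent of u_i; the particular Σ and the law of ξ are illustrative assumptions, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the elliptical model y_i = xi_i * T * u_i: u_i uniform on the
# unit sphere in R^p, xi_i bounded and independent of u_i, T^T T = Sigma.
# Sigma and the distribution of xi below are illustrative assumptions.
p, n = 50, 200
Sigma = np.diag(np.linspace(0.5, 2.0, p))
T = np.sqrt(Sigma)                     # T^T T = Sigma for diagonal Sigma

U = rng.standard_normal((p, n))
U /= np.linalg.norm(U, axis=0)         # columns uniform on the unit sphere
xi = rng.uniform(0.5, 1.5, n)          # bounded radial variable
Y = (T @ U) * xi                       # p x n data matrix of y_i = xi_i T u_i

Q = Y @ Y.T                            # sample covariance form Q = Y Y^*
lam_max = np.linalg.eigvalsh(Q)[-1]    # largest eigenvalue of Q
```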
  2. Abstract Covariance matrices are fundamental to the analysis and forecast of economic, physical and biological systems. Although the eigenvalues $$\{\lambda _i\}$$ and eigenvectors $$\{\boldsymbol{u}_i\}$$ of a covariance matrix are central to such endeavours, in practice one must inevitably approximate the covariance matrix based on data with finite sample size $$n$$ to obtain empirical eigenvalues $$\{\tilde{\lambda }_i\}$$ and eigenvectors $$\{\tilde{\boldsymbol{u}}_i\}$$, and therefore understanding the error so introduced is of central importance. We analyse eigenvector error $$\|\boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2$$ while leveraging the assumption that the true covariance matrix having size $$p$$ is drawn from a matrix ensemble with known spectral properties—particularly, we assume the distribution of population eigenvalues weakly converges as $$p\to \infty $$ to a spectral density $$\rho (\lambda )$$ and that the spacing between population eigenvalues is similar to that for the Gaussian orthogonal ensemble. Our approach complements previous analyses of eigenvector error that require the full set of eigenvalues to be known, which can be computationally infeasible when $$p$$ is large. To provide a scalable approach for uncertainty quantification of eigenvector error, we consider a fixed eigenvalue $$\lambda $$ and approximate the distribution of the expected square error $$r= \mathbb{E}\left [\| \boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2\right ]$$ across the matrix ensemble for all $$\boldsymbol{u}_i$$ associated with $$\lambda _i=\lambda $$. We find, for example, that for sufficiently large matrix size $$p$$ and sample size $$n>p$$, the probability density of $$r$$ scales as $$1/nr^{2}$$. This power-law scaling implies that the eigenvector error is extremely heterogeneous—even if $$r$$ is very small for most eigenvectors, it can be large for others with non-negligible probability. 
We support this and further results with numerical experiments. 
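A toy version of such a numerical experiment is sketched below. This is our illustration, not the paper's ensemble setup: it draws a fixed covariance, estimates it from n samples, and records the squared eigenvector errors ∥u_i − ũ_i∥² after fixing the sign ambiguity. All parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy experiment (our illustration, not the paper's ensemble setup):
# compare population eigenvectors of a known covariance with empirical
# eigenvectors estimated from n samples.
p, n = 20, 1000
Q_rot, _ = np.linalg.qr(rng.standard_normal((p, p)))
Sigma = Q_rot @ np.diag(np.linspace(1.0, 5.0, p)) @ Q_rot.T

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)            # empirical covariance

_, U = np.linalg.eigh(Sigma)           # population eigenvectors (columns)
_, U_tilde = np.linalg.eigh(S)         # empirical eigenvectors (columns)

# Eigenvectors are defined only up to sign; align before differencing.
signs = np.sign(np.sum(U * U_tilde, axis=0))
err = np.sum((U - U_tilde * signs) ** 2, axis=0)   # ||u_i - u_tilde_i||^2
```

After sign alignment each squared error equals 2(1 − |⟨u_i, ũ_i⟩|), so it lies in [0, 2]; the heterogeneity described in the abstract shows up as a few coordinates of `err` being much larger than the rest.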
  3.
    The spiked covariance model has gained increasing popularity in high-dimensional data analysis. A fundamental problem is determination of the number of spiked eigenvalues, K. For estimation of K, most attention has focused on the use of the top eigenvalues of the sample covariance matrix, and there has been little investigation into proper ways of using bulk eigenvalues to estimate K. We propose a principled approach to incorporating bulk eigenvalues in the estimation of K. Our method imposes a working model on the residual covariance matrix, which is assumed to be a diagonal matrix whose entries are drawn from a gamma distribution. Under this model, the bulk eigenvalues are asymptotically close to the quantiles of a fixed parametric distribution. This motivates us to propose a two-step method: the first step uses bulk eigenvalues to estimate parameters of this distribution, and the second step leverages these parameters to assist the estimation of K. The resulting estimator K̂ aggregates information from a large number of bulk eigenvalues. We show the consistency of K̂ under a standard spiked covariance model. We also propose a confidence interval estimate for K. Our extensive simulation studies show that the proposed method is robust and outperforms the existing methods in a range of scenarios. We apply the proposed method to the analysis of a lung cancer microarray dataset and the 1000 Genomes dataset. 
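For context, the sketch below simulates a spiked covariance model and estimates K with a naive eigenvalue-ratio baseline. This is NOT the paper's two-step bulk-eigenvalue method — it is the kind of top-eigenvalue heuristic the paper improves upon; the spike strengths and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Naive baseline (NOT the paper's two-step method): simulate a spiked
# covariance model and pick K at the largest ratio gap among the leading
# sample eigenvalues. All parameters below are assumptions.
p, n, K_true = 100, 500, 3
Sigma = np.eye(p)
Sigma[0, 0] += 30.0                     # three well-separated spikes
Sigma[1, 1] += 20.0
Sigma[2, 2] += 10.0

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
eig = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]

m = 10                                  # scan only the top eigenvalues
ratios = eig[:m] / eig[1:m + 1]         # consecutive eigenvalue ratios
K_hat = int(np.argmax(ratios)) + 1      # largest gap marks the cutoff
```

With spikes this strong, the ratio between the K-th eigenvalue and the top of the bulk dominates, so the gap estimator recovers K; the paper's contribution is to make such estimation reliable by also modeling the bulk, rather than relying on the top eigenvalues alone.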
  4. We present a fast, differentially private algorithm for high-dimensional covariance-aware mean estimation with nearly optimal sample complexity. Only exponential-time estimators were previously known to achieve this guarantee. Given n samples from a (sub-)Gaussian distribution with unknown mean μ and covariance Σ, our (ε,δ)-differentially private estimator produces μ̃ such that ∥μ−μ̃∥_Σ ≤ α as long as n ≳ d/α² + d·√(log(1/δ))/(αε) + d·log(1/δ)/ε. The Mahalanobis error metric ∥μ−μ̂∥_Σ measures the distance between μ̂ and μ relative to Σ; it characterizes the error of the sample mean. Our algorithm runs in time Õ(nd^(ω−1) + nd/ε), where ω < 2.38 is the matrix multiplication exponent. We adapt an exponential-time approach of Brown, Gaboardi, Smith, Ullman, and Zakynthinou (2021), giving efficient variants of stable mean and covariance estimation subroutines that also improve the sample complexity to the nearly optimal bound above. Our stable covariance estimator can be turned into a private covariance estimator for unrestricted subgaussian distributions. With n ≳ d^(3/2) samples, our estimate is accurate in spectral norm. This is the first such algorithm using n = o(d²) samples, answering an open question posed by Alabi et al. (2022). With n ≳ d² samples, our estimate is accurate in Frobenius norm. This leads to a fast, nearly optimal algorithm for private learning of unrestricted Gaussian distributions in TV distance. Duchi, Haque, and Kuditipudi (2023) obtained similar results independently and concurrently. 
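As background for the guarantee above, the sketch below shows the textbook Gaussian-mechanism baseline for private mean estimation. This is NOT the paper's covariance-aware algorithm: it clips samples to an L2 radius and adds isotropic noise, so its error is Euclidean rather than Mahalanobis; the clipping radius R and the privacy parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Textbook Gaussian-mechanism baseline (NOT the paper's covariance-aware
# algorithm): clip each sample to L2 radius R, then add isotropic Gaussian
# noise calibrated to (eps, delta)-differential privacy.
def dp_mean(X, eps, delta, R):
    n, d = X.shape
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_clip = X * np.minimum(1.0, R / norms)      # clip rows to radius R
    sens = 2.0 * R / n                           # L2 sensitivity of the mean
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return X_clip.mean(axis=0) + rng.normal(0.0, sigma, size=d)

X = rng.standard_normal((10_000, 5)) + 1.0       # true mean: all-ones vector
mu_hat = dp_mean(X, eps=1.0, delta=1e-5, R=10.0)
```

Because the noise here is isotropic, the error does not adapt to Σ; adapting the noise to the (unknown) covariance while remaining private and computationally efficient is precisely the contribution the abstract describes.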