Title: Phase transition and higher order analysis of Lq regularization under dependence
Abstract

We study the problem of estimating a $k$-sparse signal ${\boldsymbol \beta }_{0}\in{\mathbb{R}}^{p}$ from a set of noisy observations $\mathbf{y}\in{\mathbb{R}}^{n}$ under the model $\mathbf{y}=\mathbf{X}{\boldsymbol \beta }_{0}+\mathbf{w}$, where $\mathbf{X}\in{\mathbb{R}}^{n\times p}$ is the measurement matrix whose rows are drawn independently from the distribution $N(0,{\boldsymbol \varSigma })$. We consider the class of $L_{q}$-regularized least squares (LQLS), given by the formulation $\hat{{\boldsymbol \beta }}(\lambda )=\text{argmin}_{{\boldsymbol \beta }\in{\mathbb{R}}^{p}}\frac{1}{2}\|\mathbf{y}-\mathbf{X}{\boldsymbol \beta }\|^{2}_{2}+\lambda \|{\boldsymbol \beta }\|_{q}^{q}$, where $\|\cdot \|_{q}$ $(0\le q\le 2)$ denotes the $L_{q}$-norm. In the setting $p,n,k\rightarrow \infty $ with fixed $k/p=\epsilon $ and $n/p=\delta $, we derive the asymptotic risk of $\hat{{\boldsymbol \beta }}(\lambda )$ for an arbitrary covariance matrix ${\boldsymbol \varSigma }$, which generalizes the existing results for the standard Gaussian design, i.e. $X_{ij}\overset{\text{i.i.d.}}{\sim }N(0,1)$. These results are derived using the non-rigorous replica method. We perform a higher-order analysis of LQLS in the small-error regime, in which the first dominant term can be used to determine the phase transition behavior of LQLS. Our results show that the first dominant term does not depend on the covariance structure of ${\boldsymbol \varSigma }$ in the cases $0\le q\lt 1$ and $1\lt q\le 2$, which indicates that the correlations among predictors only affect the phase transition curve in the case $q=1$, a.k.a. LASSO. To study the influence of the covariance structure of ${\boldsymbol \varSigma }$ on the performance of LQLS in the cases $0\le q\lt 1$ and $1\lt q\le 2$, we derive explicit formulas for the second dominant term in the expansion of the asymptotic risk in terms of small error. Extensive computational experiments confirm that our analytical predictions are consistent with numerical results.
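Although the paper's risk formulas are asymptotic and replica-based, the LQLS estimator itself is straightforward to experiment with numerically. The following is a minimal proximal-gradient sketch of our own (the names `prox_lq` and `lqls` are not from the paper); for $q<1$ the penalty is non-convex, so the iteration is only guaranteed to reach a stationary point.

```python
import numpy as np

def prox_lq(v, lam, q):
    """Elementwise proximal operator of b -> lam*|b|^q.
    Closed forms exist for q in {0, 1, 2}; other q fall back to a 1-D grid search."""
    if q == 1:                         # soft thresholding (LASSO)
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    if q == 2:                         # ridge-style shrinkage
        return v / (1.0 + 2.0 * lam)
    if q == 0:                         # hard thresholding (L0)
        return np.where(v**2 > 2.0 * lam, v, 0.0)
    # generic q: minimize 0.5*(b - |v|)^2 + lam*b^q over b >= 0, then restore the sign
    b = np.linspace(0.0, np.abs(v).max() + 1.0, 2001)
    out = np.empty_like(v)
    for i, vi in enumerate(v):
        out[i] = np.sign(vi) * b[np.argmin(0.5 * (b - abs(vi))**2 + lam * b**q)]
    return out

def lqls(X, y, lam, q, n_iter=500):
    """Proximal gradient for 0.5*||y - X b||_2^2 + lam*||b||_q^q."""
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = prox_lq(beta - X.T @ (X @ beta - y) / L, lam / L, q)
    return beta

# toy usage: correlated Gaussian design, mirroring the paper's setting
rng = np.random.default_rng(0)
n, p, k = 100, 200, 10
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
beta0 = np.zeros(p); beta0[:k] = 1.0
y = X @ beta0 + 0.1 * rng.standard_normal(n)
print(np.mean((lqls(X, y, lam=1.0, q=0.5) - beta0) ** 2))   # per-coordinate risk
```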

 
NSF-PAR ID: 10491695
Author(s) / Creator(s): ; ;
Publisher / Repository: Oxford University Press
Date Published:
Journal Name: Information and Inference: A Journal of the IMA
Volume: 13
Issue: 1
ISSN: 2049-8772
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    It has been recently established in David and Mayboroda (Approximation of Green functions and domains with uniformly rectifiable boundaries of all dimensions. arXiv:2010.09793) that on uniformly rectifiable sets the Green function is almost affine in the weak sense, and moreover, in some scenarios such Green function estimates are equivalent to the uniform rectifiability of a set. The present paper tackles a strong analogue of these results, starting with the “flagship” degenerate operators on sets with lower-dimensional boundaries. We consider the elliptic operators $L_{\beta ,\gamma } = -{\text {div}}\, D^{d+1+\gamma -n} \nabla $ associated to a domain $\Omega \subset {\mathbb {R}}^n$ with a uniformly rectifiable boundary $\Gamma $ of dimension $d < n-1$, the now usual distance to the boundary $D = D_\beta $ given by $D_\beta (X)^{-\beta } = \int _{\Gamma } |X-y|^{-d-\beta } \, d\sigma (y)$ for $X \in \Omega $, where $\beta >0$ and $\gamma \in (-1,1)$. In this paper we show that the Green function $G$ for $L_{\beta ,\gamma }$, with pole at infinity, is well approximated by multiples of $D^{1-\gamma }$, in the sense that the function $\big | D\nabla \big (\ln \big ( \frac{G}{D^{1-\gamma }} \big )\big )\big |^2$ satisfies a Carleson measure estimate on $\Omega $. We underline that the strong and the weak results are different in nature and, of course, at the level of the proofs: the latter extensively used compactness arguments, while the present paper relies on some intricate integration by parts and the properties of the “magical” distance function from David et al. (Duke Math J, to appear).
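    As a purely illustrative numeric companion (our own, not from the paper), one can test the defining formula for $D_\beta $ on the simplest possible lower-dimensional boundary: $\Gamma $ a straight line in ${\mathbb {R}}^3$ (so $d=1$, $n=3$), where symmetry forces $D_\beta $ to be an exact constant multiple of the Euclidean distance to $\Gamma $. The quadrature sketch below checks that the ratio is independent of the distance.

    ```python
    import numpy as np
    from scipy.integrate import quad

    # Gamma = the x-axis in R^3 (dimension d = 1, ambient n = 3), sigma = arc length.
    # Definition: D_beta(X)^(-beta) = int_Gamma |X - y|^(-d - beta) dsigma(y).
    d, beta = 1, 0.5

    def D_beta(X):
        integrand = lambda t: ((X[0] - t)**2 + X[1]**2 + X[2]**2) ** (-(d + beta) / 2)
        val, _ = quad(integrand, -np.inf, np.inf)
        return val ** (-1.0 / beta)

    # For a flat Gamma the ratio D_beta(X) / dist(X, Gamma) is a constant c(d, beta):
    for r in [0.1, 1.0, 10.0]:
        X = np.array([0.3, r, 0.0])        # dist(X, Gamma) = r
        print(r, D_beta(X) / r)            # same value for every r
    ```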

     
  2. Abstract

    A generic out-of-sample error estimate is proposed for $M$-estimators regularized with a convex penalty in high-dimensional linear regression where $(\boldsymbol{X},\boldsymbol{y})$ is observed and the dimension $p$ and sample size $n$ are of the same order. The out-of-sample error estimate enjoys a relative error of order $n^{-1/2}$ in a linear model with Gaussian covariates and independent noise, either non-asymptotically when $p/n\le \gamma $ or asymptotically in the high-dimensional asymptotic regime $p/n\to \gamma ^{\prime}\in (0,\infty )$. General differentiable loss functions $\rho $ are allowed provided that the derivative of the loss is 1-Lipschitz; this includes the least-squares loss as well as robust losses such as the Huber loss and its smoothed versions. The validity of the out-of-sample error estimate holds either under a strong convexity assumption, or for the L1-penalized Huber M-estimator and the Lasso under a sparsity assumption and a bound on the number of contaminated observations. For the square loss and in the absence of corruption in the response, the results additionally yield $n^{-1/2}$-consistent estimates of the noise variance and of the generalization error. This generalizes, to arbitrary convex penalty and arbitrary covariance, estimates that were previously known for the Lasso.
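    For the Lasso special case mentioned in the last sentence, the previously known out-of-sample error estimate corrects the training error by the degrees-of-freedom factor $(1-\hat{\textsf{df}}/n)^{-2}$, with $\hat{\textsf{df}}$ the size of the Lasso support. The sketch below is our toy, with identity covariance and Gaussian noise (so it does not exercise the paper's general-covariance or robust-loss results); it compares that estimate against the true generalization error.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, s, sigma = 600, 900, 30, 1.0              # p/n = 1.5
    beta = np.zeros(p); beta[:s] = 1.0
    X = rng.standard_normal((n, p))                 # identity covariance (toy choice)
    y = X @ beta + sigma * rng.standard_normal(n)

    fit = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
    bhat = fit.coef_
    df = np.count_nonzero(bhat)                     # Lasso degrees of freedom

    resid = y - X @ bhat
    err_hat = (resid @ resid / n) / (1.0 - df / n) ** 2   # df-adjusted estimate

    # true out-of-sample error for a fresh observation (x0, y0):
    # E[(y0 - x0 @ bhat)^2] = ||bhat - beta||^2 + sigma^2 under identity covariance
    err_true = np.sum((bhat - beta) ** 2) + sigma ** 2
    print(err_hat, err_true)                        # the two should be close
    ```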

     
  3. Abstract

    Covariance matrices are fundamental to the analysis and forecast of economic, physical and biological systems. Although the eigenvalues $\{\lambda _i\}$ and eigenvectors $\{\boldsymbol{u}_i\}$ of a covariance matrix are central to such endeavours, in practice one must inevitably approximate the covariance matrix based on data with finite sample size $n$ to obtain empirical eigenvalues $\{\tilde{\lambda }_i\}$ and eigenvectors $\{\tilde{\boldsymbol{u}}_i\}$, and therefore understanding the error so introduced is of central importance. We analyse the eigenvector error $\|\boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2$ while leveraging the assumption that the true covariance matrix, of size $p$, is drawn from a matrix ensemble with known spectral properties; in particular, we assume that the distribution of population eigenvalues weakly converges as $p\to \infty $ to a spectral density $\rho (\lambda )$ and that the spacing between population eigenvalues is similar to that of the Gaussian orthogonal ensemble. Our approach complements previous analyses of eigenvector error that require the full set of eigenvalues to be known, which can be computationally infeasible when $p$ is large. To provide a scalable approach for uncertainty quantification of eigenvector error, we consider a fixed eigenvalue $\lambda $ and approximate the distribution of the expected square error $r= \mathbb{E}\left [\| \boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2\right ]$ across the matrix ensemble for all $\boldsymbol{u}_i$ associated with $\lambda _i=\lambda $. We find, for example, that for sufficiently large matrix size $p$ and sample size $n> p$, the probability density of $r$ scales as $1/(nr^{2})$. This power-law scaling implies that the eigenvector error is extremely heterogeneous: even if $r$ is very small for most eigenvectors, it can be large for others with non-negligible probability. We support this and further results with numerical experiments.
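    A short Monte Carlo makes the heavy-tailed behavior concrete. The sketch below is our own construction: the population covariance is diagonal, so the true eigenvectors are the coordinate axes, and the per-trial squared error of one empirical eigenvector serves as a proxy for $r$; its empirical tail can then be compared against the predicted $1/(nr^{2})$ density.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p, n, trials = 100, 400, 500

    # population eigenvalues spread over [1, 3]; true eigenvectors = coordinate axes
    lams = np.linspace(1.0, 3.0, p)
    i = p // 2                                        # track the middle eigenvector
    errors = []
    for _ in range(trials):
        Z = rng.standard_normal((n, p)) * np.sqrt(lams)   # rows ~ N(0, diag(lams))
        S = Z.T @ Z / n                                   # sample covariance
        _, U = np.linalg.eigh(S)                          # eigenvalues ascending
        u = U[:, i]
        errors.append(2.0 - 2.0 * abs(u[i]))              # min over sign of ||e_i - (+/-)u||^2

    errors = np.sort(errors)
    print(np.percentile(errors, [50, 90, 99]))        # long right tail: 99th >> median
    ```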
  4. Abstract

    Approximate integer programming is the following: For a given convex body $K \subseteq {\mathbb {R}}^n$, either determine whether $K \cap {\mathbb {Z}}^n$ is empty, or find an integer point in the convex body $2\cdot (K - c) + c$, which is $K$, scaled by 2 from its center of gravity $c$. Approximate integer programming can be solved in time $2^{O(n)}$, while the fastest known methods for exact integer programming run in time $2^{O(n)} \cdot n^n$. So far, there are no efficient methods for integer programming known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point $x^* \in (K \cap {\mathbb {Z}}^n)$ can be found in time $2^{O(n)}$, provided that the remainders of each component $x_i^* \bmod \ell $ for some arbitrarily fixed $\ell \ge 5(n+1)$ of $x^*$ are given. The algorithm is based on a cutting-plane technique, iteratively halving the volume of the feasible set. The cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a $2^{O(n)} n^n$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (Integer programming, lattice algorithms, and deterministic volume estimation. Georgia Institute of Technology, Atlanta, 2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation-standard form $Ax = b,\ 0 \le x \le u,\ x \in {\mathbb {Z}}^n$. Such a problem can be reduced to the solution of $\prod _i O(\log u_i +1)$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $0 \le x_i \le p(n)$ can be solved in time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.
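    The second method hinges on the decomposition $x = \ell z + r$: guessing the residue vector $r = x \bmod \ell $ turns $Ax = b,\ 0 \le x \le u$ into a problem in $z$ whose box is smaller by a factor of $\ell $. The toy brute-force sketch below is our illustration of that reduction only; the paper replaces the inner solver with approximate integer programming to obtain the stated running times.

    ```python
    import itertools
    import numpy as np

    def solve_ip(A, b, u):
        """Brute-force feasibility for Ax = b, 0 <= x <= u, x integral (toy sizes only)."""
        for x in itertools.product(*[range(ui + 1) for ui in u]):
            if np.array_equal(A @ np.array(x), b):
                return np.array(x)
        return None

    def solve_via_remainders(A, b, u, ell):
        """Guess r = x mod ell, then solve for z in x = ell*z + r over an ell-times
        smaller box.  Illustrates the decomposition, not the paper's algorithm."""
        u = np.array(u)
        for r in itertools.product(*[range(min(ell, ui + 1)) for ui in u]):
            r = np.array(r)
            rhs = b - A @ r
            if np.any(rhs % ell != 0):     # A(ell*z) = rhs forces ell | rhs
                continue
            z = solve_ip(A, rhs // ell, (u - r) // ell)
            if z is not None:
                return ell * z + r
        return None

    A = np.array([[3, 5, 7]])              # one equation, three bounded variables
    print(solve_via_remainders(A, b=np.array([41]), u=[10, 10, 10], ell=3))
    ```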

     
  5. Abstract

    When $k$ and $s$ are natural numbers and ${\mathbf h}\in {\mathbb Z}^k$, denote by $J_{s,k}(X;\,{\mathbf h})$ the number of integral solutions of the system $$ \sum_{i=1}^s(x_i^j-y_i^j)=h_j\quad (1\leqslant j\leqslant k), $$ with $1\leqslant x_i,y_i\leqslant X$. When $s\lt k(k+1)/2$ and $(h_1,\ldots ,h_{k-1})\ne {\mathbf 0}$, Brandes and Hughes have shown that $J_{s,k}(X;\,{\mathbf h})=o(X^s)$. In this paper we improve on quantitative aspects of this result, and, subject to an extension of the main conjecture in Vinogradov’s mean value theorem, we obtain an asymptotic formula for $J_{s,k}(X;\,{\mathbf h})$ in the critical case $s=k(k+1)/2$. The latter requires minor arc estimates going beyond square-root cancellation.
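    For very small parameters, $J_{s,k}(X;\,{\mathbf h})$ can be evaluated by brute force, which makes the contrast between ${\mathbf h}={\mathbf 0}$ (where the diagonal $x=y$ contributes) and ${\mathbf h}\ne {\mathbf 0}$ concrete. The sketch below is our illustration only; the paper's results concern the asymptotic regime far beyond this range.

    ```python
    import itertools
    from collections import Counter

    def J(s, k, X, h):
        """Count integral solutions of sum_{i<=s} (x_i^j - y_i^j) = h_j (1 <= j <= k)
        with 1 <= x_i, y_i <= X, by matching power-sum vectors of the x and y sides."""
        def power_sums(t):
            return tuple(sum(v ** j for v in t) for j in range(1, k + 1))
        C = Counter(power_sums(t) for t in itertools.product(range(1, X + 1), repeat=s))
        # pair every x-tuple (power sums = key) with y-tuples whose sums equal key - h
        return sum(cnt * C.get(tuple(v - hj for v, hj in zip(key, h)), 0)
                   for key, cnt in C.items())

    # h = 0 recovers the Vinogradov mean value; a fixed nonzero h removes the
    # diagonal x = y, and the counts grow visibly slower, consistent with o(X^s)
    for h in [(0, 0), (0, 2)]:
        print(h, [J(3, 2, X, h) for X in (4, 8, 16)])
    ```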

     