Title: Eigenvector delocalization for non‐Hermitian random matrices and applications
Improving upon results of Rudelson and Vershynin, we establish delocalization bounds for eigenvectors of independent-entry random matrices. In particular, we show that with high probability every eigenvector is delocalized, meaning any subset of its coordinates carries an appropriate proportion of its mass. Our results hold for random matrices with genuinely complex as well as real entries. As an application of our methods, we also establish delocalization bounds for normal vectors to random hyperplanes. The proofs of our main results rely on a least singular value bound for genuinely complex rectangular random matrices, which generalizes a previous bound due to the first author, and may be of independent interest.
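The delocalization statement lends itself to a quick numerical check. The sketch below illustrates the phenomenon, not the paper's proof; the matrix size, subset size, and normalization are arbitrary choices. It samples an iid complex Gaussian matrix and measures how much of each unit eigenvector's mass lands on a random coordinate subset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# iid complex Gaussian (Ginibre-type) matrix with variance-1/n entries
A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
_, vecs = np.linalg.eig(A)            # columns are unit eigenvectors

# For each eigenvector, compare the mass on a random subset S of
# coordinates with the "fair share" |S|/n that perfect delocalization
# would give: no subset should carry far less than its share.
k = n // 10                           # |S| = n/10
S = rng.choice(n, size=k, replace=False)
mass = np.sum(np.abs(vecs[S, :]) ** 2, axis=0)   # mass of each eigenvector on S

print("fair share |S|/n :", k / n)
print("min subset mass  :", mass.min())
print("max subset mass  :", mass.max())
```

In this experiment the mass every eigenvector places on S should come out close to the fair share 1/10, consistent with the quoted bounds.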
Award ID(s): 1702533, 1810500
PAR ID: 10139322
Author(s) / Creator(s):
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name: Random Structures & Algorithms
Volume: 57
Issue: 1
ISSN: 1042-9832
Page Range / eLocation ID: p. 169-210
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. Abstract: We provide high-probability bounds on the condition number of random feature matrices. In particular, we show that if the complexity ratio $N/m$, where $N$ is the number of neurons and $m$ is the number of data samples, scales like $\log^{-1}(N)$ or $\log(m)$, then the random feature matrix is well-conditioned. This result holds without the need for regularization and relies on establishing various concentration bounds between dependent components of the random feature matrix. Additionally, we derive bounds on the restricted isometry constant of the random feature matrix. We also derive an upper bound for the risk associated with regression problems using a random feature matrix. This upper bound exhibits the double descent phenomenon and indicates that it is an effect of the double descent behaviour of the condition number. The risk bounds cover the underparameterized setting, via the least squares problem, and the overparameterized setting, via either the minimum norm interpolation problem or a sparse regression problem. For the noiseless least squares or sparse regression cases, we show that the risk decreases as $m$ and $N$ increase. The risk bound matches the optimal scaling in the literature, and the constants in our results are explicit and independent of the dimension of the data.
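As a rough empirical companion to this statement, the sketch below builds a random feature matrix and tracks the ratio of its extreme singular values as N grows past m. The cosine feature map, the data distribution, and the 1/sqrt(N) normalization are illustrative choices, not necessarily the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 10, 200                        # data dimension, number of samples

# Data points on the unit sphere; random-Fourier-style features cos(<x, w>).
X = rng.standard_normal((m, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

for N in [50, 200, 800, 3200]:        # number of neurons/features
    W = rng.standard_normal((d, N))
    A = np.cos(X @ W) / np.sqrt(N)    # m x N random feature matrix
    s = np.linalg.svd(A, compute_uv=False)
    print(f"N={N:5d}  N/m={N/m:5.1f}  s_max/s_min = {s[0]/s[-1]:.2e}")
```

Once N/m is large enough, on the order of log m per the abstract, the singular value ratio should settle at a modest constant rather than blowing up.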
2. By tightening the conventional Lieb-Robinson bounds to better handle systems that lack translation invariance, we determine the extent to which “weak links” suppress operator growth in disordered one-dimensional spin chains. In particular, we prove that ballistic growth is impossible when the distribution of coupling strengths μ(J) has a sufficiently heavy tail at small J, and we identify the correct dynamical exponent to use instead. Furthermore, through a detailed analysis of the special case in which the couplings are genuinely random and independent, we find that the standard formulation of Lieb-Robinson bounds is insufficient to capture the complexity of the dynamics: we must distinguish between bounds that hold for all sites of the chain and bounds that hold only for a subsequence of sites, and we show by explicit example that the two can have dramatically different behaviors. All the same, our result for the dynamical exponent is tight, in that we prove by counterexample that no Lieb-Robinson bound with a smaller exponent can exist. We close by discussing the implications of our results, both major and minor, for numerous applications ranging from quench dynamics to the structure of ground states.
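As a caricature of the weak-link mechanism (entirely a toy of my own, not the paper's model or exponent): suppose crossing a link with coupling J costs time of order 1/J, in the spirit of a Lieb-Robinson velocity proportional to J. If the couplings have density ~ αJ^(α-1) near J = 0 with α < 1, the traversal time over distance r then grows like r^(1/α), i.e. strictly faster than ballistic.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.5                          # heavy tail at small J: density ~ alpha * J**(alpha-1)

# Toy assumption: crossing a link with coupling J takes time ~ 1/J.
# J = U**(1/alpha) with U uniform on (0,1] has density alpha * J**(alpha-1).
for r in [10**3, 10**4, 10**5, 10**6]:
    J = rng.uniform(size=r) ** (1.0 / alpha)
    T = np.sum(1.0 / J)              # traversal time over distance r
    print(f"r={r:8d}  log T / log r = {np.log(T) / np.log(r):.2f}")
```

For α = 0.5 the printed ratio drifts toward 2, the heavy-tailed-sum scaling r^(1/α); whether this toy exponent matches the paper's dynamical exponent is not something the abstract settles.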
3. This paper expands the analysis of randomized low-rank approximation beyond the Gaussian distribution to four classes of random matrices: (1) independent sub-Gaussian entries, (2) independent sub-Gaussian columns, (3) independent bounded columns, and (4) independent columns with bounded second moment. Using a novel interpretation of the low-rank approximation error involving sample covariance matrices, we provide insight into what makes a random matrix suitable for randomized low-rank approximation. Although our bounds involve unspecified absolute constants (a consequence of the underlying nonasymptotic theory of random matrices), they allow for qualitative comparisons across distributions. The analysis details the minimal number of samples (the number of columns of the random matrix) and the error in the resulting low-rank approximation. We illustrate our analysis in the context of the randomized subspace iteration method as a representative algorithm for low-rank approximation; however, all the results are broadly applicable to other low-rank approximation techniques. We conclude our discussion with numerical examples using both synthetic and real-world test matrices.
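The following sketch of randomized subspace iteration makes the comparison concrete: the same algorithm is run with a Gaussian test matrix and with an iid Rademacher (independent sub-Gaussian entries) test matrix. The oversampling amount, power-iteration count, and test problem are arbitrary choices for illustration.

```python
import numpy as np

def randomized_subspace_iteration(A, k, q=2, omega=None, rng=None):
    """Low-rank approximation of A (rank at most k+5, oversampling 5)."""
    if rng is None:
        rng = np.random.default_rng()
    if omega is None:
        omega = rng.standard_normal((A.shape[1], k + 5))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ omega)
    for _ in range(q):                # power iterations sharpen the captured subspace
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    return Q @ (Q.T @ A)              # project A onto the captured subspace

rng = np.random.default_rng(3)
m, n, k = 300, 200, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)              # rapidly decaying singular values
A = (U * s) @ V.T

# Same algorithm, two of the entry distributions from the paper.
for name, omega in [("gaussian", rng.standard_normal((n, k + 5))),
                    ("rademacher", rng.choice([-1.0, 1.0], size=(n, k + 5)))]:
    Ak = randomized_subspace_iteration(A, k, omega=omega)
    err = np.linalg.norm(A - Ak, 2) / s[k]   # relative to the optimal rank-k error
    print(f"{name:10s}  error / sigma_(k+1) = {err:.2f}")
```

Both distributions should give errors within a small factor of the optimal sigma_(k+1); such qualitative comparisons across distributions are what the paper's bounds enable.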
4. We develop new techniques for proving lower bounds on the least singular value of random matrices with limited randomness. The matrices we consider have entries given by polynomials of a few underlying base random variables. This setting captures a core technical challenge in obtaining smoothed analysis guarantees for many algorithmic settings. Least singular value bounds often require strong anti-concentration inequalities, which are intricate and much less understood than concentration (or large deviation) bounds. First, we introduce a general technique for proving anti-concentration that uses well-conditionedness properties of the Jacobian of a polynomial map, and we show how to combine it with a hierarchical net argument to prove least singular value bounds. Our second tool is a new statement about least singular values for reasoning about higher-order lifts of smoothed matrices and the action of linear operators on them. Apart from yielding simpler proofs of existing smoothed analysis results, these tools let us handle more general families of random matrices, producing smoothed analysis guarantees in several previously open settings. These new settings include power sum decompositions and certifying robust entanglement of subspaces, where prior work could only establish least singular value bounds for fully random instances or only show non-robust genericity guarantees.
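A toy experiment in the spirit of this setting (my own construction, not the paper's): the matrix entries are fixed quadratic polynomials of just k = 5 base variables, so the randomness is genuinely limited, and smoothing the base point with Gaussian noise of size rho lifts the least singular value away from the degenerate point x0 = 0.

```python
import numpy as np

rng = np.random.default_rng(4)
k, n = 5, 20                          # few base variables, larger matrix

# Fixed random quadratic forms: M(x)[i, j] = x^T Q[i, j] x.
Q = rng.standard_normal((n, n, k, k))

def M(x):
    return np.einsum("ijab,a,b->ij", Q, x, x)

x0 = np.zeros(k)                      # degenerate point: M(x0) = 0, sigma_min = 0
for rho in [1e-3, 1e-2, 1e-1]:
    smin = min(np.linalg.svd(M(x0 + rho * rng.standard_normal(k)),
                             compute_uv=False)[-1] for _ in range(20))
    print(f"rho={rho:.0e}  min sigma_min over 20 trials = {smin:.2e}")
```

Here sigma_min scales roughly like rho^2 because the entries are homogeneous quadratics; lower bounds of this polynomial-in-rho flavor are the kind of statement the anti-concentration machinery above targets.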
5. Abstract: The structured singular value (SSV), or μ, is used to assess the robust stability and performance of an uncertain linear time-invariant system. Existing algorithms compute upper and lower bounds on the SSV for structured uncertainties that contain repeated (real or complex) scalars and/or nonrepeated complex full-blocks. This paper presents algorithms to compute bounds on the SSV for the case of repeated complex full-blocks. This class of uncertainty is relevant for the input-output analysis of many convective systems, such as fluid flows. Specifically, we present a power iteration to compute the SSV lower bound for repeated complex full-blocks; it generalizes existing power iterations for repeated complex scalars and nonrepeated complex full-blocks. The upper bound can be formulated as a semi-definite program (SDP), which we solve using a standard interior-point method to compute optimal scaling matrices associated with the repeated full-blocks. Our implementation of the method only requires gradient information, which improves its computational efficiency. Finally, we test the proposed algorithms on an example model of incompressible fluid flow. The proposed methods provide less conservative bounds than prior results, which ignore the repeated full-block structure.
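For orientation, in the simplest case of a single nonrepeated complex full-block the SSV collapses to the largest singular value, and the power iteration being generalized reduces to the classical singular-value power iteration. The sketch below implements only that base case, not the paper's repeated full-block algorithm.

```python
import numpy as np

def ssv_lower_bound_full_block(M, iters=200, rng=None):
    """Power iteration for the largest singular value of M.

    For a single complex full-block uncertainty, mu(M) equals the
    maximum singular value, so this computes the SSV in that special
    case (a lower bound while the iteration is still converging)."""
    if rng is None:
        rng = np.random.default_rng()
    n = M.shape[1]
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):            # alternate M and M^H, renormalizing
        u = M @ v
        u /= np.linalg.norm(u)
        v = M.conj().T @ u
        sigma = np.linalg.norm(v)
        v /= sigma
    return sigma

rng = np.random.default_rng(5)
M = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
print("power iteration :", ssv_lower_bound_full_block(M, rng=rng))
print("sigma_max (SVD) :", np.linalg.svd(M, compute_uv=False)[0])
```

The two printed values should agree to many digits. Once the full-block is repeated, no such simple reduction is available, which is where the paper's power iteration and SDP upper bound come in.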