We develop a unified approach to bounding the largest and smallest singular values of an inhomogeneous random rectangular matrix, based on the non-backtracking operator and the Ihara-Bass formula for general random Hermitian matrices with a bipartite block structure. We obtain probabilistic upper (respectively, lower) bounds for the largest (respectively, smallest) singular values of a large rectangular random matrix X. These bounds are given in terms of the maximal and minimal 2-norms of the rows and columns of the variance profile of X. The proofs involve finding probabilistic upper bounds on the spectral radius of an associated non-backtracking matrix B. The two-sided bounds can be applied to the centered adjacency matrix of sparse inhomogeneous Erdős-Rényi bipartite graphs for a wide range of sparsity, down to criticality. In particular, for Erdős-Rényi bipartite graphs G(n, m, p) with p = ω(log n)/n and m/n → y ∈ (0, 1), our sharp bounds imply that there are no outliers outside the support of the Marčenko-Pastur law almost surely. This result extends the Bai-Yin theorem to sparse rectangular random matrices.
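A minimal simulation sketch of the regime described above (the sizes and the constant in the sparsity are illustrative choices, not taken from the paper): sample the centered biadjacency matrix of a bipartite Erdős-Rényi graph G(n, m, p), rescale by the entrywise standard deviation, and compare its extreme singular values with the Marčenko-Pastur edges 1 ± √y.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2000
y = 0.5                               # target aspect ratio m/n
m = int(y * n)
p = 5 * np.log(n) / n                 # illustrative sparsity in the p = omega(log n)/n regime

A = rng.binomial(1, p, size=(n, m)).astype(float)   # biadjacency matrix of G(n, m, p)
X = A - p                                           # centered adjacency matrix
sigma = np.sqrt(p * (1 - p))                        # entrywise standard deviation

s = np.linalg.svd(X / (sigma * np.sqrt(n)), compute_uv=False)
print("rescaled extreme singular values:", s[0], s[-1])
print("Marcenko-Pastur edges 1 + sqrt(y), 1 - sqrt(y):", 1 + np.sqrt(y), 1 - np.sqrt(y))
```

For these parameters the printed extreme singular values should land close to the two edges, consistent with the no-outliers statement above.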
Spectral Radii of Products of Random Rectangular Matrices
We consider m independent random rectangular matrices whose entries are independent and identically distributed standard complex Gaussian random variables. Assume the product of the m rectangular matrices is an n-by-n square matrix. The maximum absolute value of the n eigenvalues of the product matrix is called the spectral radius. In this paper, we study the limiting spectral radii of the product when m changes with n and can even diverge. We give a complete description for the limiting distribution of the spectral radius. Our results reduce to those in Jiang and Qi (J Theor Probab 30(1):326–364, 2017) when the rectangular matrices are square.
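A quick numerical sketch of the quantity being studied (the chain of dimensions below is an arbitrary illustrative choice, and no limiting-regime normalization from the paper is applied): multiply m independent rectangular matrices with i.i.d. standard complex Gaussian entries, chosen so that the product is n-by-n, and compute its spectral radius.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200
m = 3
dims = [n, 250, 300, n]              # m + 1 dimensions: n = n_0, n_1, ..., n_m = n

P = np.eye(n, dtype=complex)
for k in range(m):
    # standard complex Gaussian entries (real and imaginary parts scaled by 1/sqrt(2))
    G = (rng.standard_normal((dims[k], dims[k + 1]))
         + 1j * rng.standard_normal((dims[k], dims[k + 1]))) / np.sqrt(2)
    P = P @ G

spectral_radius = np.max(np.abs(np.linalg.eigvals(P)))
print("spectral radius of the product:", spectral_radius)
```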
- Award ID(s): 1916014
- PAR ID: 10168688
- Date Published:
- Journal Name: Journal of Theoretical Probability
- ISSN: 0894-9840
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: In this paper, we study the largest eigenvalues of sample covariance matrices with elliptically distributed data. We consider the sample covariance matrix $$Q=YY^{*},$$ where the data matrix $$Y \in \mathbb{R}^{p \times n}$$ contains i.i.d. $$p$$-dimensional observations $$\textbf{y}_{i}=\xi _{i}T\textbf{u}_{i},\;i=1,\dots ,n.$$ Here $$\textbf{u}_{i}$$ is distributed on the unit sphere, $$\xi _{i} \sim \xi $$ is some random variable that is independent of $$\textbf{u}_{i}$$, and $$T^{*}T=\varSigma $$ is some deterministic positive definite matrix. Under some mild regularity assumptions on $$\varSigma ,$$ assuming $$\xi ^{2}$$ has bounded support and certain decay behaviour near its edge so that the limiting spectral distribution of $$Q$$ has a square root decay behaviour near the spectral edge, we prove that the Tracy–Widom law holds for the largest eigenvalues of $$Q$$ when $$p$$ and $$n$$ are comparably large. Based on our results, we further construct some useful statistics to detect the signals when they are corrupted by high dimensional elliptically distributed noise.
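A small sampling sketch of this elliptical model (the choices of ξ and Σ below, taking u_i uniform on the sphere, and the 1/n normalization of Q are illustrative assumptions, not the paper's setup): draw the observations y_i = ξ_i T u_i and look at the largest eigenvalue of the resulting sample covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

p, n = 400, 1200
Sigma = np.diag(1.0 + 0.5 * rng.random(p))        # an illustrative positive definite Sigma
T = np.linalg.cholesky(Sigma).T                   # any T with T^T T = Sigma

U = rng.standard_normal((p, n))
U /= np.linalg.norm(U, axis=0)                    # columns u_i uniform on the unit sphere
xi = np.sqrt(p * rng.uniform(0.5, 1.0, size=n))   # xi^2 with bounded support (assumed choice)

Y = (T @ U) * xi                                  # columns y_i = xi_i T u_i
Q = Y @ Y.T / n                                   # sample covariance; 1/n normalization for convenience
print("largest eigenvalue of Q:", np.linalg.eigvalsh(Q)[-1])
```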
We present a new approach, based on graphon theory, to finding the limiting spectral distributions of general Wigner‐type matrices. This approach determines the moments of the limiting measures and the equations of their Stieltjes transforms explicitly with weaker assumptions on the convergence of variance profiles than previous results. As applications, we give a new proof of the semicircle law for generalized Wigner matrices and determine the limiting spectral distributions for three sparse inhomogeneous random graph models with sparsity ω(1/n): inhomogeneous random graphs with roughly equal expected degrees, W‐random graphs and stochastic block models with a growing number of blocks. Furthermore, we show our theorems can be applied to random Gram matrices with a variance profile for which we can find the limiting spectral distributions under weaker assumptions than previous results.
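A short simulation in the same spirit (the graphon W below is an arbitrary illustrative choice, not one from the paper): build a Wigner-type matrix whose variance profile is evaluated from a graphon at i.i.d. uniform latent coordinates, and inspect the support of its empirical spectral distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 1500
x = rng.random(n)                              # latent coordinates x_i ~ Uniform[0, 1]
W = lambda u, v: 0.25 + 0.5 * (u + v)          # an illustrative symmetric graphon
S = W(x[:, None], x[None, :])                  # variance profile S_ij = W(x_i, x_j)

G = rng.standard_normal((n, n))
H = np.triu(G) + np.triu(G, 1).T               # symmetric Gaussian noise
A = np.sqrt(S) * H / np.sqrt(n)                # Wigner-type matrix with variance profile S

eigs = np.linalg.eigvalsh(A)
print("support of the empirical spectral distribution:", eigs[0], eigs[-1])
```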
Abstract: We show that a nearly square independent and identically distributed random integral matrix is surjective over the integral lattice with very high probability. This answers a question by Koplewitz [6]. Our result extends to sparse matrices as well as to matrices of dependent entries.
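A hands-on illustration of what surjectivity over the integral lattice means, feasible only for small sizes (the size and entry range below are arbitrary choices for illustration): an integer n × (n+1) matrix defines a surjection Z^{n+1} → Z^n exactly when the gcd of its n × n minors equals 1.

```python
from itertools import combinations
from math import gcd

import sympy as sp

n = 6
M = sp.randMatrix(n, n + 1, min=-5, max=5, seed=4)   # small random integer matrix

# gcd of all n x n minors; the map Z^{n+1} -> Z^n is surjective iff this gcd is 1
g = 0
for cols in combinations(range(n + 1), n):
    minor = M.extract(list(range(n)), list(cols)).det()
    g = gcd(g, int(minor))

print("gcd of maximal minors:", g)
print("surjective over Z^n" if g == 1 else "not surjective over Z^n")
```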
Abstract: We provide high-probability bounds on the condition number of random feature matrices. In particular, we show that if the complexity ratio $$N/m$$, where $$N$$ is the number of neurons and $$m$$ is the number of data samples, scales like $$\log ^{-1}(N)$$ or $$\log (m)$$, then the random feature matrix is well-conditioned. This result holds without the need of regularization and relies on establishing various concentration bounds between dependent components of the random feature matrix. Additionally, we derive bounds on the restricted isometry constant of the random feature matrix. We also derive an upper bound for the risk associated with regression problems using a random feature matrix. This upper bound exhibits the double descent phenomenon and indicates that this is an effect of the double descent behaviour of the condition number. The risk bounds include the underparameterized setting using the least squares problem and the overparameterized setting using either the minimum norm interpolation problem or a sparse regression problem. For the noiseless least squares or sparse regression cases, we show that the risk decreases as $$m$$ and $$N$$ increase. The risk bound matches the optimal scaling in the literature and the constants in our results are explicit and independent of the dimension of the data.
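A quick experiment in the spirit of this result (the cosine feature map, the data scaling, and the sizes are assumptions for illustration, not the paper's setup): form a random feature matrix for several complexity ratios N/m and report its condition number, which degrades near N ≈ m and improves away from it.

```python
import numpy as np

rng = np.random.default_rng(5)

d = 20                                    # data dimension
m = 2000                                  # number of data samples
X = rng.standard_normal((m, d)) / np.sqrt(d)

for N in (200, 2000, 8000):               # under-, equi-, and over-parameterized
    W = rng.standard_normal((d, N))       # random weights ("neurons")
    A = np.cos(X @ W)                     # random Fourier-type feature matrix (illustrative)
    s = np.linalg.svd(A, compute_uv=False)
    print(f"N/m = {N / m:4.1f}   condition number = {s[0] / s[-1]:.2e}")
```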