Title: Dimension of the Exceptional Set in the Aronszajn–Donoghue Theorem for Finite Rank Perturbations
Abstract The classical Aronszajn–Donoghue theorem states that for a rank-one perturbation of a self-adjoint operator (by a cyclic vector) the singular parts of the spectral measures of the original and perturbed operators are mutually singular. As simple direct-sum-type examples show, this result does not hold for finite rank perturbations. However, the set of exceptional perturbations is quite small. Namely, for a family of rank $$d$$ perturbations $$A_{\boldsymbol{\alpha }}:= A + {\textbf{B}} {\boldsymbol{\alpha }} {\textbf{B}}^*$$, $${\textbf{B}}:{\mathbb C}^d\to{{\mathcal{H}}}$$, with $${\operatorname{Ran}}{\textbf{B}}$$ being cyclic for $$A$$, parametrized by $$d\times d$$ Hermitian matrices $${\boldsymbol{\alpha }}$$, the singular parts of the spectral measures of $$A$$ and $$A_{\boldsymbol{\alpha }}$$ are mutually singular for all $${\boldsymbol{\alpha }}$$ except for a small exceptional set $$E$$. It was shown earlier by the first two authors, see [4], that $$E$$ is a subset of measure zero of the space $$\textbf{H}(d)$$ of $$d\times d$$ Hermitian matrices. In this paper, we show that the set $$E$$ has small Hausdorff dimension, $$\dim E \le \dim \textbf{H}(d)-1 = d^2-1$$.
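As a purely illustrative aside (not part of the record above), the dichotomy described in the abstract has a finite-dimensional shadow that is easy to check numerically: a rank-one perturbation along a cyclic vector moves every eigenvalue of a Hermitian matrix with simple spectrum, while a rank-two perturbation of direct-sum type can leave an eigenvalue, and hence an atom of the spectral measure, in place. The NumPy sketch below is only a toy analogue; the matrices A, B and the parameter alpha are arbitrary choices, not objects from the paper.

```python
import numpy as np

# Toy finite-dimensional analogue of the Aronszajn-Donoghue dichotomy.
# A is Hermitian (here real diagonal) with simple spectrum; all choices
# below are arbitrary and made only for illustration.
A = np.diag([0.0, 1.0, 2.0, 3.0])

# Rank-one perturbation A + t*b*b^*: b has no zero coordinate in the
# eigenbasis of A, so b is cyclic and the perturbed eigenvalues strictly
# interlace; no eigenvalue of A survives.
b = np.ones(4) / 2.0
for t in (0.5, 1.0, 2.0):
    eig = np.linalg.eigvalsh(A + t * np.outer(b, b))
    print("rank 1, t =", t, "eigenvalues shared with A:",
          np.intersect1d(np.round(eig, 10), np.diag(A)))

# Rank-two perturbation A + B*alpha*B^* with Ran B cyclic for A: a
# direct-sum-type choice of the Hermitian parameter alpha leaves the
# eigenvalues 0 and 1 of A untouched, so the point masses of A and of
# the perturbed matrix are not mutually singular.
B = np.zeros((4, 2))
B[0, 0] = B[1, 0] = B[2, 1] = B[3, 1] = 1.0
alpha = np.diag([0.0, 1.0])
eig2 = np.linalg.eigvalsh(A + B @ alpha @ B.T)
print("rank 2 eigenvalues:", np.round(eig2, 10))
```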
Award ID(s): 1856719
PAR ID: 10244170
Journal Name: International Mathematics Research Notices
ISSN: 1073-7928
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract This paper investigates the spectral properties of Jacobi matrices with limit‐periodic coefficients. We show that generically the spectrum is a Cantor set of zero Lebesgue measure, and the spectral measures are purely singular continuous. For a dense set of limit‐periodic Jacobi matrices, we show that the spectrum is a Cantor set of zero lower box counting dimension while still retaining the singular continuity of the spectral type. We also show how results of this nature can be established by fixing the off‐diagonal coefficients and varying only the diagonal coefficients, and, in a more restricted version, by fixing the diagonal coefficients to be zero and varying only the off‐diagonal coefficients. We apply these results to produce examples of weighted Laplacians on the multidimensional integer lattice having purely singular continuous spectral type and zero‐dimensional spectrum. 
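As a side illustration (not taken from the paper), the periodic approximants behind limit-periodic operators are easy to explore numerically: the eigenvalues of a large truncation of a Jacobi matrix with period-$$p$$ coefficients accumulate on at most $$p$$ spectral bands, and the Cantor spectra described above arise when the bands keep splitting and their total length shrinks along the limit. A minimal NumPy sketch with arbitrarily chosen period-2 coefficients:

```python
import numpy as np

def jacobi_truncation(a, b, n):
    # n x n truncation of the Jacobi matrix whose off-diagonal and diagonal
    # coefficients repeat the finite lists a and b periodically.
    off = np.resize(a, n - 1)
    diag = np.resize(b, n)
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# Arbitrary period-2 coefficients: the whole-line operator has at most
# 2 spectral bands, and the eigenvalues of a large truncation fill them out.
a = [1.0, 0.5]   # off-diagonal weights
b = [0.0, 0.3]   # diagonal entries
eigs = np.linalg.eigvalsh(jacobi_truncation(a, b, 400))
print("spectrum of the truncation lies in [%.3f, %.3f]" % (eigs[0], eigs[-1]))
print("largest gap between consecutive eigenvalues:", np.diff(eigs).max())
# A limit-periodic Jacobi matrix is a uniform limit of such periodic ones;
# in the generic Cantor case described above the bands keep splitting and
# their total length shrinks to zero along the limit.
```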
  2. Abstract Covariance matrices are fundamental to the analysis and forecast of economic, physical and biological systems. Although the eigenvalues $$\{\lambda _i\}$$ and eigenvectors $$\{\boldsymbol{u}_i\}$$ of a covariance matrix are central to such endeavours, in practice one must inevitably approximate the covariance matrix based on data with finite sample size $$n$$ to obtain empirical eigenvalues $$\{\tilde{\lambda }_i\}$$ and eigenvectors $$\{\tilde{\boldsymbol{u}}_i\}$$, and therefore understanding the error so introduced is of central importance. We analyse eigenvector error $$\|\boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2$$ while leveraging the assumption that the true covariance matrix having size $$p$$ is drawn from a matrix ensemble with known spectral properties—particularly, we assume the distribution of population eigenvalues weakly converges as $$p\to \infty $$ to a spectral density $$\rho (\lambda )$$ and that the spacing between population eigenvalues is similar to that for the Gaussian orthogonal ensemble. Our approach complements previous analyses of eigenvector error that require the full set of eigenvalues to be known, which can be computationally infeasible when $$p$$ is large. To provide a scalable approach for uncertainty quantification of eigenvector error, we consider a fixed eigenvalue $$\lambda $$ and approximate the distribution of the expected square error $$r= \mathbb{E}\left [\| \boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2\right ]$$ across the matrix ensemble for all $$\boldsymbol{u}_i$$ associated with $$\lambda _i=\lambda $$. We find, for example, that for sufficiently large matrix size $$p$$ and sample size $$n > p$$, the probability density of $$r$$ scales as $$1/nr^2$$. This power-law scaling implies that the eigenvector error is extremely heterogeneous—even if $$r$$ is very small for most eigenvectors, it can be large for others with non-negligible probability. We support this and further results with numerical experiments.
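A hedged Monte Carlo sketch of the quantity being studied (the population covariance, the sizes and the eigenvector index below are arbitrary choices, and this is not the matrix ensemble or the approximation scheme of the paper): draw $$n$$ samples from a fixed covariance, compare one population eigenvector with its empirical counterpart, and repeat; even at small sizes the squared errors are strikingly heterogeneous across repetitions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 30, 120, 200                 # illustrative sizes with n > p

# Population covariance with a smooth eigenvalue profile (arbitrary choice).
lam = np.linspace(1.0, 4.0, p)              # population eigenvalues, ascending
U = np.linalg.qr(rng.standard_normal((p, p)))[0]
Sigma = U @ np.diag(lam) @ U.T

i = p // 2                                  # index of the eigenvector we track
errors = []
for _ in range(trials):
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    S = np.cov(X, rowvar=False)             # empirical covariance matrix
    _, V = np.linalg.eigh(S)                # ascending order, matching lam
    v = V[:, i] * np.sign(V[:, i] @ U[:, i])  # fix the sign ambiguity
    errors.append(np.sum((U[:, i] - v) ** 2))

errors = np.array(errors)
print("mean squared eigenvector error:", errors.mean())
print("max / mean (heterogeneity)    :", errors.max() / errors.mean())
```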
  3. Discrete and continuous frames can be considered as positive operator-valued measures (POVMs) that have integral representations using rank-one operators. However, not every POVM has an integral representation. One goal of this paper is to examine the POVMs that have finite-rank integral representations. More precisely, we present a necessary and sufficient condition under which a positive operator-valued measure $$F: \Omega \to B(H)$$ has an integral representation of the form $$F(E) =\sum_{k=1}^{m} \int_{E}\, G_{k}(\omega)\otimes G_{k}(\omega) d\mu(\omega)$$ for some weakly measurable maps $$G_{k} \ (1\leq k\leq m) $$ from a measurable space $$\Omega$$ to a Hilbert space $$\mathcal{H}$$ and some positive measure $$\mu$$ on $$\Omega$$. Similar characterizations are also obtained for projection-valued measures. As special consequences of our characterization we settle negatively a problem of Ehler and Okoudjou about probability frame representations of probability POVMs, and prove that an integral representable probability POVM can be dilated to an integral representable projection-valued measure if and only if the corresponding measure is purely atomic.
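In the purely atomic (finite, discrete) case the rank-one integral representation above reduces to a finite sum of outer products. The toy sketch below, which uses a three-vector tight frame in the plane chosen only for illustration (it is not a construction from the paper), builds such a POVM on the three-point set Ω = {0, 1, 2} and checks that its total mass is the identity and that each value is a positive rank-one operator.

```python
import numpy as np

# Three-vector tight frame in R^2, scaled so that the rank-one outer
# products sum to the identity; this gives a discrete POVM on {0, 1, 2}
# with a rank-one "integral" (here: sum) representation.
angles = 2 * np.pi * np.arange(3) / 3
G = np.sqrt(2.0 / 3.0) * np.vstack([np.cos(angles), np.sin(angles)])
# G[:, k] is the frame vector attached to the point k of Omega.

def F(E):
    # POVM value on a subset E of Omega: F(E) = sum_{k in E} G_k G_k^*.
    return sum(np.outer(G[:, k], G[:, k]) for k in E)

print("F(Omega) == identity:", np.allclose(F([0, 1, 2]), np.eye(2)))
print("eigenvalues of F({0}):", np.round(np.linalg.eigvalsh(F([0])), 6))
```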
  4. We develop a unified approach to bounding the largest and smallest singular values of an inhomogeneous random rectangular matrix, based on the non-backtracking operator and the Ihara-Bass formula for general random Hermitian matrices with a bipartite block structure. We obtain probabilistic upper (respectively, lower) bounds for the largest (respectively, smallest) singular values of a large rectangular random matrix X. These bounds are given in terms of the maximal and minimal 2-norms of the rows and columns of the variance profile of X. The proofs involve finding probabilistic upper bounds on the spectral radius of an associated non-backtracking matrix B. The two-sided bounds can be applied to the centered adjacency matrix of sparse inhomogeneous Erdős-Rényi bipartite graphs for a wide range of sparsity, down to criticality. In particular, for Erdős-Rényi bipartite graphs G(n, m, p) with p = ω(log n)/n and m/n → y ∈ (0,1), our sharp bounds imply that there are no outliers outside the support of the Marčenko-Pastur law almost surely. This result extends the Bai-Yin theorem to sparse rectangular random matrices.
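A small numerical sanity check in the spirit of this statement (it does not use the non-backtracking machinery, and the sizes and sparsity below are arbitrary choices): for a sparse bipartite Erdős-Rényi graph, the extreme singular values of the centered and rescaled biadjacency matrix should land near the edges 1 ± √y of the Marčenko-Pastur support.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 1000                       # aspect ratio m / n = y = 0.5
y = m / n
p = 3 * np.log(n) / n                   # sparse, above the log(n)/n scale

# Centered, rescaled biadjacency matrix of a bipartite Erdos-Renyi graph
# G(n, m, p): entries are Bernoulli(p) minus their mean, normalized so that
# the Marchenko-Pastur support of the singular values is [1-sqrt(y), 1+sqrt(y)].
X = (rng.random((n, m)) < p).astype(float)
H = (X - p) / np.sqrt(n * p * (1 - p))

s = np.linalg.svd(H, compute_uv=False)
print("largest singular value :", round(s[0], 3), " vs 1 + sqrt(y) =", 1 + np.sqrt(y))
print("smallest singular value:", round(s[-1], 3), " vs 1 - sqrt(y) =", 1 - np.sqrt(y))
```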
  5. Krylov subspace methods are a ubiquitous tool for computing near-optimal rank $$k$$ approximations of large matrices. While "large block" Krylov methods with block size at least $$k$$ give the best known theoretical guarantees, block size one (a single vector) or a small constant is often preferred in practice. Despite their popularity, we lack theoretical bounds on the performance of such "small block" Krylov methods for low-rank approximation. We address this gap between theory and practice by proving that small block Krylov methods essentially match all known low-rank approximation guarantees for large block methods. Via a black-box reduction we show, for example, that the standard single vector Krylov method run for $$t$$ iterations obtains the same spectral norm and Frobenius norm error bounds as a Krylov method with block size $$\ell \ge k$$ run for $$O(t/\ell)$$ iterations, up to a logarithmic dependence on the smallest gap between sequential singular values. That is, for a given number of matrix-vector products, single vector methods are essentially as effective as any choice of large block size. By combining our result with tail-bounds on eigenvalue gaps in random matrices, we prove that the dependence on the smallest singular value gap can be eliminated if the input matrix is perturbed by a small random matrix. Further, we show that single vector methods match the more complex algorithm of [Bakshi et al. '22], which combines the results of multiple block sizes to achieve an improved algorithm for Schatten $$p$$-norm low-rank approximation.
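A minimal sketch of a block-size-one Krylov method of the kind discussed here, written for a symmetric input for simplicity (Rayleigh-Ritz on the Krylov subspace of a single random start vector; the test matrix, target rank and iteration count are illustrative choices, and this is not the algorithm or the reduction of the paper):

```python
import numpy as np

def single_vector_krylov_lowrank(A, k, t, seed=0):
    # Rank-k approximation of a symmetric matrix A by Rayleigh-Ritz on the
    # t-dimensional Krylov subspace span{g, Ag, ..., A^(t-1) g} of a single
    # random start vector g (i.e., block size one).
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, t))
    v = rng.standard_normal(n)
    for j in range(t):
        v -= Q[:, :j] @ (Q[:, :j].T @ v)   # full reorthogonalization
        v /= np.linalg.norm(v)
        Q[:, j] = v
        v = A @ v
    T = Q.T @ A @ Q                        # project A onto the Krylov subspace
    w, V = np.linalg.eigh(T)
    idx = np.argsort(np.abs(w))[::-1][:k]  # top-k Ritz pairs by magnitude
    U = Q @ V[:, idx]                      # approximate top-k eigenvectors
    return U @ (U.T @ A)                   # rank-k approximation of A

# Illustrative symmetric test matrix with eigenvalues 1, 1/2, 1/3, ...
rng = np.random.default_rng(3)
n, k, t = 300, 5, 40
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = W @ np.diag(1.0 / np.arange(1, n + 1)) @ W.T

Ak = single_vector_krylov_lowrank(A, k, t)
err = np.linalg.svd(A - Ak, compute_uv=False)[0]
print("single-vector Krylov spectral error:", err)
print("optimal rank-k spectral error      :", 1.0 / (k + 1))
```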