

Title: Approximately Hadamard Matrices and Riesz Bases in Random Frames
Abstract

An $n \times n$ matrix with $\pm 1$ entries that acts on ${\mathbb {R}}^{n}$ as a scaled isometry is called Hadamard. Such matrices exist in some, but not all dimensions. Combining number-theoretic and probabilistic tools, we construct matrices with $\pm 1$ entries that act as approximate scaled isometries in ${\mathbb {R}}^{n}$ for all $n \in {\mathbb {N}}$. More precisely, the matrices we construct have condition numbers bounded by a constant independent of $n$.

Using this construction, we establish a phase transition for the probability that a random frame contains a Riesz basis. Namely, we show that a random frame in ${\mathbb {R}}^{n}$ formed by $N$ vectors with independent, identically distributed coordinates having a nondegenerate symmetric distribution contains many Riesz bases with high probability provided that $N \ge \exp (Cn)$. On the other hand, we prove that if the entries are sub-Gaussian, then a random frame fails to contain a Riesz basis with probability close to $1$ whenever $N \le \exp (cn)$, where $c<C$ are constants depending on the distribution of the entries.
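
To make the statement concrete, here is a minimal numerical sketch (assuming NumPy; this is not the paper's number-theoretic construction). A Sylvester Hadamard matrix in dimension $n = 2^k$ satisfies $H^{\top} H = n I$ and hence has condition number $1$, while a random $\pm 1$ matrix typically has a much larger condition number; the point of the construction in the paper is that a bounded condition number can be achieved in every dimension, not only in dimensions where Hadamard matrices exist.

import numpy as np

def sylvester_hadamard(k):
    """Return the 2^k x 2^k Sylvester Hadamard matrix via Kronecker products."""
    H = np.array([[1.0]])
    block = np.array([[1.0, 1.0], [1.0, -1.0]])
    for _ in range(k):
        H = np.kron(H, block)
    return H

n = 64
H = sylvester_hadamard(6)
print(np.allclose(H.T @ H, n * np.eye(n)))  # True: H / sqrt(n) is an isometry
print(np.linalg.cond(H))                    # 1.0 up to rounding

rng = np.random.default_rng(0)
A = rng.choice([-1.0, 1.0], size=(n, n))    # generic sign matrix for comparison
print(np.linalg.cond(A))                    # typically much larger than 1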

 
NSF-PAR ID:
10490090
Author(s) / Creator(s):
;
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
International Mathematics Research Notices
Volume:
2024
Issue:
3
ISSN:
1073-7928
Format(s):
Medium: X Size: p. 2044-2065
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the high-dimensional linear regression problem, where the algorithmic goal is to efficiently infer an unknown feature vector $\beta^*\in\mathbb{R}^p$ from its linear measurements, using a small number $n$ of samples. Unlike most of the literature, we make no sparsity assumption on $\beta^*$, but instead adopt a different regularization: in the noiseless setting, we assume $\beta^*$ consists of entries that are either rational numbers with a common denominator $Q\in\mathbb{Z}^+$ (referred to as $Q$-rationality), or irrational numbers taking values in a rationally independent set of bounded cardinality known to the learner; collectively, this is called the mixed-range assumption. Using a novel combination of the PSLQ integer relation detection algorithm and the Lenstra-Lenstra-Lov\'asz (LLL) lattice basis reduction algorithm, we propose a polynomial-time algorithm which provably recovers a $\beta^*\in\mathbb{R}^p$ satisfying the mixed-range assumption from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$, for a large class of distributions of the random entries of $X$, even with a single measurement ($n=1$). In the noisy setting, we propose a polynomial-time, lattice-based algorithm which recovers a $\beta^*\in\mathbb{R}^p$ enjoying the $Q$-rationality property from its noisy measurements $Y=X\beta^*+W\in\mathbb{R}^n$, even from a single sample ($n=1$). We further establish that for large $Q$ and normal noise, this algorithm tolerates an information-theoretically optimal level of noise. We then apply these ideas to develop a polynomial-time, single-sample algorithm for the phase retrieval problem. Our methods address the single-sample ($n=1$) regime, where sparsity-based methods such as the Least Absolute Shrinkage and Selection Operator (LASSO) and Basis Pursuit are known to fail. Furthermore, our results reveal algorithmic connections between the high-dimensional linear regression problem and the integer relation detection, randomized subset-sum, and shortest vector problems.
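    As a hedged, brute-force illustration of the identifiability phenomenon above (assuming NumPy; the exhaustive search over all $Q^p$ candidates is exponential and is not the polynomial-time lattice-based algorithm of the paper), a single generic noiseless measurement already determines a $Q$-rational $\beta^*$ for small $p$ and $Q$:

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    p, Q = 4, 5
    beta_true = rng.integers(0, Q, size=p) / Q   # Q-rational ground truth in [0, 1)
    x = rng.standard_normal(p)                   # a single measurement vector (n = 1)
    y = x @ beta_true                            # one noiseless linear measurement

    # Exhaustive search over the Q^p candidate vectors; the true beta* is the
    # unique candidate matching y with probability 1 over the continuous x.
    candidates = itertools.product(range(Q), repeat=p)
    best = min(candidates, key=lambda c: abs(x @ (np.array(c) / Q) - y))
    print(np.array(best) / Q, beta_true)         # the two coincide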
  2. We study the rank of a random $n \times m$ matrix $A_{n,m;k}$ with entries from GF(2), and exactly $k$ unit entries in each column, the other entries being zero. The columns are chosen independently and uniformly at random from the set of all $\binom{n}{k}$ such columns. We obtain an asymptotically correct estimate for the rank as a function of the number of columns $m$ in terms of $c$, $n$, $k$, where $m = cn/k$. The matrix $A_{n,m;k}$ forms the vertex-edge incidence matrix of a $k$-uniform random hypergraph $H$. The rank of $A_{n,m;k}$ can be expressed as follows. Let $|C_2|$ be the number of vertices of the 2-core of $H$, and $|E(C_2)|$ the number of its edges. Let $m^*$ be the value of $m$ for which $|C_2| = |E(C_2)|$. Then w.h.p. for $m < m^*$ the rank of $A_{n,m;k}$ is asymptotic to $m$, and for $m \ge m^*$ the rank is asymptotic to $m - |E(C_2)| + |C_2|$. In addition, assign i.i.d. $U[0,1]$ weights $X_i$, $i \in \{1, 2, \ldots, m\}$, to the columns, and define the weight of a set of columns $S$ as $X(S) = \sum_{j \in S} X_j$. Define a basis as a set of $n-1$ ($k$ even) linearly independent columns. We obtain an asymptotically correct estimate for the minimum weight basis. This generalises the well-known result of Frieze [On the value of a random minimum spanning tree problem, Discrete Applied Mathematics, (1985)] that, for $k = 2$, the expected length of a minimum weight spanning tree tends to $\zeta(3) \approx 1.202$.
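    The rank described above can be probed empirically. The following sketch (assuming NumPy; the parameters $n$, $m$, $k$ are illustrative) samples a matrix with exactly $k$ unit entries per column and computes its rank over GF(2) by Gaussian elimination:

    import numpy as np

    def gf2_rank(A):
        """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
        A = A.copy() % 2
        rank = 0
        n_rows, n_cols = A.shape
        for col in range(n_cols):
            pivots = np.nonzero(A[rank:, col])[0]
            if pivots.size == 0:
                continue
            pivot = rank + pivots[0]
            A[[rank, pivot]] = A[[pivot, rank]]   # move a pivot row into place
            other = np.nonzero(A[:, col])[0]
            other = other[other != rank]
            A[other] ^= A[rank]                   # eliminate the column mod 2
            rank += 1
            if rank == n_rows:
                break
        return rank

    rng = np.random.default_rng(2)
    n, m, k = 100, 80, 3
    A = np.zeros((n, m), dtype=np.int64)
    for j in range(m):
        A[rng.choice(n, size=k, replace=False), j] = 1   # exactly k unit entries per column
    print(gf2_rank(A), "out of", m, "columns")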
  3. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the "white-noise approximation" is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n\log(n))$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.

     
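    A quick numerical sanity check of the white-noise picture (assuming NumPy; the Porter-Thomas-style weights and the dimension are illustrative, and this is not the paper's derivation): mixing an ideal output distribution with the uniform one at fidelity $F$ gives a linear cross-entropy benchmark close to $F$.

    import numpy as np

    rng = np.random.default_rng(3)
    dim = 2**12                                  # outcome space of 12 "qubits"
    p_ideal = rng.exponential(size=dim)          # Porter-Thomas-like weights
    p_ideal /= p_ideal.sum()
    p_unif = np.full(dim, 1.0 / dim)

    for F in (1.0, 0.5, 0.1):
        p_noisy = F * p_ideal + (1 - F) * p_unif
        xeb = dim * (p_noisy @ p_ideal) - 1      # linear cross-entropy benchmark
        print(F, round(xeb, 3))                  # close to F in each case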
  4. Abstract

    We consider linear systems $Ax = b$ where $A \in \mathbb{R}^{m \times n}$ consists of normalized rows, $\|a_i\|_{\ell ^2} = 1$, and where up to $\beta m$ entries of $b$ have been corrupted (possibly by arbitrarily large numbers). Haddock, Needell, Rebrova & Swartworth propose a quantile-based Random Kaczmarz method and show that for certain random matrices $A$ it converges with high likelihood to the true solution. We prove a deterministic version by constructing, for any matrix $A$, a number $\beta _A$ such that there is convergence for all perturbations with $\beta < \beta _A$. Assuming a random matrix heuristic, this proves convergence for tall Gaussian matrices with up to $\sim 0.5\%$ corruption (a number that can likely be improved).

     
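    The sketch below is a minimal quantile-based Kaczmarz loop in the spirit of the method discussed above (assuming NumPy; the quantile level, iteration count, and the use of the full residual vector for the quantile are illustrative choices, not the analyzed variant):

    import numpy as np

    def quantile_kaczmarz(A, b, q=0.7, iters=5000, seed=0):
        """Randomized Kaczmarz that skips rows whose residual exceeds the q-quantile."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(iters):
            residuals = np.abs(A @ x - b)        # full residual vector (illustrative)
            threshold = np.quantile(residuals, q)
            i = rng.integers(m)
            r = A[i] @ x - b[i]
            if abs(r) <= threshold:              # likely uncorrupted row: project onto it
                x = x - r * A[i]
        return x

    rng = np.random.default_rng(4)
    m, n = 2000, 50
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=1, keepdims=True)   # normalized rows, as in the setup
    x_true = rng.standard_normal(n)
    b = A @ x_true
    bad = rng.choice(m, size=5, replace=False)
    b[bad] += 100.0                                 # a few arbitrarily large corruptions
    print(np.linalg.norm(quantile_kaczmarz(A, b) - x_true))   # small: corruption is ignored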
  5. Let $M_n$ be drawn uniformly from all $\pm 1$ symmetric $n \times n$ matrices. We show that the probability that $M_n$ is singular is at most $\exp(-c(n\log n)^{1/2})$, which represents a natural barrier in recent approaches to this problem. In addition to improving on the best-known previous bound of Campos, Mattos, Morris and Morrison of $\exp(-cn^{1/2})$ on the singularity probability, our method is different and considerably simpler: we prove a “rough” inverse Littlewood-Offord theorem by a simple combinatorial iteration.
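    For small $n$, the singularity probability can be estimated directly by Monte Carlo. A hedged sketch (assuming NumPy; the exact integer determinant test is replaced by thresholding the floating-point determinant, which is safe at these sizes):

    import numpy as np

    rng = np.random.default_rng(5)

    def random_pm1_symmetric(n):
        """Uniform random symmetric matrix with +/-1 entries (diagonal included)."""
        U = rng.choice([-1, 1], size=(n, n))
        return np.triu(U) + np.triu(U, 1).T

    for n in (4, 8, 12):
        trials = 20000
        # The determinant of an integer matrix is an integer, so |det| < 0.5 means singular.
        singular = sum(
            abs(np.linalg.det(random_pm1_symmetric(n))) < 0.5 for _ in range(trials)
        )
        print(n, singular / trials)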