 Award ID(s):
 1800251
 NSFPAR ID:
 10315021
 Date Published:
 Journal Name:
 Proceedings of the American Mathematical Society
 Volume:
 149
 Issue:
 748
 ISSN:
0002-9939
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

An \ell _p oblivious subspace embedding is a distribution over r \times n matrices \Pi such that for any fixed n \times d matrix A , \[ \Pr _{\Pi }[\textrm {for all }x, \ \Vert Ax\Vert _p \le \Vert \Pi Ax\Vert _p \le \kappa \Vert Ax\Vert _p] \ge 9/10,\] where r is the dimension of the embedding, \kappa is the distortion of the embedding, and, for an n -dimensional vector y , \Vert y\Vert _p = (\sum _{i=1}^n |y_i|^p)^{1/p} is the \ell _p norm. Another important property is the sparsity of \Pi , that is, the maximum number of nonzero entries per column, as this determines the running time of computing \Pi A . While for p = 2 there are nearly optimal tradeoffs in terms of the dimension, distortion, and sparsity, for the important case of 1 \le p < 2 much less was known. In this article, we obtain nearly optimal tradeoffs for \ell _1 oblivious subspace embeddings, as well as new tradeoffs for 1 < p < 2 . Our main results are as follows: (1) We show that for every 1 \le p < 2 , any oblivious subspace embedding with dimension r has distortion \[ \kappa = \Omega \left(\frac{1}{\left(\frac{1}{d}\right)^{1 / p} \log ^{2 / p}r + \left(\frac{r}{n}\right)^{1 / p - 1 / 2}}\right).\] When r = {\operatorname{poly}}(d) \ll n , as in applications, this gives a \kappa = \Omega (d^{1/p}/\log ^{2/p} d) lower bound, and shows that the oblivious subspace embedding of Sohler and Woodruff (STOC, 2011) for p = 1 is optimal up to {\operatorname{poly}}(\log d) factors. (2) We give sparse oblivious subspace embeddings for every 1 \le p < 2 . Importantly, for p = 1 , we achieve r = O(d \log d) , \kappa = O(d \log d) , and s = O(\log d) nonzero entries per column.
The best previous construction with s \le {\operatorname{poly}}(\log d) is due to Woodruff and Zhang (COLT, 2013), giving \kappa = \Omega (d^2 {\operatorname{poly}}(\log d)) or \kappa = \Omega (d^{3/2} \sqrt {\log n} \cdot {\operatorname{poly}}(\log d)) and r \ge d \cdot {\operatorname{poly}}(\log d) ; in contrast, our r = O(d \log d) and \kappa = O(d \log d) are optimal up to {\operatorname{poly}}(\log d) factors even for dense matrices. We also give (1) \ell _p oblivious subspace embeddings with an expected 1+\varepsilon number of nonzero entries per column for arbitrarily small \varepsilon > 0 , and (2) the first oblivious subspace embeddings for 1 \le p < 2 with O(1) distortion and dimension independent of n . Oblivious subspace embeddings are crucial for distributed and streaming environments, as well as entrywise \ell _p low-rank approximation. Our results give improved algorithms for these applications.
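To make the sparsity parameter s concrete, the following sketch builds a CountSketch-style matrix with exactly s nonzero entries per column and checks the distortion of one sketched vector empirically. This is an illustrative, generic sparse embedding (with \ell_2-style \pm 1/\sqrt{s} entries), not the paper's \ell_p construction; all parameter choices and names are assumptions for the demonstration.

```python
import numpy as np

def sparse_embedding(r, n, s, rng):
    """Illustrative sparse embedding: each column has exactly s nonzero
    entries at uniformly random rows, with random signs scaled by 1/sqrt(s).
    (A CountSketch-style matrix, NOT the paper's l_p construction.)"""
    Pi = np.zeros((r, n))
    for j in range(n):
        rows = rng.choice(r, size=s, replace=False)
        Pi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Pi

rng = np.random.default_rng(0)
n, d, r, s = 1000, 5, 60, 4
A = rng.standard_normal((n, d))
Pi = sparse_embedding(r, n, s, rng)

# Because each column of Pi has only s nonzeros, forming Pi @ A costs
# O(s * nnz(A)) rather than O(r * nnz(A)) -- the point of sparsity.
x = rng.standard_normal(d)
ratio = np.linalg.norm(Pi @ A @ x) / np.linalg.norm(A @ x)
```

Here `ratio` plays the role of the distortion for a single direction x; an actual subspace embedding must control it simultaneously for all x.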

Abstract We rigorously justify the mean-field limit of an N-particle system subject to Brownian motions and interacting through the Newtonian potential in $${\mathbb {R}}^3$$. Our result leads to a derivation of the Vlasov–Poisson–Fokker–Planck (VPFP) equations from the regularized microscopic N-particle system. More precisely, we show that the maximal distance between the exact microscopic trajectories and the mean-field trajectories is bounded by $$N^{-\frac{1}{3}+\varepsilon }$$ ( $$\frac{1}{63}\le \varepsilon <\frac{1}{36}$$ ) with a blob size of $$N^{-\delta }$$ ( $$\frac{1}{3}\le \delta <\frac{19}{54}-\frac{2\varepsilon }{3}$$ ) up to a probability of $$1-N^{-\alpha }$$ for any $$\alpha >0$$. Moreover, we prove the convergence rate between the empirical measure associated to the regularized particle system and the solution of the VPFP equations. The technical novelty of this paper is that our estimates rely on the randomness coming from the initial data and from the Brownian motions.
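The regularized N-particle system underlying this limit can be sketched numerically. Below is a toy Euler–Maruyama discretization of second-order Langevin dynamics with a mollified Newtonian force in $${\mathbb {R}}^3$$; the function name, the particular blob size, the time step, and the Gaussian initial data are all illustrative assumptions, not the paper's setup or proof apparatus.

```python
import numpy as np

def simulate(N=100, steps=50, dt=1e-3, seed=0):
    """Toy Euler-Maruyama run of a regularized N-particle system:
    dX_i = V_i dt,
    dV_i = (1/N) sum_j F_delta(X_i - X_j) dt + sqrt(2) dB_i,
    where F_delta(x) ~ x / (|x|^2 + delta^2)^{3/2} is a mollified
    Newtonian force with blob size delta (illustrative choice)."""
    rng = np.random.default_rng(seed)
    delta = N ** (-1.0 / 3.0)  # a blob size at the small end of the stated range
    X = rng.standard_normal((N, 3))  # positions
    V = rng.standard_normal((N, 3))  # velocities
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]        # pairwise differences
        r2 = (diff ** 2).sum(-1) + delta ** 2        # regularized squared distance
        F = (diff / r2[..., None] ** 1.5).sum(axis=1) / N  # mean-field force
        V = V + F * dt + np.sqrt(2 * dt) * rng.standard_normal((N, 3))
        X = X + V * dt
    return X, V

X, V = simulate()
```

Note the noise enters the velocity equation, matching the kinetic (Fokker–Planck) structure of VPFP; the i = j term contributes zero force because diff vanishes there.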

Abstract Singmaster’s conjecture asserts that every natural number greater than one occurs at most a bounded number of times in Pascal’s triangle; that is, for any natural number $t \geq 2$, the number of solutions to the equation $\binom{n}{m} = t$ for natural numbers $1 \leq m < n$ is bounded. In this paper we establish this result in the interior region $\exp(\log^{2/3+\varepsilon} n) \leq m \leq n - \exp(\log^{2/3+\varepsilon} n)$ for any fixed $\varepsilon > 0$. Indeed, when $t$ is sufficiently large depending on $\varepsilon$, we show that there are at most four solutions (or at most two in either half of Pascal’s triangle) in this region. We also establish analogous results for the equation $(n)_m = t$, where $(n)_m := n(n-1) \dots (n-m+1)$ denotes the falling factorial.
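The quantity the conjecture bounds is easy to compute exactly for small inputs. The sketch below counts solutions to $\binom{n}{m} = t$ by scanning each row only up to its midpoint (binomial coefficients increase toward the middle, so the scan can stop early); the function name and the search cutoff `n_max` are illustrative choices, not from the paper.

```python
from math import comb

def multiplicity(t, n_max):
    """Count solutions to C(n, m) = t with 1 <= m < n and n <= n_max.
    Every t >= 2 occurs at least twice, as C(t, 1) and C(t, t-1)."""
    count = 0
    for n in range(2, n_max + 1):
        # within a row, C(n, m) increases for m up to n // 2,
        # so stop scanning once the value exceeds t
        for m in range(1, n // 2 + 1):
            c = comb(n, m)
            if c > t:
                break
            if c == t:
                # also count the mirror entry C(n, n - m), unless m is the center
                count += 1 if 2 * m == n else 2
    return count

# 3003 = C(3003,1) = C(78,2) = C(15,5) = C(14,6): with mirror images,
# it is the only number known to appear eight times in Pascal's triangle.
m8 = multiplicity(3003, 3003)
```

Since $\binom{n}{1} = n$, no row beyond $n = t$ can contain $t$, so `n_max = t` suffices for an exact count.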

Abstract We show that the energy conditions are not necessary for boundedness of Riesz transforms in dimension $n\geq 2$. In dimension $n=1$, we construct an elliptic singular integral operator $H_{\flat } $ for which the energy conditions are not necessary for boundedness of $H_{\flat }$. The convolution kernel $K_{\flat }\left ( x\right ) $ of the operator $H_{\flat }$ is a smooth flattened version of the Hilbert transform kernel $K\left ( x\right ) =\frac{1}{x}$ that satisfies ellipticity $ \vert K_{\flat }\left ( x\right ) \vert \gtrsim \frac{1}{\left \vert x\right \vert }$, but not gradient ellipticity $ \vert K_{\flat }^{\prime }\left ( x\right ) \vert \gtrsim \frac{1}{ \vert x \vert ^{2}}$. Indeed, the kernel has flat spots where $K_{\flat }^{\prime }\left ( x\right ) =0$ on a family of intervals, but $K_{\flat }^{\prime }\left ( x\right ) $ is otherwise negative on $\mathbb{R}\setminus \left \{ 0\right \} $. On the other hand, if a one-dimensional kernel $K\left ( x,y\right ) $ is both elliptic and gradient elliptic, then the energy conditions are necessary, and so by our theorem in [30], the $T1$ theorem holds for such kernels on the line. This paper includes results from arXiv:1607.06071v3 and arXiv:1801.03706v2.

The classic graphical Cheeger inequalities state that if $M$ is an $n\times n$ \emph{symmetric} doubly stochastic matrix, then \[ \frac{1-\lambda_{2}(M)}{2}\leq\phi(M)\leq\sqrt{2\cdot(1-\lambda_{2}(M))} \] where $\phi(M)=\min_{S\subseteq[n],|S|\leq n/2}\left(\frac{1}{|S|}\sum_{i\in S,j\not\in S}M_{i,j}\right)$ is the edge expansion of $M$, and $\lambda_{2}(M)$ is the second largest eigenvalue of $M$. We study the relationship between $\phi(A)$ and the spectral gap $1-\re\lambda_{2}(A)$ for \emph{any} doubly stochastic matrix $A$ (not necessarily symmetric), where $\lambda_{2}(A)$ is a nontrivial eigenvalue of $A$ with maximum real part. Fiedler showed that the upper bound on $\phi(A)$ is unaffected, i.e., $\phi(A)\leq\sqrt{2\cdot(1-\re\lambda_{2}(A))}$. With regards to the lower bound on $\phi(A)$, there are known constructions with \[ \phi(A)\in\Theta\left(\frac{1-\re\lambda_{2}(A)}{\log n}\right), \] indicating that at least a mild dependence on $n$ is necessary to lower bound $\phi(A)$. In our first result, we provide an \emph{exponentially} better construction of $n\times n$ doubly stochastic matrices $A_{n}$, for which \[ \phi(A_{n})\leq\frac{1-\re\lambda_{2}(A_{n})}{\sqrt{n}}. \] In fact, \emph{all} nontrivial eigenvalues of our matrices are $0$, even though the matrices are highly \emph{non-expanding}. We further show that this bound is in the correct range (up to the exponent of $n$), by showing that for any doubly stochastic matrix $A$, \[ \phi(A)\geq\frac{1-\re\lambda_{2}(A)}{35\cdot n}. \] As a consequence, unlike the symmetric case, there is a (necessary) loss of a factor of $n^{\alpha}$ for $\frac{1}{2}\leq\alpha\leq1$ in lower bounding $\phi$ by the spectral gap in the non-symmetric setting. Our second result extends these bounds to general matrices $R$ with nonnegative entries, to obtain a two-sided \emph{gapped} refinement of the Perron–Frobenius theorem.
Recall from the Perron–Frobenius theorem that for such $R$, there is a nonnegative eigenvalue $r$ such that all eigenvalues of $R$ lie within the closed disk of radius $r$ about $0$. Further, if $R$ is irreducible, which means $\phi(R)>0$ (for suitably defined $\phi$), then $r$ is positive and all other eigenvalues lie within the \textit{open} disk, so (with eigenvalues sorted by real part) $\re\lambda_{2}(R)<r$.
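Both sides of the classic Cheeger inequalities can be checked numerically on a small symmetric example. The sketch below computes the edge expansion $\phi(M)$ by brute force over subsets and the spectral gap $1-\lambda_2(M)$ for a lazy random walk on a 6-cycle; the matrix choice and function names are illustrative, and the brute-force subset search is only feasible for tiny $n$.

```python
import itertools
import numpy as np

def edge_expansion(M):
    """phi(M): minimum over S with |S| <= n/2 of (1/|S|) * sum_{i in S, j not in S} M[i, j]."""
    n = M.shape[0]
    best = np.inf
    for k in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), k):
            Sc = [j for j in range(n) if j not in S]
            best = min(best, M[np.ix_(S, Sc)].sum() / k)
    return best

def spectral_gap(M):
    """1 - Re(lambda_2), where lambda_2 is a nontrivial eigenvalue of maximum real part."""
    eig = np.linalg.eigvals(M)
    # drop one copy of the trivial eigenvalue 1 (eigenvector: all-ones)
    eig = np.delete(eig, np.argmin(np.abs(eig - 1.0)))
    return 1.0 - max(eig.real)

# symmetric doubly stochastic example: lazy random walk on a 6-cycle
n = 6
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = 0.5
    M[i, (i + 1) % n] += 0.25
    M[i, (i - 1) % n] += 0.25

phi, gap = edge_expansion(M), spectral_gap(M)
# classic Cheeger inequalities: gap / 2 <= phi <= sqrt(2 * gap)
```

For this matrix the minimizing set is three consecutive vertices, giving $\phi = 1/6$, while the gap is $1/4$, so both inequalities hold with room to spare; the article's point is that for non-symmetric doubly stochastic matrices the lower bound degrades by a necessary factor polynomial in $n$.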