We study the $$\ell_p$$ regression problem, which requires finding $$\mathbf{x}\in\mathbb R^{d}$$ that minimizes $$\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_p$$ for a matrix $$\mathbf{A}\in\mathbb R^{n \times d}$$ and response vector $$\mathbf{b}\in\mathbb R^{n}$$. There has been recent interest in developing subsampling methods for this problem that can outperform standard techniques when $$n$$ is very large. However, all known subsampling approaches have running time that depends exponentially on $$p$$, typically $$d^{\mathcal{O}(p)}$$, which can be prohibitively expensive. We improve on this work by showing that for a large class of common \emph{structured matrices}, such as combinations of low-rank matrices, sparse matrices, and Vandermonde matrices, there are subsampling-based methods for $$\ell_p$$ regression that depend polynomially on $$p$$. For example, we give an algorithm for $$\ell_p$$ regression on Vandermonde matrices that runs in time $$\mathcal{O}(n\log^3 n+(dp^2)^{0.5+\omega}\cdot\text{polylog}\,n)$$, where $$\omega$$ is the exponent of matrix multiplication. The polynomial dependence on $$p$$ crucially allows our algorithms to extend naturally to efficient algorithms for $$\ell_\infty$$ regression, via approximation of $$\ell_\infty$$ by $$\ell_{\mathcal{O}(\log n)}$$. Of practical interest, we also develop a new subsampling algorithm for $$\ell_p$$ regression for arbitrary matrices, which is simpler than previous approaches for $$p \ge 4$$.
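The approximation of $$\ell_\infty$$ by $$\ell_{\mathcal{O}(\log n)}$$ mentioned above rests on the standard norm inequality $$\|v\|_\infty \le \|v\|_p \le n^{1/p}\|v\|_\infty$$, so $$p = \ln n$$ gives a factor-$$e$$ approximation. A minimal numerical check of this inequality (illustrative only, not the paper's algorithm):

```python
import numpy as np

# For any v in R^n:  ||v||_inf <= ||v||_p <= n^(1/p) * ||v||_inf.
# Taking p = ln(n) makes n^(1/p) = e, so the l_p norm approximates
# the l_inf norm to within a factor e.
rng = np.random.default_rng(0)
n = 10_000
v = rng.standard_normal(n)

p = np.log(n)                                  # p = O(log n)
norm_p = np.sum(np.abs(v) ** p) ** (1.0 / p)   # l_p norm
norm_inf = np.max(np.abs(v))                   # l_inf norm

assert norm_inf <= norm_p <= np.e * norm_inf
```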
Corrigendum to “On two conjectures concerning convex curves” by V. Sedykh and B. Shapiro
As was pointed out by S. Karp, Theorem B of the paper [V. Sedykh and B. Shapiro, On two conjectures concerning convex curves, Int. J. Math. 16(10) (2005) 1157–1173] is wrong. Its claim is based on an erroneous example obtained by multiplication of three concrete totally positive 4 × 4 upper-triangular matrices, but the order in which these matrices were multiplied to produce the example was not the correct one. Below we present the correct statement, which claims the opposite of Theorem B. Its proof can essentially be found in a recent paper [N. Arkani-Hamed, T. Lam and M. Spradlin, Non-perturbative geometries for planar N = 4 SYM amplitudes, J. High Energy Phys. 2021 (2021) 65].
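The slip described in the corrigendum, that the order of matrix multiplication matters, is easy to see concretely. The matrices below are NOT the ones from the original paper; they are small hypothetical upper-bidiagonal factors (which are totally nonnegative when the off-diagonal parameter is positive) chosen only to show that the two orders of multiplication genuinely differ:

```python
import numpy as np

def elementary(i, t):
    """Upper bidiagonal 4x4 factor I + t*E_{i,i+1} (hypothetical example,
    not from the original paper)."""
    E = np.eye(4)
    E[i, i + 1] = t
    return E

# Two upper-triangular products built from such factors
A = elementary(0, 1.0) @ elementary(1, 2.0)
B = elementary(2, 3.0)

# Both orders are upper triangular, but they are different matrices:
assert np.allclose(A, np.triu(A))
assert not np.allclose(A @ B, B @ A)
```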
- Award ID(s):
- 2100791
- PAR ID:
- 10630775
- Publisher / Repository:
- World Scientific Publishing Company
- Date Published:
- Journal Name:
- International Journal of Mathematics
- ISSN:
- 1793-6519
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
In studying the “11/8-Conjecture” on the Geography Problem in 4-dimensional topology, Furuta proposed a question on the existence of $$\operatorname{Pin}(2)$$-equivariant stable maps between certain representation spheres. A precise answer to Furuta’s problem was later conjectured by Jones. In this paper, we completely resolve Jones’ conjecture by analyzing the $$\operatorname{Pin}(2)$$-equivariant Mahowald invariants. As a geometric application of our result, we prove a “10/8+4”-Theorem. We prove our theorem by analyzing maps between certain finite spectra arising from $$B\operatorname{Pin}(2)$$ and various Thom spectra associated with it. To analyze these maps, we use the technique of cell diagrams, known results on the stable homotopy groups of spheres, and the $$j$$-based Atiyah–Hirzebruch spectral sequence.
Can linear systems be solved faster than matrix multiplication? While there has been remarkable progress for the special case of graph-structured linear systems, in the general setting the bit complexity of solving an $$n \times n$$ linear system $$Ax=b$$ is $$\tilde{O}(n^\omega)$$, where $$\omega < 2.372864$$ is the matrix multiplication exponent. Improving on this has been an open problem even for sparse linear systems with $$\text{poly}(n)$$ condition number. In this paper, we present an algorithm that solves linear systems with sparse matrices asymptotically faster than matrix multiplication for any $$\omega > 2$$. This speedup holds for any input matrix $$A$$ with $$o(n^{\omega-1}/\log(\kappa(A)))$$ non-zeros, where $$\kappa(A)$$ is the condition number of $$A$$. For $$\text{poly}(n)$$-conditioned matrices with $$\tilde{O}(n)$$ non-zeros, and the current value of $$\omega$$, the bit complexity of our algorithm to solve to within any $$1/\text{poly}(n)$$ error is $$O(n^{2.331645})$$. Our algorithm can be viewed as an efficient, randomized implementation of the block Krylov method via recursive low displacement rank factorizations. It is inspired by the algorithm of [Eberly et al., ISSAC '06, '07] for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to bound the smallest eigenvalue and the smallest gap in eigenvalues of semi-random matrices.
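The algorithm above is a randomized block Krylov method; as background, a minimal (non-block) Krylov-subspace solver, conjugate gradient, already illustrates the general idea of building the solution from products $$A p$$ with a sparse matrix. This is a sketch of the Krylov framework only, not the paper's algorithm:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal Krylov-subspace solver for symmetric positive definite A.
    A sketch of the general Krylov idea; NOT the randomized block method
    of the paper above."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                   # the only use of A: one product/iter
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # conjugate direction update
        rs = rs_new
    return x

# A sparse (tridiagonal) SPD test system
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
assert np.linalg.norm(A @ x - b) < 1e-8
```

Each iteration touches $$A$$ only through a matrix-vector product, which costs time proportional to the number of non-zeros; this is the structural property the sparse-solver speedup exploits.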
The classical Aronszajn–Donoghue theorem states that for a rank-one perturbation of a self-adjoint operator (by a cyclic vector) the singular parts of the spectral measures of the original and perturbed operators are mutually singular. As simple direct-sum-type examples show, this result does not hold for finite rank perturbations. However, the set of exceptional perturbations is quite small. Namely, for a family of rank $$d$$ perturbations $$A_{\boldsymbol{\alpha}} := A + {\textbf{B}}{\boldsymbol{\alpha}}{\textbf{B}}^*$$, $${\textbf{B}}: {\mathbb C}^d \to {\mathcal{H}}$$, with $${\operatorname{Ran}}{\textbf{B}}$$ being cyclic for $$A$$, parametrized by $$d \times d$$ Hermitian matrices $${\boldsymbol{\alpha}}$$, the singular parts of the spectral measures of $$A$$ and $$A_{\boldsymbol{\alpha}}$$ are mutually singular for all $${\boldsymbol{\alpha}}$$ except for a small exceptional set $$E$$. It was shown earlier by the first two authors, see [4], that $$E$$ is a subset of measure zero of the space $$\textbf{H}(d)$$ of $$d \times d$$ Hermitian matrices. In this paper, we show that the set $$E$$ has small Hausdorff dimension: $$\dim E \le \dim \textbf{H}(d) - 1 = d^2 - 1$$.
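The perturbation family $$A_{\boldsymbol{\alpha}} = A + \textbf{B}\boldsymbol{\alpha}\textbf{B}^*$$ can be sketched in finite dimensions; the small $$A$$, $$\textbf{B}$$, and $$\boldsymbol{\alpha}$$ below are hypothetical examples (real-symmetric for simplicity), showing only that each Hermitian parameter yields a self-adjoint perturbation of rank at most $$d$$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 2
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                    # self-adjoint A (real symmetric here)
B = rng.standard_normal((n, d))      # B : C^d -> H (hypothetical, real n x d)
alpha = np.array([[1.0, 0.5],
                  [0.5, -2.0]])      # a Hermitian d x d parameter
A_alpha = A + B @ alpha @ B.T        # the family A_alpha = A + B alpha B*

assert np.allclose(A_alpha, A_alpha.T)          # A_alpha is self-adjoint
assert np.linalg.matrix_rank(A_alpha - A) <= d  # perturbation rank <= d
```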
We prove lower bounds on the time and space required for quantum computers to solve a wide variety of problems involving matrices, many of which have only been analyzed classically in prior work. Using a novel way of applying recording query methods, we show that for many linear algebra problems (including matrix-vector product, matrix inversion, matrix multiplication, and powering), existing classical time-space tradeoffs also apply to quantum algorithms with at most a constant factor loss. For example, for almost all fixed matrices A, including the discrete Fourier transform (DFT) matrix, we prove that quantum circuits with at most T input queries and S qubits of memory require T = Ω(n^2/S) to compute the matrix-vector product Ax for x ∈ {0,1}^n. We similarly prove that matrix multiplication for n × n binary matrices requires T = Ω(n^3/√S). Because many of our lower bounds are matched by deterministic algorithms with the same time and space complexity, our results show that quantum computers cannot provide any asymptotic advantage for these problems at any space bound. We also improve the previous quantum time-space tradeoff lower bound for n × n Boolean (i.e., AND-OR) matrix multiplication from T = Ω(n^2.5/S^0.5) to T = Ω(n^2.5/S^0.25), which has optimal exponents for the powerful query algorithms to which it applies. Our method also yields improved lower bounds for classical algorithms.
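The classical upper bound matching T = Ω(n^2/S) for matrix-vector product is easy to exhibit: computing Ax row by row reads every entry of A exactly once (T = n^2 queries) while holding only the output vector in memory. A small query-counting sketch, with a hypothetical `CountingMatrix` helper, illustrates this baseline:

```python
import numpy as np

class CountingMatrix:
    """Wraps a 0/1 matrix and counts entry queries (hypothetical helper,
    only for illustrating the query model)."""
    def __init__(self, A):
        self.A = A
        self.queries = 0
    def get(self, i, j):
        self.queries += 1
        return self.A[i, j]

n = 32
rng = np.random.default_rng(2)
A = CountingMatrix(rng.integers(0, 2, size=(n, n)))
x = rng.integers(0, 2, size=n)

y = np.zeros(n, dtype=int)
for i in range(n):                  # one row at a time: O(n) working memory
    y[i] = sum(A.get(i, j) * x[j] for j in range(n))

assert A.queries == n * n           # exactly n^2 input queries
assert np.array_equal(y, A.A @ x)   # matches the direct product
```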