Determining the asymptotic algebraic complexity of matrix multiplication, succinctly represented by the matrix multiplication exponent ω, is a central problem in algebraic complexity theory. The best upper bounds on ω, leading to the state-of-the-art ω ≤ 2.37…, have been obtained via the laser method of Strassen and its generalization by Coppersmith and Winograd. Recent barrier results show limitations for these and related approaches to improve the upper bound on ω. We introduce a new and more general barrier, providing stronger limitations than those in previous work. Concretely, we introduce the notion of "irreversibility" of a tensor, and we prove (in a precise sense) that any approach that uses an irreversible tensor in an intermediate step (e.g., as a starting tensor in the laser method) cannot give ω = 2. In quantitative terms, we prove that the best upper bound achievable is lower bounded by twice the irreversibility of the intermediate tensor. The quantum functionals and the Strassen support functionals give the best lower bounds on irreversibility known so far. We provide lower bounds on the irreversibility of key intermediate tensors, including the small and big Coppersmith–Winograd tensors, that improve the limitations shown in previous work. Finally, we discuss barriers on the group-theoretic approach in terms of "monomial" irreversibility.
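Schematically (a sketch, not the paper's exact formalism; the notation below uses the standard asymptotic rank and asymptotic subrank), the irreversibility of a tensor t and the resulting barrier can be written as:

```latex
% Schematic form of the irreversibility barrier (sketch; requires amsmath for \text).
% Notation: \tilde{R} = asymptotic rank, \tilde{Q} = asymptotic subrank;
% see the paper for the precise definitions and hypotheses.
\[
  i(t) \;=\; \frac{\log \tilde{R}(t)}{\log \tilde{Q}(t)},
  \qquad
  \text{any upper bound on } \omega \text{ obtained via the intermediate tensor } t
  \;\ge\; 2\, i(t).
\]
```

As a sanity check, for the matrix multiplication tensor ⟨n,n,n⟩ itself one has asymptotic rank n^ω by definition and asymptotic subrank n^2 (Strassen), so 2·i(⟨n,n,n⟩) recovers ω.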
Barriers for Rectangular Matrix Multiplication
We study the algorithmic problem of multiplying large rectangular matrices. We prove that the method that has been used to construct the fastest algorithms for rectangular matrix multiplication cannot give optimal algorithms. In fact, we prove a precise numerical barrier for this method. Our barrier improves the previously known barriers, both numerically and in its generality. We prove our result using the asymptotic spectrum of tensors. More precisely, we crucially make use of two families of real tensor parameters with special algebraic properties: the quantum functionals and the support functionals. In particular, we prove that any lower bound on the dual exponent of matrix multiplication α via the big Coppersmith–Winograd tensors cannot exceed 0.625.
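For reference, and as a sketch using the standard definition rather than anything specific to this paper, the dual exponent α can be written as:

```latex
% Standard definition of the dual exponent alpha (reference sketch):
% omega(1, a, 1) is the exponent of multiplying an n x n^a matrix by an
% n^a x n matrix; alpha is the largest a for which this cost is n^{2 + o(1)}.
\[
  \alpha \;=\; \sup \{\, a \ge 0 \;:\; \omega(1, a, 1) = 2 \,\}.
\]
```

In these terms, the barrier above says that the big Coppersmith–Winograd tensors cannot be used, via this method, to certify any α > 0.625.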
- Award ID(s): 1900460
- NSF-PAR ID: 10169275
- Date Published:
- Journal Name: Electronic Colloquium on Computational Complexity
- Volume: 30
- ISSN: 1433-8092
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The main contribution of this paper is a new improved variant of the laser method for designing matrix multiplication algorithms. Building upon the recent techniques of [Duan, Wu, Zhou, FOCS 2023], the new method introduces several new ingredients that not only yield an improved bound on the matrix multiplication exponent ω, but also improve the known bounds on rectangular matrix multiplication by [Le Gall and Urrutia, SODA 2018]. In particular, the new bound on ω is ω ≤ 2.371552 (improved from ω ≤ 2.371866). For the dual matrix multiplication exponent α, defined as the largest α for which ω(1, α, 1) = 2, we obtain the new bound α ≥ 0.321334 (improved from α ≥ 0.31389). Similar improvements are obtained for various other exponents for multiplying rectangular matrices.
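As a purely illustrative calculation (the sample size below is arbitrary and not from the paper), one can compare the previous and new exponent bounds numerically:

```python
# Illustrative comparison of the previous and new upper bounds on omega
# and lower bounds on alpha (sample n is arbitrary; constant factors and
# lower-order terms are ignored).
old_omega, new_omega = 2.371866, 2.371552
old_alpha, new_alpha = 0.31389, 0.321334

n = 10**4
print(f"improvement in omega bound: {old_omega - new_omega:.6f}")
print(f"improvement in alpha bound: {new_alpha - old_alpha:.6f}")
# Multiplicative gap between n^old_omega and n^new_omega at this sample n:
print(f"n^omega ratio at n={n}: {n ** (old_omega - new_omega):.4f}")
```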
We prove lower bounds on the time and space required for quantum computers to solve a wide variety of problems involving matrices, many of which have only been analyzed classically in prior work. Using a novel way of applying recording query methods, we show that for many linear algebra problems—including matrix-vector product, matrix inversion, matrix multiplication and powering—existing classical time-space tradeoffs also apply to quantum algorithms with at most a constant factor loss. For example, for almost all fixed matrices A, including the discrete Fourier transform (DFT) matrix, we prove that quantum circuits with at most T input queries and S qubits of memory require T=Ω(n^2/S) to compute matrix-vector product Ax for x ∈ {0,1}^n. We similarly prove that matrix multiplication for n×n binary matrices requires T=Ω(n^3/√S). Because many of our lower bounds are matched by deterministic algorithms with the same time and space complexity, our results show that quantum computers cannot provide any asymptotic advantage for these problems at any space bound. We also improve the previous quantum time-space tradeoff lower bounds for n×n Boolean (i.e. AND-OR) matrix multiplication from T=Ω(n^2.5/S^0.5) to T=Ω(n^2.5/S^0.25), which has optimal exponents for the powerful query algorithms to which it applies. Our method also yields improved lower bounds for classical algorithms.
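As a rough, hypothetical illustration of these tradeoffs (the choices of n and S below are ours, and constant factors hidden by Ω(·) are ignored), the stated lower bounds can be evaluated at sample space budgets:

```python
# Evaluate the quoted time-space tradeoff lower bounds at sample parameters
# (choices of n and S are illustrative only; Omega(.) hides constant factors).

def matvec_time_lb(n, S):
    # T = Omega(n^2 / S) for matrix-vector product Ax, x in {0,1}^n
    return n**2 / S

def matmul_time_lb(n, S):
    # T = Omega(n^3 / sqrt(S)) for n x n binary matrix multiplication
    return n**3 / S**0.5

n = 2**10
for S in (2**5, 2**10, 2**20):
    print(f"S = {S:>8}: matvec T = Omega({matvec_time_lb(n, S):.3g}), "
          f"matmul T = Omega({matmul_time_lb(n, S):.3g})")
```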
Motivated by problems in algebraic complexity theory (e.g., matrix multiplication) and extremal combinatorics (e.g., the cap set problem and the sunflower problem), we introduce the geometric rank as a new tool in the study of tensors and hypergraphs. We prove that the geometric rank is an upper bound on the subrank of tensors and the independence number of hypergraphs. We prove that the geometric rank is smaller than the slice rank of Tao, and relate the geometric rank to the analytic rank of Gowers and Wolf in an asymptotic fashion. As a first application, we use geometric rank to prove a tight upper bound on the (border) subrank of the matrix multiplication tensors, matching Strassen's well-known lower bound from 1987.
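In symbols (notation chosen here for the sketch: Q for subrank, GR for geometric rank, SR for slice rank, and α(H) for the independence number of a hypergraph H with associated tensor T_H), the relations stated above read:

```latex
% Symbolic sketch of the relations stated in the abstract (notation ours):
% subrank <= geometric rank <= slice rank, and geometric rank upper-bounds
% the independence number of the associated hypergraph.
\[
  \mathrm{Q}(T) \;\le\; \mathrm{GR}(T) \;\le\; \mathrm{SR}(T),
  \qquad
  \alpha(H) \;\le\; \mathrm{GR}(T_H).
\]
```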
In the light bulb problem, one is given uniformly random vectors x_1, …, x_n, y_1, …, y_n ∈ {−1,1}^d. They are all chosen independently, except that a planted pair (x_{i*}, y_{j*}) is chosen with correlation ρ > 0. The goal is to find the planted pair. This problem was introduced over 30 years ago by L. Valiant, and is known to have many applications in data analysis, statistics, and learning theory. The naive algorithm runs in Ω(n^2) time, and algorithms based on Locality-Sensitive Hashing approach quadratic time as ρ → 0. In 2012, G. Valiant gave a breakthrough algorithm using fast matrix multiplication that runs in time O(n^((5−ω)/(4−ω))) no matter how small ρ > 0 is. This was subsequently refined by Karppa, Kaski, and Kohonen in 2016 to O(n^(2ω/3)). We also introduce a new tensor T2112, which has the same size as the 2×2 matrix multiplication tensor but yields a faster algorithm for the light bulb problem than Strassen's algorithm.
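To make these exponents concrete, here is a small illustrative calculation (the value of ω is taken from the upper bound quoted earlier on this page; purely for intuition):

```python
# Evaluate the light bulb running-time exponents for a given omega.
# The omega value below is the upper bound quoted earlier on this page.
omega = 2.371552

valiant_exp = (5 - omega) / (4 - omega)   # exponent of G. Valiant's algorithm
kkk_exp = 2 * omega / 3                   # Karppa-Kaski-Kohonen refinement

print(f"Valiant (2012):               n^{valiant_exp:.4f}")
print(f"Karppa-Kaski-Kohonen (2016):  n^{kkk_exp:.4f}")
```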