
Title: Reconstruction algorithms for low-rank tensors and depth-3 multilinear circuits
We give new and efficient black-box reconstruction algorithms for some classes of depth-3 arithmetic circuits. As a consequence, we obtain the first efficient algorithm for computing the tensor rank and for finding the optimal tensor decomposition as a sum of rank-one tensors when the input is a constant-rank tensor. More specifically, we provide efficient learning algorithms that run in randomized polynomial time over general fields and in deterministic polynomial time over ℝ and ℚ for the following classes:
1) Set-multilinear depth-3 circuits of constant top fan-in (set-multilinear ΣΠΣ(k) circuits). As a consequence of our algorithm, we obtain the first polynomial-time algorithm for tensor-rank computation and optimal tensor decomposition of constant-rank tensors. This result holds for d-dimensional tensors for any d, but is interesting even for d = 3.
2) Sums of powers of constantly many linear forms (Σ∧Σ(k) circuits). As a consequence, we obtain the first polynomial-time algorithm for tensor-rank computation and optimal tensor decomposition of constant-rank symmetric tensors.
3) Multilinear depth-3 circuits of constant top fan-in (multilinear ΣΠΣ(k) circuits). Our algorithm works over all fields of characteristic 0 or large enough characteristic. Prior to our work, the only efficient algorithms known were over polynomially-sized finite fields (see Karnin-Shpilka '09).
Prior to our work, the only polynomial-time or even subexponential-time algorithms (deterministic or randomized) known for subclasses of ΣΠΣ(k) circuits that also work over large/infinite fields were for the setting where the top fan-in k is at most 2 (see Sinha '16 and Sinha '20).
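To make the decomposition task concrete, here is a minimal NumPy sketch (ours, purely illustrative; the sizes n and r are arbitrary choices and nothing below is the paper's algorithm) of what an optimal rank-r decomposition of a 3-dimensional tensor looks like; the reconstruction problem is to recover such rank-one factors from black-box access alone.

    import numpy as np

    # Illustrative only: a 3-dimensional tensor of rank <= r written as a
    # sum of r rank-one terms a_i (x) b_i (x) c_i (outer products).
    rng = np.random.default_rng(0)
    n, r = 5, 3  # hypothetical sizes; the paper's algorithms need r = O(1)
    A, B, C = rng.standard_normal((3, r, n))
    T = sum(np.einsum('a,b,c->abc', A[i], B[i], C[i]) for i in range(r))
    # Reconstruction: given black-box evaluation access to T (as a
    # set-multilinear polynomial), recover factors realizing rank(T).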
Authors:
Bhargava, Vishwas ; Saraf, Shubhangi ; Volkovich, Ilya
Award ID(s):
1909683
Publication Date:
NSF-PAR ID:
10292839
Journal Name:
STOC 2021: Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing
Page Range or eLocation-ID:
809 to 822
Sponsoring Org:
National Science Foundation
More Like this
  1. Byrka, Jaroslaw ; Meka, Raghu (Ed.)
    In this work, we prove new relations between the bias of multilinear forms, the correlation between multilinear forms and lower-degree polynomials, and the rank of tensors over F₂. We show the following results for multilinear forms and tensors. Correlation bounds. We show that a random d-linear form has exponentially low correlation with low-degree polynomials. More precisely, for d = 2^{o(k)}, we show that a random d-linear form f(X₁, X₂, … , X_d) : (F₂^{k})^d → F₂ has correlation 2^{-k(1-o(1))} with any polynomial of degree at most d/2 with high probability. This result is proved by giving near-optimal bounds on the bias of a random d-linear form, which is in turn proved by giving near-optimal bounds on the probability that a sum of t random d-dimensional rank-1 tensors is identically zero. Tensor rank vs. bias. We show that if a 3-dimensional tensor has small rank, then its bias, when viewed as a 3-linear form, is large. More precisely, given any 3-dimensional tensor T: [k]³ → F₂ of rank at most t, the bias of the 3-linear form f_T(X₁, X₂, X₃) := ∑_{(i₁, i₂, i₃) ∈ [k]³} T(i₁, i₂, i₃)⋅ X_{1,i₁}⋅ X_{2,i₂}⋅ X_{3,i₃} is at least (3/4)^t. This bias vs. tensor-rank connection suggests a natural approach to proving nontrivial tensor-rank lower bounds. In particular, we use this approach to give a new proof that the finite field multiplication tensor has tensor rank at least 3.52k, which is the best known rank lower bound for any explicit tensor in three dimensions over F₂. Moreover, this relation between bias and tensor rank holds for d-dimensional tensors for any fixed d.
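    The bias bound above is easy to check exhaustively for tiny parameters. Below is a small brute-force sketch (ours; k, t, and the random tensor are arbitrary choices, and the bias is computed by full enumeration, not by the paper's methods) verifying bias(f_T) ≥ (3/4)^t for a random rank-≤ t tensor over F₂.

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        k, t = 3, 2  # tiny hypothetical parameters: 2^{3k} = 512 inputs

        # Random tensor of rank <= t: T = sum_j u_j (x) v_j (x) w_j over F_2.
        U, V, W = rng.integers(0, 2, size=(3, t, k))

        inputs = list(itertools.product((0, 1), repeat=k))
        total = 0
        for x1 in inputs:
            for x2 in inputs:
                for x3 in inputs:
                    # f_T(X1,X2,X3) = sum_j <u_j,X1><v_j,X2><w_j,X3> (mod 2)
                    f = sum(np.dot(U[j], x1) * np.dot(V[j], x2) * np.dot(W[j], x3)
                            for j in range(t))
                    total += (-1) ** (f % 2)

        bias = abs(total) / len(inputs) ** 3  # |E[(-1)^{f_T(x)}]|
        assert bias >= (3 / 4) ** t           # the abstract's lower bound
        print(f"bias = {bias:.4f} >= (3/4)^{t} = {(3/4) ** t:.4f}")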
  2. Existing proofs that deduce BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized NP ∩ coNP circuits, formally known as randomized SVN circuits, we convert any randomized algorithm over inputs of length n running in time t ≥ n into a deterministic one running in time t^{2+α} for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal for t close to n, since under standard complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing). The latter derandomization result holds under weaker assumptions, of exponential lower bounds against deterministic SVN circuits. Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+α) log s, under the assumption that there exists a function f ∈ E that requires randomized SVN circuits of size at least 2^{(1-α′)n}, where α = O(α′). The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list-recoverable codes.
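    For intuition, the t^{2+α} bound is just seed enumeration (a back-of-the-envelope count, not an additional claim from the paper): a time-t algorithm is computed by circuits of size s ≈ t (up to lower-order factors), and running it on the generator's output for every seed costs

        $2^{(1+\alpha)\log s} \cdot t \;=\; s^{1+\alpha} \cdot t \;\approx\; t^{1+\alpha} \cdot t \;=\; t^{2+\alpha}.$

    This is why shrinking the seed length to (1+α) log s, rather than the O(log s) of earlier constructions, is exactly what turns a large polynomial slowdown into a nearly quadratic one.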
  3. We study the problem of maximizing a non-monotone submodular function subject to a cardinality constraint in the streaming model. Our main contributions are two single-pass (semi-)streaming algorithms that use $\tilde{O}(k)\cdot\mathrm{poly}(1/\varepsilon)$ memory, where $k$ is the size constraint. At the end of the stream, both our algorithms post-process their data structures using any offline algorithm for submodular maximization, and obtain a solution whose approximation guarantee is $\frac{\alpha}{1+\alpha}-\varepsilon$, where $\alpha$ is the approximation guarantee of the offline algorithm. If we use an exact (exponential-time) post-processing algorithm, this leads to a $\frac{1}{2}-\varepsilon$ approximation, which is nearly optimal. If we post-process with the algorithm of Buchbinder-Feldman '19, which achieves the state-of-the-art offline approximation guarantee of $\alpha=0.385$, we obtain a $0.2779$-approximation in polynomial time, improving over the previously best polynomial-time approximation of $0.1715$ due to Feldman '18. One of our algorithms is combinatorial and enjoys fast update and overall running times. Our other algorithm is based on the multilinear extension, enjoys an improved space complexity, and can be made deterministic in some settings of interest.
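    As a quick sanity check on the constants quoted above (illustrative arithmetic only; the values come straight from the abstract), the $\frac{\alpha}{1+\alpha}-\varepsilon$ guarantee specializes as follows:

        # alpha/(1+alpha) for the two post-processing options quoted above
        for alpha in (1.0, 0.385):   # exact solver vs. Buchbinder-Feldman '19
            print(alpha, alpha / (1 + alpha))
        # 1.0   -> 0.5         (the 1/2 - eps guarantee)
        # 0.385 -> 0.27797...  (the 0.2779 guarantee, ignoring eps)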
  4. We study an online hypergraph matching problem with delays, motivated by ridesharing applications. In this model, users enter a marketplace sequentially, and are willing to wait up to $d$ timesteps to be matched, after which they will leave the system in favor of an outside option. A platform can match groups of up to $k$ users together, indicating that they will share a ride. Each group of users yields a match value depending on how compatible they are with one another. As an example, in ridesharing, $k$ is the capacity of the service vehicles, and $d$ is the amount of time a user is willing to wait for a driver to be matched to them. We present results for both the utility maximization and cost minimization variants of the problem. In the utility maximization setting, the optimal competitive ratio is $\frac{1}{d}$ whenever $k \geq 3$, and is achievable in polynomial time for any fixed $k$. In the cost minimization variant, when $k = 2$, the optimal competitive ratio for deterministic algorithms is $\frac{3}{2}$ and is achieved by a polynomial-time thresholding algorithm. When $k > 2$, we show that a polynomial-time randomized batching algorithm is $(2 - \frac{1}{d}) \log k$-competitive, and it is NP-hard to achieve a competitive ratio better than $\log k - O(\log \log k)$.
  5. We propose a new fast streaming algorithm for the tensor completion problem of imputing missing entries of a low tubal-rank tensor using the tensor singular value decomposition (t-SVD) algebraic framework. We show that the t-SVD is a specialization of the well-studied block-term decomposition for third-order tensors, and we present an algorithm under this model that can track changing free submodules from incomplete streaming 2-D data. The proposed algorithm uses principles from incremental gradient descent on the Grassmann manifold of subspaces to solve the tensor completion problem with linear complexity and constant memory in the number of time samples. We provide a local expected linear convergence result for our algorithm. Our empirical results are competitive in accuracy but much faster in compute time than state-of-the-art tensor completion algorithms in real applications recovering temporal chemo-sensing and MRI data under limited sampling.
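    For readers unfamiliar with the t-SVD framework referenced above, the following NumPy sketch implements the basic offline, fully observed t-SVD primitive: FFT along the third mode, one matrix SVD per frontal slice in the Fourier domain, and an inverse FFT. This is the standard construction from the tensor-tensor product literature, not the paper's streaming completion algorithm, and all names are ours.

        import numpy as np

        def t_product(A, B):
            # t-product: slice-wise matrix products in the Fourier domain
            # along the third mode.
            Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
            return np.fft.ifft(np.einsum('ikn,kjn->ijn', Ah, Bh), axis=2).real

        def t_transpose(A):
            # t-transpose: transpose each frontal slice, reverse slices 2..n3.
            At = A.transpose(1, 0, 2)
            return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

        def t_svd(T):
            # T = U * S * V^T under the t-product: one matrix SVD per frontal
            # slice in the Fourier domain, mirrored so the factors stay real.
            n1, n2, n3 = T.shape
            Th = np.fft.fft(T, axis=2)
            U = np.zeros((n1, n1, n3), complex)
            S = np.zeros((n1, n2, n3), complex)
            V = np.zeros((n2, n2, n3), complex)
            for i in range(n3 // 2 + 1):
                u, s, vh = np.linalg.svd(Th[:, :, i])
                U[:, :, i], V[:, :, i] = u, vh.conj().T
                S[:len(s), :len(s), i] = np.diag(s)
            for i in range(n3 // 2 + 1, n3):  # conjugate symmetry (T is real)
                U[:, :, i] = U[:, :, n3 - i].conj()
                S[:, :, i] = S[:, :, n3 - i].conj()
                V[:, :, i] = V[:, :, n3 - i].conj()
            ifft = lambda X: np.fft.ifft(X, axis=2).real
            return ifft(U), ifft(S), ifft(V)

        T = np.random.default_rng(2).standard_normal((4, 5, 6))
        U, S, V = t_svd(T)
        err = np.linalg.norm(t_product(t_product(U, S), t_transpose(V)) - T)
        print(f"reconstruction error: {err:.2e}")  # ~ 1e-13 (float error)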