Search for: All records

Editors contains: "Ta-Shma, Amnon"

  1. Ta-Shma, Amnon (Ed.)
    A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudo-random generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. In this paper we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness to derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log² a) can only compute f correctly on finitely many inputs, then AM is in NP. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs for nondeterministic computations. As a byproduct of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). Byproducts in the average-case setting include the first uniform hardness vs. randomness tradeoffs for AM, as well as an unconditional mild derandomization result for AM. 
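    As a rough schematic of the example result quoted above (a paraphrase of the abstract's statement in our own notation, not the paper's precise theorem):
    \[
      f \in \mathrm{NTIME}(n^{a}) \text{ length-preserving, and every AM protocol running in time } n^{c},\ c = O(\log^{2} a), \text{ computes } f \text{ correctly on only finitely many inputs} \;\Longrightarrow\; \mathrm{AM} \subseteq \mathrm{NP}.
    \]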
  2. Ta-Shma, Amnon (Ed.)
    Three-player Number On the Forehead communication may be thought of as a three-player Number In the Hand promise model, in which each player is given the inputs that are supposedly on the other two players' heads, and promised that they are consistent with the inputs of the other players. The set of all allowed inputs under this promise may be thought of as an order-3 tensor. Surprisingly, we observe that this tensor is exactly the matrix multiplication tensor, which is widely studied in the design of fast matrix multiplication algorithms. Using this connection, we prove a number of results about both Number On the Forehead communication and matrix multiplication, each by using known results or techniques about the other. For example, we show how the Laser method, a key technique used to design the best matrix multiplication algorithms, can also be used to design communication protocols for a variety of problems. We also show how known lower bounds for Number On the Forehead communication can be used to bound properties of the matrix multiplication tensor such as its zeroing-out subrank. Finally, we substantially generalize known methods based on slice-rank for studying communication, and show how they directly relate to the matrix multiplication exponent ω. 
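    For reference, the order-3 matrix multiplication tensor referred to above is the standard textbook object (written here in our own notation, not necessarily the paper's):
    \[
      \langle a, b, c \rangle \;=\; \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{k=1}^{c} x_{ij}\, y_{jk}\, z_{ki},
    \]
    and the exponent ω is the infimum over all τ such that two n×n matrices can be multiplied using O(n^τ) arithmetic operations.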
  3. Ta-Shma, Amnon (Ed.)
    The Tensor Isomorphism problem (TI) has recently emerged as having connections to multiple areas of research within complexity and beyond, but the current best upper bound is essentially the brute-force algorithm. Being an algebraic problem, TI (or rather, proving that two tensors are non-isomorphic) lends itself very naturally to algebraic and semi-algebraic proof systems, such as the Polynomial Calculus (PC) and Sum of Squares (SoS). For its combinatorial cousin Graph Isomorphism, essentially optimal lower bounds are known for approaches based on PC and SoS (Berkholz & Grohe, SODA '17). Our main results are an Ω(n) lower bound on PC degree or SoS degree for Tensor Isomorphism, and a nontrivial upper bound for testing isomorphism of tensors of bounded rank. We also show that PC cannot perform basic linear algebra in sub-linear degree, such as comparing the rank of two matrices (which is essentially the same as 2-TI), or deriving BA=I from AB=I. As linear algebra is a key tool for understanding tensors, we introduce a strictly stronger proof system, PC-Inv, which allows as derivation rules all substitution instances of the implication AB=I → BA=I. We conjecture that even PC-Inv cannot solve TI in polynomial time, but leave open proving lower bounds on PC-Inv for any system of equations, let alone those for TI. We also highlight many other open questions about proof complexity approaches to TI. 
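    To make the linear-algebra obstacle concrete, the PC-Inv rule described above can be sketched as follows (our paraphrase of the abstract; the paper's formal presentation may differ):
    \[
      \{(AB)_{ij} - I_{ij} = 0 \,:\, i,j \in [n]\} \;\vdash\; \{(BA)_{ij} - I_{ij} = 0 \,:\, i,j \in [n]\},
    \]
    for every substitution of polynomials for the entries of A and B, i.e., all substitution instances of the implication AB = I → BA = I are admitted as derivation rules.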
  4. Ta-Shma, Amnon (Ed.)
    We study the fundamental challenge of exhibiting explicit functions that have small correlation with low-degree polynomials over 𝔽₂. Our main contributions include:
    1) In STOC 2020, CHHLZ introduced a new technique to prove correlation bounds. Using their technique they established new correlation bounds for low-degree polynomials. They conjectured that their technique generalizes to higher-degree polynomials as well. We give a counterexample to their conjecture, in fact ruling out weaker parameters and showing that what they prove is essentially the best possible.
    2) We propose a new approach for proving correlation bounds with the central "mod functions," consisting of two steps: (I) the polynomials that maximize correlation are symmetric, and (II) symmetric polynomials have small correlation. Contrary to related results in the literature, we conjecture that (I) is true. We argue this approach is not affected by existing "barrier results."
    3) We prove our conjecture for quadratic polynomials. Specifically, we determine the maximum possible correlation between quadratic polynomials modulo 2 and the functions (x_1,…,x_n) → z^{∑ x_i} for any z on the complex unit circle, and show that it is achieved by symmetric polynomials. To obtain our results we develop a new proof technique: we express correlation in terms of directional derivatives and analyze it by slowly restricting the direction.
    4) We make partial progress on the conjecture for cubic polynomials, in particular proving tight correlation bounds for cubic polynomials whose degree-3 part is symmetric. 
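    For concreteness, the correlation quantity in item 3 can be written in a standard way (our notation; the paper's conventions may differ, e.g., by a complex conjugate or normalization):
    \[
      \mathrm{corr}(p, z) \;=\; \Bigl|\, \mathbb{E}_{x \in \{0,1\}^{n}} \bigl[ (-1)^{p(x)} \, z^{\sum_{i} x_{i}} \bigr] \Bigr|, \qquad p \in \mathbb{F}_{2}[x_{1}, \dots, x_{n}],\ \deg p \le 2,\ |z| = 1.
    \]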
  5. Ta-Shma, Amnon (Ed.)
    Suppose that a target distribution can be approximately sampled by a low-depth decision tree, or more generally by an efficient cell-probe algorithm. It is shown to be possible to restrict the input to the sampler so that its output distribution is still not too far from the target distribution, and at the same time many output coordinates are almost pairwise independent. This new tool is then used to obtain several new sampling lower bounds and separations, including a separation between AC⁰ and low-depth decision trees, and a hierarchy theorem for sampling. It is also used to obtain a new proof of the Patrascu-Viola data-structure lower bound for Rank, thereby unifying sampling and data-structure lower bounds. 
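    The "almost pairwise independent" guarantee mentioned above can be formalized in the usual way (a hedged paraphrase; δ and the set of good coordinate pairs are generic parameters, not values taken from the paper):
    \[
      \bigl\| (Y_{i}, Y_{j}) - Y_{i} \otimes Y_{j} \bigr\|_{\mathrm{TV}} \;\le\; \delta \qquad \text{for most pairs of output coordinates } i \ne j,
    \]
    where Y is the sampler's output after restricting its input and Y_i ⊗ Y_j denotes the product of the two marginal distributions.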
  6. Ta-Shma, Amnon (Ed.)
    We initiate the study of generalized AC⁰ circuits composed of arbitrary unbounded fan-in gates which only need to be constant over inputs of Hamming weight ≥ k (up to negations of the input bits), which we denote GC⁰(k). The gate set of this class includes biased LTFs like the k-OR (outputs 1 iff ≥ k bits are 1) and k-AND (outputs 0 iff ≥ k bits are 0), and thus can be seen as an interpolation between AC⁰ and TC⁰. We establish a tight multi-switching lemma for GC⁰(k) circuits, which bounds the probability that several depth-2 GC⁰(k) circuits do not simultaneously simplify under a random restriction. We also establish a new depth reduction lemma such that, coupled with our multi-switching lemma, many results obtained from the multi-switching lemma for depth-d size-s AC⁰ circuits lift to depth-d size-s^{.99} GC⁰(.01 log s) circuits with no loss in parameters (other than hidden constants). Our result has the following applications:
    - Size-2^Ω(n^{1/d}) depth-d GC⁰(Ω(n^{1/d})) circuits do not correlate with parity (extending a result of Håstad (SICOMP, 2014)).
    - Size-n^Ω(log n) GC⁰(Ω(log² n)) circuits with n^{.249} arbitrary threshold gates or n^{.499} arbitrary symmetric gates exhibit exponentially small correlation against an explicit function (extending a result of Tan and Servedio (RANDOM, 2019)).
    - There is a pseudorandom generator with seed length O((log m)^{d-1} log(m/ε) log log m) against size-m depth-d GC⁰(log m) circuits, matching the AC⁰ lower bound of Håstad up to a log log m factor (extending a result of Lyu (CCC, 2022)).
    - Size-m GC⁰(log m) circuits have exponentially small Fourier tails (extending a result of Tal (CCC, 2017)). 
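    The two example gates named above can be written out explicitly; this is just the abstract's description in symbols:
    \[
      k\text{-}\mathrm{OR}(x) = 1 \iff \#\{i : x_{i} = 1\} \ge k, \qquad k\text{-}\mathrm{AND}(x) = 0 \iff \#\{i : x_{i} = 0\} \ge k.
    \]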
  7. Ta-Shma, Amnon (Ed.)
    Locally Decodable Codes (LDCs) are error-correcting codes C:Σⁿ → Σ^m, encoding messages in Σⁿ to codewords in Σ^m, with super-fast decoding algorithms. They are important mathematical objects in many areas of theoretical computer science, yet the best constructions so far have codeword length m that is super-polynomial in n, for codes with constant query complexity and constant alphabet size. In a very surprising result, Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan (SICOMP 2006) showed how to construct a relaxed version of LDCs (RLDCs) with constant query complexity and almost linear codeword length over the binary alphabet, and used them to obtain significantly improved constructions of Probabilistically Checkable Proofs. In this work, we study RLDCs in the standard Hamming-error setting, and introduce their variants in the insertion and deletion (Insdel) error setting. Standard LDCs for Insdel errors were first studied by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), and are further motivated by recent advances in DNA random access bio-technologies. Our first result is an exponential lower bound on the length of Hamming RLDCs making 2 queries (even adaptively), over the binary alphabet. This answers a question explicitly raised by Gur and Lachish (SICOMP 2021) and is the first exponential lower bound for RLDCs. Combined with the results of Ben-Sasson et al., our result exhibits a "phase-transition"-type behavior on the codeword length for some constant query complexity. We achieve these lower bounds via a transformation of RLDCs to standard Hamming LDCs, using a careful analysis of restrictions of message bits that fix codeword bits. We further define two variants of RLDCs in the Insdel-error setting, a weak and a strong version. On the one hand, we construct weak Insdel RLDCs with almost linear codeword length and constant query complexity, matching the parameters of the Hamming variants. On the other hand, we prove exponential lower bounds for strong Insdel RLDCs. These results demonstrate that, while these variants are equivalent in the Hamming setting, they are significantly different in the Insdel setting. Our results also prove a strict separation between Hamming RLDCs and Insdel RLDCs. 
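    As a reminder of what "relaxed" means here, a common formalization of a q-query RLDC decoder Dec for a code C (parameters indicative; the precise definition following Ben-Sasson et al. typically adds a further condition on the fraction of coordinates decoded correctly) requires, for every message x and index i:
    \[
      \text{(i)}\ \Pr\bigl[\mathrm{Dec}^{C(x)}(i) = x_{i}\bigr] = 1, \qquad \text{(ii)}\ \forall y \text{ with } \Delta(y, C(x)) \le \delta m:\ \Pr\bigl[\mathrm{Dec}^{y}(i) \in \{x_{i}, \bot\}\bigr] \ge 2/3,
    \]
    where Dec makes at most q queries to its oracle.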
  8. Ta-Shma, Amnon (Ed.)
    In a recent work, Gryaznov, Pudlák and Talebanfard (CCC '22) introduced a linear variant of read-once branching programs, with motivations from circuit and proof complexity. Such a read-once linear branching program is a branching program where each node is allowed to make 𝔽₂-linear queries, and is read-once in the sense that the queries on each path are linearly independent. As their main result, they constructed an explicit function with average-case complexity 2^{n/3-o(n)} against a slightly restricted model, which they call strongly read-once linear branching programs. The main tool in their lower bound result is a new type of extractor that they introduced, called directional affine extractors. Our main result is an explicit function with 2^{n-o(n)} average-case complexity against the strongly read-once linear branching program model, which is almost optimal. This result is based on a new connection from this problem to sumset extractors, a randomness extractor model introduced by Chattopadhyay and Li (STOC '16) as a generalization of many other well-studied models including two-source extractors, affine extractors and small-space extractors. With this new connection, our lower bound naturally follows from a recent construction of sumset extractors by Chattopadhyay and Liao (STOC '22). In addition, we show that directional affine extractors imply sumset extractors in a restricted setting. We observe that such restricted sumset sources are enough to derive lower bounds, and obtain an arguably more modular proof of the lower bound by Gryaznov, Pudlák and Talebanfard. We also initiate a study of pseudorandomness against linear branching programs. Our main result here is a hitting set generator construction against regular linear branching programs with constant width. We derive this result based on a connection to Kakeya sets over finite fields. 
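    The extractor notion driving the connection above can be stated roughly as follows (a hedged sketch in our own notation; the exact parameterization of sumset sources follows Chattopadhyay and Li and may differ in details):
    \[
      \bigl\| \mathrm{Ext}(X_{1} \oplus \cdots \oplus X_{C}) - U_{m} \bigr\|_{\mathrm{TV}} \;\le\; \epsilon \qquad \text{for all independent sources } X_{1}, \dots, X_{C} \text{ over } \{0,1\}^{n}, \text{ each with min-entropy at least } k.
    \]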
  9. Ta-Shma, Amnon (Ed.)
    We study the problem of finding a Tarski fixed point over the k-dimensional grid [n]^k. We give a black-box reduction from the Tarski problem to the same problem with the additional promise that the input function has a unique fixed point. This implies that the Tarski problem and the unique Tarski problem have exactly the same query complexity. Our reduction is based on a novel notion of partial-information functions, which we use to fool algorithms for the unique Tarski problem as if they were working on a monotone function with a unique fixed point. 
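    For intuition about the search problem itself, below is a minimal Python sketch (our own illustration, not the paper's reduction or its query-efficient algorithms) of the classical bottom-up iteration that finds a Tarski fixed point of a monotone map on the grid [n]^k using at most k(n-1)+1 function evaluations. The example map and its target fixed point are hypothetical.

    def naive_tarski(f, n, k):
        """Find a fixed point of a monotone map f on the grid {1,...,n}^k.

        f maps k-tuples with entries in {1,...,n} to k-tuples in the same grid
        and is assumed monotone (x <= y coordinate-wise implies f(x) <= f(y)).
        Iterating x <- f(x) from the bottom element gives a coordinate-wise
        non-decreasing chain, so a fixed point is reached within k*(n-1) steps.
        """
        x = tuple(1 for _ in range(k))  # bottom element (1, ..., 1)
        for _ in range(k * (n - 1) + 1):
            fx = f(x)
            if fx == x:
                return x
            x = fx
        return x  # fallback; not reached when f is genuinely monotone

    # Hypothetical example: a monotone map on {1,...,5}^2 whose unique fixed
    # point is (3, 4); each coordinate climbs toward the target one step at a time.
    if __name__ == "__main__":
        n, k, target = 5, 2, (3, 4)
        f = lambda x: tuple(min(xi + 1, ti) for xi, ti in zip(x, target))
        print(naive_tarski(f, n, k))  # prints (3, 4)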
  10. Ta-Shma, Amnon (Ed.)
    A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. The celebrated "hardness vs. randomness" paradigm pioneered by Blum-Micali (SIAM JoC '84), Yao (FOCS '84) and Nisan-Wigderson (JCSS '94) presents hardness assumptions under which, e.g., prBPP = prP (so-called "high-end" derandomization) or prBPP ⊆ prSUBEXP (so-called "low-end" derandomization), and more generally under which prBPP ⊆ prDTIME(𝒞) where 𝒞 is a "nice" class (closed under composition with a polynomial); but these hardness assumptions are not known to also be necessary for such derandomization. In this work, following the recent work by Chen and Tell (FOCS '21) that considers "almost-all-input" hardness of a function f (i.e., hardness of computing f on more than a finite number of inputs), we consider "almost-all-input" leakage-resilient hardness of a function f - that is, hardness of computing f(x) even given, say, √|x| bits of leakage of f(x). We show that leakage-resilient hardness characterizes derandomization of prBPP (i.e., gives a condition that is both necessary and sufficient for derandomization), both in the high-end and in the low-end setting. In more detail, we show that there exists a constant c such that for every function T, the following are equivalent:
    - prBPP ⊆ prDTIME(poly(T(poly(n))));
    - the existence of a poly(T(poly(n)))-time computable function f : {0,1}ⁿ → {0,1}ⁿ that is almost-all-input leakage-resilient hard with respect to n^c-time probabilistic algorithms.
    As far as we know, this is the first assumption that characterizes derandomization in both the low-end and the high-end regime. Additionally, our characterization naturally extends to derandomization of prMA, and also to average-case derandomization, by appropriately weakening the requirements on the function f. In particular, for the case of average-case (a.k.a. "effective") derandomization, we no longer require the function to be almost-all-input hard, but simply to satisfy the more standard notion of average-case leakage-resilient hardness (w.r.t. every samplable distribution), whereas for derandomization of prMA, we instead consider leakage-resilience for relations. 
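    Schematically, the characterization described above reads (a paraphrase of the abstract, with the leakage length √|x| as stated there):
    \[
      \mathrm{prBPP} \subseteq \mathrm{prDTIME}\bigl(\mathrm{poly}(T(\mathrm{poly}(n)))\bigr) \;\Longleftrightarrow\; \exists\, f : \{0,1\}^{n} \to \{0,1\}^{n} \text{ computable in time } \mathrm{poly}(T(\mathrm{poly}(n)))
    \]
    that is almost-all-input leakage-resilient hard (computing f(x) remains hard even given √|x| bits of leakage of f(x)) with respect to n^c-time probabilistic algorithms, for a universal constant c.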