Search for: All records; Editors contains: "Tauman_Kalai, Yael"


  1. Tauman_Kalai, Yael (Ed.)
    A folklore conjecture in quantum computing is that the acceptance probability of a quantum query algorithm can be approximated by a classical decision tree, with only a polynomial increase in the number of queries. Motivated by this conjecture, Aaronson and Ambainis (Theory of Computing, 2014) conjectured that this should hold more generally for any bounded function computed by a low-degree polynomial. In this work we prove two new results towards establishing this conjecture: first, that any such polynomial has a small fractional certificate complexity; and second, that many inputs have a small sensitive block. We show that these two results would imply the Aaronson and Ambainis conjecture, assuming a conjectured extension of Talagrand's concentration inequality. On the technical side, many classical techniques used in the analysis of Boolean functions seem to fail when applied to bounded functions. Here, we develop a new technique, based on a mix of combinatorics, analysis, and geometry, which in part extends a recent technique of Knop et al. (STOC 2021) to bounded functions.
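    For concreteness, the Aaronson–Ambainis conjecture referenced above is commonly stated along the following lines; this is a rough formulation added for reference, and the precise polynomial dependence is not specified in the abstract.
    ```latex
    % Aaronson--Ambainis conjecture (commonly quoted form): every bounded
    % low-degree polynomial has at least one influential coordinate.
    \[
      \text{If } f\colon \{-1,1\}^n \to [0,1] \text{ has } \deg(f) \le d, \text{ then }
      \max_{i \in [n]} \operatorname{Inf}_i(f) \;\ge\; \operatorname{poly}\!\left(\frac{\operatorname{Var}(f)}{d}\right),
    \]
    % where Inf_i(f) is the influence of coordinate i and Var(f) is the variance
    % of f, both taken with respect to the uniform distribution on {-1,1}^n.
    ```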
  2. Tauman_Kalai, Yael (Ed.)
    We study the complexity of computing stationary Nash equilibrium (NE) in n-player infinite-horizon general-sum stochastic games. We focus on the problem of computing NE in such stochastic games when each player is restricted to choosing a stationary policy and rewards are discounted. First, we prove that computing such NE is in PPAD (in addition to clearly being PPAD-hard). Second, we consider turn-based specializations of such games, where at each state there is at most a single player that can take actions, and show that these (seemingly simpler) games remain PPAD-hard. Third, we show that under further structural assumptions on the rewards, computing NE in such turn-based games is possible in polynomial time. Towards achieving these results, we establish structural facts about stochastic games of broader utility, including monotonicity of utilities under single-state single-action changes and reductions to settings where each player controls a single state.
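    To make the setting above concrete, the sketch below evaluates a fixed stationary joint policy in a discounted stochastic game by solving the linear system (I - γ P_π) V = r_π for each player. It is only an illustration of the model, not the paper's algorithm, and the tiny two-state instance at the bottom is hypothetical.
    ```python
    import numpy as np

    def evaluate_stationary_policy(P, R, policy, gamma):
        """Discounted values of a fixed stationary joint policy.

        P[s][a]   : transition distribution over next states for state s and joint action a
        R[s][a]   : per-player reward vector for state s and joint action a
        policy[s] : distribution over joint actions played at state s
        gamma     : discount factor in [0, 1)
        Returns an array of shape (num_states, num_players).
        """
        S = len(P)
        n_players = len(next(iter(R[0].values())))
        P_pi = np.zeros((S, S))
        r_pi = np.zeros((S, n_players))
        for s in range(S):
            for a, prob in policy[s].items():
                P_pi[s] += prob * np.asarray(P[s][a])
                r_pi[s] += prob * np.asarray(R[s][a])
        # Solve (I - gamma * P_pi) V = r_pi for every player's value function at once.
        return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

    # Hypothetical 2-state, 2-player example; joint actions are pairs (a1, a2).
    P = [{(0, 0): [1.0, 0.0], (0, 1): [0.5, 0.5]},
         {(0, 0): [0.0, 1.0], (0, 1): [0.5, 0.5]}]
    R = [{(0, 0): [1.0, 0.0], (0, 1): [0.0, 1.0]},
         {(0, 0): [0.0, 0.0], (0, 1): [1.0, 1.0]}]
    policy = [{(0, 0): 0.5, (0, 1): 0.5}, {(0, 1): 1.0}]
    print(evaluate_stationary_policy(P, R, policy, gamma=0.9))
    ```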
  3. Tauman_Kalai, Yael (Ed.)
    We study a setting where Bayesian agents with a common prior have private information related to an event's outcome and sequentially make public announcements relating to their information. Our main result shows that when agents' private information is independent conditioned on the event's outcome, then whenever agents have similar beliefs about the outcome, their information is aggregated. That is, there is no false consensus. Our main result has a short proof based on a natural information-theoretic framework. A key ingredient of the framework is the equivalence between the sign of the "interaction information" and a super/sub-additive property of the value of people's information. This provides an intuitive interpretation and an interesting application of the interaction information, which measures the amount of information shared by three random variables. We illustrate the power of this information-theoretic framework by reproving two additional results within it: 1) that agents quickly agree when announcing (summaries of) beliefs in round-robin fashion [Aaronson 2005], and 2) results from [Chen et al. 2010] on when prediction market agents should release information to maximize their payment. We also interpret the information-theoretic framework and the above results in prediction markets by proving that the expected reward of revealing information is the conditional mutual information of the information revealed.
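    The "interaction information" mentioned above can be made concrete with a short computation from a finite joint distribution. This is an illustrative sketch using the standard definition I(X;Y;Z) = I(X;Y) - I(X;Y|Z) (the opposite sign convention also appears in the literature); the XOR example at the bottom is hypothetical and not taken from the paper.
    ```python
    import numpy as np
    from itertools import product

    def entropy(p):
        """Shannon entropy (in bits) of a probability array."""
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def interaction_information(joint):
        """Interaction information I(X;Y;Z) = I(X;Y) - I(X;Y|Z).

        `joint` is a 3-D array with joint[x, y, z] = Pr[X=x, Y=y, Z=z].
        """
        joint = np.asarray(joint, dtype=float)
        H = lambda axes: entropy(joint.sum(axis=axes).ravel())
        H_x, H_y, H_z = H((1, 2)), H((0, 2)), H((0, 1))
        H_xy, H_xz, H_yz = H((2,)), H((1,)), H((0,))
        H_xyz = entropy(joint.ravel())
        # Inclusion-exclusion expansion of I(X;Y) - I(X;Y|Z) in terms of entropies.
        return H_x + H_y + H_z - H_xy - H_xz - H_yz + H_xyz

    # Hypothetical example: X, Y independent fair bits and Z = X XOR Y,
    # so the three variables are pairwise independent but jointly dependent.
    joint = np.zeros((2, 2, 2))
    for x, y in product(range(2), repeat=2):
        joint[x, y, x ^ y] = 0.25
    print(interaction_information(joint))  # -1.0
    ```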
  4. Tauman_Kalai, Yael (Ed.)
    Consider an agent exploring an unknown graph in search of some goal state. As it walks around the graph, it learns the nodes and their neighbors. The agent only knows where the goal state is when it reaches it. How do we reach this goal while moving only a small distance? This problem seems hopeless, even on trees of bounded degree, unless we give the agent some help. This setting with "help" often arises in exploring large search spaces (e.g., huge game trees) where we assume access to some score/quality function for each node, which we use to guide us towards the goal. In our case, we assume the help comes in the form of distance predictions: each node v provides a prediction f(v) of its distance to the goal vertex. Naturally, if these predictions are correct, we can reach the goal along a shortest path. What if the predictions are unreliable and some of them are erroneous? Can we get an algorithm whose performance relates to the error of the predictions? In this work, we consider the problem on trees and give deterministic algorithms whose total movement cost is only O(OPT + Δ ⋅ ERR), where OPT is the distance from the start to the goal vertex, Δ is the maximum degree, and ERR is the total number of vertices whose predictions are erroneous. We show this guarantee is optimal. We then consider a "planning" version of the problem where the graph and predictions are known at the beginning, so the agent can use this global information to devise a search strategy of low cost. For this planning version, we go beyond trees and give an algorithm that achieves good performance on (weighted) graphs with bounded doubling dimension.
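    As a toy illustration of the search model above, the sketch below runs a depth-first walk on a tree that explores neighbors in increasing order of their predicted distance and backtracks when a subtree is exhausted. It is a naive baseline to make the movement-cost model concrete, not the paper's algorithm, and it carries no O(OPT + Δ ⋅ ERR) guarantee; the tree, predictions, and node names are hypothetical.
    ```python
    def prediction_guided_dfs(adj, pred, start, goal):
        """Depth-first walk on a tree, visiting children in increasing order of
        predicted distance and backtracking along tree edges when stuck.
        Returns the full walk; its length minus one is the movement cost.

        adj[v]  : list of neighbors of v
        pred[v] : predicted distance from v to the goal (may be erroneous)
        """
        walk = [start]

        def dfs(v, parent):
            if v == goal:
                return True
            for u in sorted((u for u in adj[v] if u != parent), key=lambda u: pred[u]):
                walk.append(u)        # move down to u
                if dfs(u, v):
                    return True
                walk.append(v)        # subtree exhausted: move back up to v
            return False

        dfs(start, None)
        return walk

    # Hypothetical tree with start 'a' and goal 'f'; node 'b' has a misleading prediction.
    adj = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a', 'e', 'f'],
           'd': ['b'], 'e': ['c'], 'f': ['c']}
    pred = {'a': 2, 'b': 1, 'c': 1, 'd': 4, 'e': 2, 'f': 0}
    print(prediction_guided_dfs(adj, pred, start='a', goal='f'))
    ```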
  5. Tauman_Kalai, Yael (Ed.)
    Connections between proof complexity and circuit complexity have become major tools for obtaining lower bounds in both areas. These connections - which take the form of interpolation theorems and query-to-communication lifting theorems - translate efficient proofs into small circuits, and vice versa, allowing tools from one area to be applied to the other. Recently, the theory of TFNP has emerged as a unifying framework underlying these connections. For many of the proof systems which admit such a connection, there is a TFNP problem which characterizes it: the class of problems which are reducible to this TFNP problem via query-efficient reductions is equivalent to the tautologies that can be efficiently proven in the system. Through this, proof complexity has become a major tool for proving separations in black-box TFNP. Similarly, for certain monotone circuit models, the class of functions that they can compute efficiently is equivalent to what can be reduced to a certain TFNP problem in a communication-efficient manner. When a TFNP problem has both a proof and a circuit characterization, one can prove an interpolation theorem. Conversely, many lifting theorems can be viewed as relating the communication and query reductions to TFNP problems. This is exciting, as it suggests that TFNP provides a roadmap for the development of further interpolation theorems and lifting theorems. In this paper we begin to develop a more systematic understanding of when these connections to TFNP occur. We give exact conditions under which a proof system or circuit model admits a characterization by a TFNP problem. We show:
    - Every well-behaved proof system which can prove its own soundness (a reflection principle) is characterized by a TFNP problem. Conversely, every TFNP problem gives rise to a well-behaved proof system which proves its own soundness.
    - Every well-behaved monotone circuit model which admits a universal family of functions is characterized by a TFNP problem. Conversely, every TFNP problem gives rise to a well-behaved monotone circuit model with a universal problem.
    As an example, we provide a TFNP characterization of the Polynomial Calculus, answering a question from [Mika Göös et al., 2022], and show that it can prove its own soundness.
  6. Tauman_Kalai, Yael (Ed.)
    We show improved monotonicity testers for the Boolean hypercube under the p-biased measure, as well as over the hypergrid [m]ⁿ. Our results are: 1) For any p ∈ (0,1), for the p-biased hypercube we show a non-adaptive tester that makes Õ(√n/ε²) queries, accepts monotone functions with probability 1, and rejects functions that are ε-far from monotone with probability at least 2/3. 2) For all m ∈ ℕ, we show an Õ(√nm³/ε²) query monotonicity tester over [m]ⁿ. We also establish corresponding directed isoperimetric inequalities in these domains, analogous to the isoperimetric inequality in [Subhash Khot et al., 2018]. Previously, the best known tester, due to Black, Chakrabarty and Seshadhri [Hadley Black et al., 2018], had Ω(n^{5/6}) query complexity. Our results are optimal up to poly-logarithmic factors and the dependency on m. Our proof uses a notion of monotone embeddings of measures into the Boolean hypercube that can be used to reduce the problem of monotonicity testing over arbitrary product domains to the Boolean cube. The embedding maps a function over a product domain of dimension n into a function over a Boolean cube of a larger dimension n', while preserving its distance from being monotone; an embedding is considered efficient if n' is not much larger than n, and we show how to construct efficient embeddings in the above-mentioned settings.
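    For context on the query model, here is a minimal sketch of the classical non-adaptive edge tester on the uniform Boolean hypercube: sample random edges and reject on any monotonicity violation. This is a standard baseline for illustration only, not the improved tester from the abstract; the query budget and the example function are hypothetical.
    ```python
    import random

    def edge_tester(f, n, num_queries):
        """Non-adaptive edge tester for monotonicity of f: {0,1}^n -> {0,1}.

        Samples random hypercube edges (pairs differing in one coordinate) and
        rejects if a sampled edge violates monotonicity. Monotone functions are
        always accepted; the rejection probability for functions far from
        monotone grows with the number of sampled edges.
        """
        for _ in range(num_queries):
            x = [random.randint(0, 1) for _ in range(n)]
            i = random.randrange(n)
            lo, hi = list(x), list(x)
            lo[i], hi[i] = 0, 1
            if f(lo) > f(hi):      # a violated edge certifies non-monotonicity
                return "reject"
        return "accept"

    # Hypothetical example: an anti-monotone function is rejected with high probability.
    n = 8
    f = lambda x: 1 - x[0]
    print(edge_tester(f, n, num_queries=200))
    ```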
  7. Tauman_Kalai, Yael (Ed.)
    In the weighted load balancing problem, the input is an n-vertex bipartite graph between a set of clients and a set of servers, and each client comes with some nonnegative real weight. The output is an assignment that maps each client to one of its adjacent servers, and the load of a server is then the sum of the weights of the clients assigned to it. The goal is to find an assignment that is well-balanced, typically captured by (approximately) minimizing either the 𝓁_∞- or 𝓁₂-norm of the server loads. Generalizing both of these objectives, the all-norm load balancing problem asks for an assignment that approximately minimizes all 𝓁_p-norm objectives for p ≥ 1, including p = ∞, simultaneously. Our main result is a deterministic O(log n)-pass O(1)-approximation semi-streaming algorithm for the all-norm load balancing problem. Prior to our work, only an O(log n)-pass O(log n)-approximation algorithm for the 𝓁_∞-norm objective was known in the semi-streaming setting. Our algorithm uses a novel application of the multiplicative weights update method to a mixed covering/packing convex program, with an infinite number of constraints, for the all-norm load balancing problem.
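    To illustrate the objective described above (but not the semi-streaming algorithm itself), the sketch below assigns clients to adjacent servers with a simple greedy heuristic and reports the resulting server loads together with their 𝓁₂ and 𝓁_∞ norms; the instance, weights, and names are hypothetical.
    ```python
    from math import inf

    def greedy_assign(clients, weights, neighbors):
        """Greedy heuristic (not the paper's algorithm): process clients in
        decreasing weight order and assign each to its adjacent server with
        the currently smallest load. Returns (assignment, server loads)."""
        loads, assignment = {}, {}
        for c in sorted(clients, key=lambda c: -weights[c]):
            server = min(neighbors[c], key=lambda s: loads.get(s, 0.0))
            assignment[c] = server
            loads[server] = loads.get(server, 0.0) + weights[c]
        return assignment, loads

    def lp_norm(loads, p):
        """l_p norm of the vector of server loads (p = inf gives the max load)."""
        values = list(loads.values())
        return max(values) if p == inf else sum(v ** p for v in values) ** (1.0 / p)

    # Hypothetical instance: 4 weighted clients, 2 servers, edges restrict feasible servers.
    clients = ['c1', 'c2', 'c3', 'c4']
    weights = {'c1': 3.0, 'c2': 2.0, 'c3': 2.0, 'c4': 1.0}
    neighbors = {'c1': ['s1'], 'c2': ['s1', 's2'], 'c3': ['s2'], 'c4': ['s1', 's2']}
    assignment, loads = greedy_assign(clients, weights, neighbors)
    print(assignment, loads)
    print("l_2:", lp_norm(loads, 2), "l_inf:", lp_norm(loads, inf))
    ```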