-
Guruswami, Venkatesan (Ed.) A fundamental shortcoming of the concept of Nash equilibrium is its computational intractability: approximating Nash equilibria in normal-form games is PPAD-hard. In this paper, inspired by the ideas of smoothed analysis, we introduce a relaxed variant of Nash equilibrium called σ-smooth Nash equilibrium, for a smoothness parameter σ. In a σ-smooth Nash equilibrium, players only need to achieve utility at least as high as their best deviation to a σ-smooth strategy, which is a distribution that does not put too much mass (as parametrized by σ) on any fixed action. We distinguish two variants of σ-smooth Nash equilibria: strong σ-smooth Nash equilibria, in which players are required to play σ-smooth strategies under equilibrium play, and weak σ-smooth Nash equilibria, where there is no such requirement. We show that both weak and strong σ-smooth Nash equilibria have superior computational properties to Nash equilibria: when σ as well as an approximation parameter ϵ and the number of players are all constants, there is a constant-time randomized algorithm to find a weak ϵ-approximate σ-smooth Nash equilibrium in normal-form games. In the same parameter regime, there is a polynomial-time deterministic algorithm to find a strong ϵ-approximate σ-smooth Nash equilibrium in a normal-form game. These results stand in contrast to the optimal algorithm for computing ϵ-approximate Nash equilibria, which cannot run faster than quasi-polynomial time, subject to complexity-theoretic assumptions. We complement our upper bounds by showing that when either σ or ϵ is an inverse polynomial, finding a weak ϵ-approximate σ-smooth Nash equilibrium becomes computationally intractable. Our results are the first to propose a variant of Nash equilibrium which is computationally tractable, allows players to act independently, and which, as we discuss, is justified by an extensive line of work on individual choice behavior in the economics literature.
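To make the definition concrete, here is a minimal sketch for the two-player case of checking whether a strategy profile is a weak ϵ-approximate σ-smooth equilibrium. The bimatrix setup (payoff matrices A, B), the helper names, and the greedy solver are our own illustration of the definition, not the paper's algorithm; the greedy step just uses the fact that maximizing a linear objective over distributions capped at 1/(σn) per action pours the cap onto the highest-payoff actions.

```python
import numpy as np

def best_smooth_deviation_value(payoffs, sigma):
    """Max expected payoff over sigma-smooth strategies, i.e. distributions
    putting at most 1/(sigma * n) mass on any single action (n = #actions).
    Solved greedily: pour the mass cap onto the best actions first."""
    n = len(payoffs)
    cap = 1.0 / (sigma * n)
    mass_left, value = 1.0, 0.0
    for a in np.argsort(payoffs)[::-1]:
        m = min(cap, mass_left)
        value += m * payoffs[a]
        mass_left -= m
        if mass_left <= 1e-12:
            break
    return value

def is_weak_smooth_equilibrium(A, B, x, y, sigma, eps):
    """Check whether (x, y) is a weak eps-approximate sigma-smooth Nash
    equilibrium of the bimatrix game (A, B): neither player gains more than
    eps by deviating to a sigma-smooth strategy (the profile itself need not
    be smooth, which is what makes the notion 'weak')."""
    u1, u2 = x @ A @ y, x @ B @ y
    dev1 = best_smooth_deviation_value(A @ y, sigma)   # row player's payoff per action
    dev2 = best_smooth_deviation_value(x @ B, sigma)   # column player's payoff per action
    return dev1 - u1 <= eps and dev2 - u2 <= eps

# Example: uniform play in matching pennies passes the weak smooth-equilibrium check.
A = np.array([[1., -1.], [-1., 1.]]); B = -A
x = y = np.array([0.5, 0.5])
print(is_weak_smooth_equilibrium(A, B, x, y, sigma=0.5, eps=1e-9))
```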
-
Guruswami, Venkatesan (Ed.) We study a communication game between a sender and receiver. The sender chooses one of her signals about the state of the world (i.e., an anecdote) and communicates it to the receiver, who takes an action affecting both players. The sender and receiver both care about the state of the world but are also influenced by personal preferences, so their ideal actions can differ. We characterize perfect Bayesian equilibria. The sender faces a temptation to persuade: she wants to select a biased anecdote to influence the receiver’s action. Anecdotes are still informative to the receiver (who will debias at equilibrium), but the attempt to persuade comes at the cost of precision. This gives rise to informational homophily, where the receiver prefers to listen to like-minded senders because they provide higher-precision signals. Communication becomes polarized when the sender is an expert with access to many signals, with the sender choosing extreme outlier anecdotes at equilibrium (unless preferences are perfectly aligned). This polarization dissipates all the gains from communication with an increasingly well-informed sender when the anecdote distribution is heavy-tailed. Experts therefore face a curse of informedness: receivers will prefer to listen to less-informed senders who cannot pick biased signals as easily.
-
Guruswami, Venkatesan (Ed.) Gameplay under various forms of uncertainty has been widely studied. Feldman et al. [Michal Feldman et al., 2010] studied a particularly low-information setting in which one observes the opponent’s actions but no payoffs, not even one’s own, and introduced an algorithm which guarantees one’s payoff nonetheless approaches the minimax optimal value (i.e., zero) in a symmetric zero-sum game. Against an opponent playing a minimax-optimal strategy, approaching the value of the game is the best one can hope to guarantee. However, a wealth of research in behavioral economics shows that people often do not make perfectly rational, optimal decisions. Here we consider whether it is possible to actually win in this setting if the opponent is behaviorally biased. We model several deterministic, biased opponents and show that even without knowing the game matrix in advance or observing any payoffs, it is possible to take advantage of each bias in order to win nearly every round (so long as the game has the property that each action beats and is beaten by at least one other action). We also provide a partial characterization of the kinds of biased strategies that can be exploited to win nearly every round, and provide algorithms for beating some kinds of biased strategies even when we don't know which strategy the opponent uses.
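The paper analyzes several specific deterministic biases and gives general algorithms; the sketch below is purely our own toy for one hypothetical bias (an opponent that myopically best-responds to our previous action in a symmetric game), not one of the paper's algorithms. It is meant only to convey how observing the opponent's actions alone can suffice: in a symmetric zero-sum game a best response to x beats x, so the opponent's reply map itself reveals, for each action, an action that beats it.

```python
class ChainExploiter:
    """Exploits an opponent who myopically best-responds to OUR previous action
    in a symmetric game, using only the opponent's observed actions (no payoffs,
    no game matrix). Key fact: a best response to x beats x, so the opponent's
    reply map f reveals, for each action x, an action f(x) that beats x."""

    def __init__(self):
        self.f = {}        # learned reply map: my action -> opponent's next action
        self.prev = None   # my previous action

    def act(self):
        x = self.prev
        if x is not None and x in self.f:
            y = self.f[x]             # the opponent's (predicted) move this round
            return self.f.get(y, y)   # play f(y), which beats y, if known; else probe with y
        return 0 if x is None else x  # nothing learned for x yet: repeat x to learn f(x)

    def observe(self, my_move, opp_move):
        if self.prev is not None:
            self.f[self.prev] = opp_move   # the opponent just replied to my previous action
        self.prev = my_move


# Harness: generalized rock-paper-scissors on n actions (action a+1 cyclically
# beats action a) against an opponent that best-responds to our previous move.
n = 5
beats = {a: (a + 1) % n for a in range(n)}
player, my_prev, wins = ChainExploiter(), None, 0
for _ in range(1000):
    a = player.act()
    b = 0 if my_prev is None else beats[my_prev]   # opponent's myopic best response
    wins += (a == beats[b])                        # did our action beat the opponent's?
    player.observe(a, b)
    my_prev = a
print(f"won {wins} of 1000 rounds")   # wins all but a handful of early learning rounds
```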
-
Guruswami, Venkatesan (Ed.) Algorithms with predictions is a new research direction that leverages machine-learned predictions for algorithm design. So far, a plethora of recent works have incorporated predictions to improve on worst-case bounds for online problems. In this paper, we initiate the study of the complexity of dynamic data structures with predictions, including dynamic graph algorithms. Unlike online algorithms, the goal in dynamic data structures is to maintain the solution efficiently with every update. We investigate three natural models of prediction: (1) δ-accurate predictions, where each predicted request matches the true request with probability δ, (2) list-accurate predictions, where a true request comes from a list of possible requests, and (3) bounded delay predictions, where the true requests are a permutation of the predicted requests. We give general reductions among the prediction models, showing that bounded delay is the strongest prediction model, followed by list-accurate and then δ-accurate. Further, we identify two broad problem classes based on lower bounds due to the Online Matrix Vector (OMv) conjecture. Specifically, we show that locally correctable dynamic problems have strong conditional lower bounds for list-accurate predictions that are equivalent to the non-prediction setting, unless list-accurate predictions are perfect. Moreover, we show that locally reducible dynamic problems have time complexity that degrades gracefully with the quality of bounded delay predictions. We categorize problems with known OMv lower bounds accordingly and give several upper bounds in the delay model that show that our lower bounds are almost tight. We note that concurrent work by v.d.Brand et al. [SODA '24] and Liu and Srinivas [arXiv:2307.08890] independently studies dynamic graph algorithms with predictions, but their work is mostly focused on showing upper bounds.
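For concreteness, the sketch below (our own reading of the three informal model descriptions above, not the paper's formal definitions) turns a true request sequence into a prediction under each model; the parameters `universe`, `width`, and `max_delay` are illustrative stand-ins.

```python
import random

def delta_accurate(true_requests, delta, universe, rng=random):
    """Each predicted request equals the true one independently with
    probability delta, and is an arbitrary request otherwise."""
    return [r if rng.random() < delta else rng.choice(universe) for r in true_requests]

def list_accurate(true_requests, universe, width, rng=random):
    """The prediction for each step is a short list of candidate requests
    that is guaranteed to contain the true request."""
    return [sorted({r, *rng.sample(universe, width - 1)}) for r in true_requests]

def bounded_delay(true_requests, max_delay, rng=random):
    """The predicted requests are a permutation of the true ones in which each
    request appears within max_delay steps of its true position (enforced here
    by shuffling within consecutive blocks of length max_delay)."""
    pred = list(true_requests)
    for start in range(0, len(pred), max_delay):
        block = pred[start:start + max_delay]
        rng.shuffle(block)
        pred[start:start + max_delay] = block
    return pred

reqs = list(range(10))
print(delta_accurate(reqs, 0.8, reqs))
print(bounded_delay(reqs, 3))
```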
-
Guruswami, Venkatesan (Ed.) In decentralized finance ("DeFi"), automated market makers (AMMs) enable traders to programmatically exchange one asset for another. Such trades are enabled by the assets deposited by liquidity providers (LPs). The goal of this paper is to characterize and interpret the optimal (i.e., profit-maximizing) strategy of a monopolist liquidity provider, as a function of that LP’s beliefs about asset prices and trader behavior. We introduce a general framework for reasoning about AMMs based on a Bayesian-like belief inference framework, where LPs maintain an asset price estimate, which is updated by incorporating traders' price estimates. In this model, the market maker (i.e., LP) chooses a demand curve that specifies the quantity of a risky asset to be held at each dollar price. Traders arrive sequentially and submit a price bid that can be interpreted as their estimate of the risky asset price; the AMM responds to this submitted bid with an allocation of the risky asset to the trader, a payment that the trader must pay, and a revised internal estimate for the true asset price. We define an incentive-compatible (IC) AMM as one in which a trader’s optimal strategy is to submit its true estimate of the asset price, and characterize the IC AMMs as those with downward-sloping demand curves and payments defined by a formula familiar from Myerson’s optimal auction theory. We generalize Myerson’s virtual values, and characterize the profit-maximizing IC AMM. The optimal demand curve generally has a jump that can be interpreted as a "bid-ask spread," which we show is caused by a combination of adverse selection risk (dominant when the degree of information asymmetry is large) and monopoly pricing (dominant when asymmetry is small). This work opens up new research directions into the study of automated exchange mechanisms from the lens of optimal auction theory and iterative belief inference, using tools of theoretical computer science in a novel way.
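The "formula familiar from Myerson's optimal auction theory" can be illustrated numerically in a generic single-parameter setting. This is textbook auction theory, not the paper's exact AMM construction; the allocation curve x(·) below is an arbitrary stand-in. The point: if the quantity allocated to a trader is non-decreasing in its reported price b, then charging t(b) = b·x(b) − ∫₀ᵇ x(s) ds makes truthful reporting of the trader's price estimate optimal.

```python
import numpy as np

def payment(x, b, grid):
    """Myerson-style payment for a report b: b*x(b) minus the area under the
    allocation curve up to b (trapezoid rule on the grid)."""
    s = grid[grid <= b]
    xs = x(s)
    return b * x(b) - float(np.sum((xs[:-1] + xs[1:]) / 2 * np.diff(s)))

def utility(v, b, x, grid):
    """Trader whose true price estimate is v but who reports b."""
    return v * x(b) - payment(x, b, grid)

grid = np.linspace(0.0, 10.0, 2001)
x = lambda p: 1.0 - np.exp(-0.5 * p)   # some non-decreasing allocation curve (illustrative)

for v in (2.0, 5.0, 8.0):
    best = grid[np.argmax([utility(v, b, x, grid) for b in grid])]
    print(v, best)   # the utility-maximizing report coincides, up to grid error, with v
```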
-
Guruswami, Venkatesan (Ed.) Recent efforts in Analysis of Boolean Functions aim to extend core results to new spaces, including to the slice ([n] choose k), the hypergrid [K]ⁿ, and noncommutative spaces (matrix algebras). We present here a new way to relate functions on the hypergrid (or products of cyclic groups) to their harmonic extensions over the polytorus. We show the supremum of a function f over products of the cyclic group {exp(2π i k/K)}_{k = 1}^K controls the supremum of f over the entire polytorus ({z ∈ ℂ : |z| = 1}ⁿ), with multiplicative constant C depending on K and deg(f) only. This Remez-type inequality appears to be the first such estimate that is dimension-free (i.e., C does not depend on n). This dimension-free Remez-type inequality removes the main technical barrier to giving 𝒪(log n) sample complexity, polytime algorithms for learning low-degree polynomials on the hypergrid and low-degree observables on level-K qudit systems. In particular, our dimension-free Remez inequality implies new Bohnenblust-Hille-type estimates which are central to the learning algorithms and appear unobtainable via standard techniques. Thus we extend to new spaces a recent line of work [Eskenazis and Ivanisvili, 2022; Huang et al., 2022; Volberg and Zhang, 2023] that gave similarly efficient methods for learning low-degree polynomials on the hypercube and observables on qubits. An additional product of these efforts is a new class of distributions over which arbitrary quantum observables are well-approximated by their low-degree truncations - a phenomenon that greatly extends the reach of low-degree learning in quantum science [Huang et al., 2022].
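In symbols, the dimension-free Remez-type inequality described above can be stated as follows (a paraphrase of the abstract; the exact form of the constant is given in the paper):

```latex
% Dimension-free Remez-type inequality (paraphrased from the abstract; the
% precise constant C(K, d) is given in the paper). For a polynomial f of
% degree at most d in n variables, identified with its harmonic extension:
\[
  \sup_{z \in \mathbb{T}^n} |f(z)|
  \;\le\; C(K, d)\,
  \sup_{\omega \in \Omega_K^{\,n}} |f(\omega)|,
  \qquad
  \mathbb{T} = \{ z \in \mathbb{C} : |z| = 1 \}, \quad
  \Omega_K = \{ e^{2\pi i k/K} : k = 1, \dots, K \},
\]
% where C(K, d) depends only on K and d = deg(f), and in particular not on the
% dimension n.
```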
-
Guruswami, Venkatesan (Ed.) We study two recent combinatorial contract design models, which highlight different sources of complexity that may arise in contract design, where a principal delegates the execution of a costly project to others. In both settings, the principal cannot observe the choices of the agent(s), only the project’s outcome (success or failure), and incentivizes the agent(s) using a contract, a payment scheme that specifies the payment to the agent(s) upon a project’s success. We present results that resolve open problems and advance our understanding of the computational complexity of both settings. In the multi-agent setting, the project is delegated to a team of agents, where each agent chooses whether or not to exert effort. A success probability function maps any subset of agents who exert effort to a probability of the project’s success. For the family of submodular success probability functions, Dütting et al. [2023] established a poly-time constant factor approximation to the optimal contract, and left open whether this problem admits a PTAS. We answer this question in the negative, by showing that no poly-time algorithm guarantees a better than 0.7-approximation to the optimal contract. For XOS functions, they give a poly-time constant approximation with value and demand queries. We show that with value queries only, one cannot get any constant approximation. In the multi-action setting, the project is delegated to a single agent, who can take any subset of a given set of actions. Here, a success probability function maps any subset of actions to a probability of the project’s success. Dütting et al. [2021a] showed a poly-time algorithm for computing an optimal contract for gross substitutes success probability functions, and showed that the problem is NP-hard for submodular functions. We further strengthen this hardness result by showing that this problem does not admit any constant factor approximation. Furthermore, for the broader class of XOS functions, we establish the hardness of obtaining an n^{-1/2+ε}-approximation for any ε > 0.
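To fix ideas, here is a brute-force sketch of the multi-action (single-agent) model, under the usual normalization that a successful project yields reward 1 and the contract is a single payment t ∈ [0,1] made on success. This normalization, the toy functions f and c, and the helper names are our own illustration; the paper's results concern the hardness of this optimization for structured f, not brute force.

```python
from itertools import chain, combinations

def powerset(actions):
    return list(chain.from_iterable(combinations(actions, r) for r in range(len(actions) + 1)))

def agent_choice(f, c, t, sets):
    """The agent picks an action set maximizing expected payment minus cost,
    t*f(S) - c(S); ties broken in favor of the principal (higher f), the
    standard convention (floating-point ties are ignored in this sketch)."""
    return max(sets, key=lambda S: (t * f(S) - c(S), f(S)))

def optimal_contract(f, c, actions):
    """Brute force over critical payments: the principal's utility (1 - t)*f(S_t)
    can only improve where the agent switches action sets, i.e. where the agent
    is indifferent between two sets, so those payments (plus t = 0) suffice."""
    sets = powerset(actions)
    candidates = {0.0}
    for S in sets:
        for S2 in sets:
            if f(S) != f(S2):
                t = (c(S) - c(S2)) / (f(S) - f(S2))
                if 0.0 <= t <= 1.0:
                    candidates.add(t)
    return max(((t, agent_choice(f, c, t, sets)) for t in candidates),
               key=lambda pair: (1 - pair[0]) * f(pair[1]))

# Tiny example with three actions, a toy success-probability function f and cost c.
actions = (0, 1, 2)
f = lambda S: min(1.0, 0.3 * len(S) + 0.1 * (2 in S))
c = lambda S: 0.05 * len(S) + 0.1 * (1 in S)
print(optimal_contract(f, c, actions))   # (payment on success, action set the agent takes)
```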
-
Guruswami, Venkatesan (Ed.) Generalizing work of Künnemann, Paturi, and Schneider [ICALP 2017], we study a wide class of high-dimensional dynamic programming (DP) problems in which one must find the shortest path between two points in a high-dimensional grid given a tensor of transition costs between nodes in the grid. This captures many classical problems which are solved using DP, such as the knapsack problem, the airplane refueling problem, and the minimum-weight polygon triangulation problem. We observe that for many of these problems, the tensor naturally has low tensor rank or low slice rank. We then give new algorithms and a web of fine-grained reductions to tightly determine the complexity of these problems. For instance, we show that a polynomial speedup over the DP algorithm is possible when the tensor rank is a constant or the slice rank is 1, but that such a speedup is impossible if the tensor rank is slightly super-constant (assuming SETH) or the slice rank is at least 3 (assuming the APSP conjecture). We find that this characterizes the known complexities for many of these problems, and in some cases leads to new faster algorithms.
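As a toy illustration of how low-rank transition costs enable speedups (our own example, not one of the paper's reductions), consider the one-dimensional DP D[j] = min_{i<j} (D[i] + c[i][j]): generically this takes quadratic time, but if the cost matrix decomposes additively as c[i][j] = u[i] + v[j], a very special low-rank structure, the inner minimum collapses to a running minimum and the DP runs in linear time.

```python
def dp_generic(c):
    """Quadratic baseline: D[0] = 0, D[j] = min_{i<j} D[i] + c[i][j]."""
    n = len(c)
    D = [0.0] + [float("inf")] * (n - 1)
    for j in range(1, n):
        D[j] = min(D[i] + c[i][j] for i in range(j))
    return D

def dp_additive(u, v):
    """Linear-time version when c[i][j] = u[i] + v[j]: then
    min_{i<j} (D[i] + u[i] + v[j]) = (running min of D[i] + u[i]) + v[j]."""
    n = len(u)
    D = [0.0] * n
    best = D[0] + u[0]
    for j in range(1, n):
        D[j] = best + v[j]
        best = min(best, D[j] + u[j])
    return D

u, v = [3.0, 1.0, 4.0, 1.0, 5.0], [9.0, 2.0, 6.0, 5.0, 3.0]
c = [[u[i] + v[j] for j in range(len(v))] for i in range(len(u))]
assert dp_generic(c) == dp_additive(u, v)   # same answers, one pass instead of two nested loops
```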
-
Guruswami, Venkatesan (Ed.) Relational problems (those with many possible valid outputs) are different from decision problems, but it is easy to forget just how different. This paper initiates the study of FBQP/qpoly, the class of relational problems solvable in quantum polynomial-time with the help of polynomial-sized quantum advice, along with its analogues for deterministic and randomized computation (FP, FBPP) and advice (/poly, /rpoly). Our first result is that FBQP/qpoly ≠ FBQP/poly, unconditionally, with no oracle - a striking contrast with what we know about the analogous decision classes. The proof repurposes the separation between quantum and classical one-way communication complexities due to Bar-Yossef, Jayram, and Kerenidis. We discuss how this separation raises the prospect of near-term experiments to demonstrate "quantum information supremacy," a form of quantum supremacy that would not depend on unproved complexity assumptions. Our second result is that FBPP ⊄ FP/poly - that is, Adleman’s Theorem fails for relational problems - unless PSPACE ⊂ NP/poly. Our proof uses IP = PSPACE and time-bounded Kolmogorov complexity. On the other hand, we show that proving FBPP ⊄ FP/poly will be hard, as it implies a superpolynomial circuit lower bound for PromiseBPEXP. We prove the following further results:
- Unconditionally, FP ≠ FBPP and FP/poly ≠ FBPP/poly (even when these classes are carefully defined).
- FBPP/poly = FBPP/rpoly (and likewise for FBQP). For sampling problems, by contrast, SampBPP/poly ≠ SampBPP/rpoly (and likewise for SampBQP).