Title: On the hardness of dominant strategy mechanism design
We study the communication complexity of dominant strategy implementations of combinatorial auctions. We start with two domains that are generally considered "easy": multi-unit auctions with decreasing marginal values and combinatorial auctions with gross substitutes valuations. For both domains we have fast algorithms that find the welfare-maximizing allocation with communication complexity that is poly-logarithmic in the input size. This immediately implies that welfare maximization can be achieved in ex-post equilibrium with no significant communication cost, by using VCG payments. In contrast, we show that in both domains the communication complexity of any dominant strategy implementation that achieves the optimal welfare is polynomial in the input size. We then move on to studying the approximation ratios achievable by dominant strategy mechanisms. For multi-unit auctions with decreasing marginal values, we provide a dominant-strategy communication FPTAS. For combinatorial auctions with general valuations, we show that no dominant strategy mechanism that uses poly(m, n) bits of communication achieves an approximation ratio better than m^{1−ε}, where m is the number of items and n is the number of bidders. In contrast, a randomized dominant strategy mechanism that achieves an O(√m) approximation with poly(m, n) communication is known. This establishes the first gap between computationally efficient deterministic dominant strategy mechanisms and randomized ones. En route, we answer an open question on the communication cost of implementing dominant strategy mechanisms for more than two players, and also solve some open problems in the area of simultaneous combinatorial auctions.
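As background for the first result: with decreasing marginal values, the welfare-maximizing allocation of k identical units is computed by a standard greedy rule that repeatedly awards the next unit to the bidder with the highest remaining marginal value; charging VCG payments then makes truthful bidding an ex-post equilibrium. Below is a minimal Python sketch of that standard pipeline, assuming marginal values are handed over explicitly as lists (so it sidesteps the communication model the paper actually studies):

```python
import heapq

def greedy_allocation(marginals, k):
    """Welfare-maximizing allocation of k identical units when every bidder
    has weakly decreasing marginal values. marginals[i][j] is bidder i's
    marginal value for receiving a (j+1)-th unit."""
    heap = [(-m[0], i, 0) for i, m in enumerate(marginals) if m]
    heapq.heapify(heap)  # max-heap (via negation) over next marginal values
    alloc, welfare = [0] * len(marginals), 0
    for _ in range(k):
        if not heap or -heap[0][0] <= 0:
            break  # no remaining unit has positive marginal value
        neg_v, i, j = heapq.heappop(heap)
        alloc[i] += 1
        welfare -= neg_v
        if j + 1 < len(marginals[i]):
            heapq.heappush(heap, (-marginals[i][j + 1], i, j + 1))
    return alloc, welfare

def vcg_payment(marginals, k, i):
    """VCG: bidder i pays the welfare loss her presence imposes on the rest."""
    alloc, w_all = greedy_allocation(marginals, k)
    v_i = sum(marginals[i][:alloc[i]])
    _, w_others = greedy_allocation(marginals[:i] + marginals[i + 1:], k)
    return w_others - (w_all - v_i)

# toy run: 2 units, two bidders with marginal value lists [9, 4, 1] and [7, 6]
print(greedy_allocation([[9, 4, 1], [7, 6]], 2))  # ([1, 1], 16)
print(vcg_payment([[9, 4, 1], [7, 6]], 2, 0))     # 6: bidder 0 displaces b1's 6
```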
Award ID(s): 2127781
NSF-PAR ID: 10352728
Author(s) / Creator(s): ; ;
Editor(s): Stefano Leonardi
Date Published:
Journal Name: STOC 2022: Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing
Page Range / eLocation ID: 690–703
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We consider the following communication problem: Alice and Bob each have valuation functions $v_1(\cdot)$ and $v_2(\cdot)$ over subsets of $m$ items, and their goal is to partition the items into $S, \bar{S}$ in a way that maximizes the welfare, $v_1(S) + v_2(\bar{S})$. We study both the allocation problem, which asks for a welfare-maximizing partition, and the decision problem, which asks whether or not there exists a partition guaranteeing certain welfare, for binary XOS valuations. For interactive protocols with $\mathrm{poly}(m)$ communication, a tight $3/4$-approximation is known for both [Fei06, DS06]. For interactive protocols, the allocation problem is provably harder than the decision problem: any solution to the allocation problem implies a solution to the decision problem with one additional round and $\log m$ additional bits of communication, via a trivial reduction. Surprisingly, the allocation problem is provably easier for simultaneous protocols. Specifically, we show: 1) There exists a simultaneous, randomized protocol with polynomial communication that selects a partition whose expected welfare is at least $3/4$ of the optimum. This matches the guarantee of the best interactive, randomized protocol with polynomial communication. 2) For all $\varepsilon > 0$, any simultaneous, randomized protocol that decides whether the welfare of the optimal partition is $\geq 1$ or $\leq 3/4 - 1/108 + \varepsilon$ correctly with probability $> 1/2 + 1/\mathrm{poly}(m)$ requires exponential communication. This provides a separation between the attainable approximation guarantees via interactive ($3/4$) versus simultaneous ($\leq 3/4 - 1/108$) protocols with polynomial communication. In other words, the trivial reduction from the decision problem to the allocation problem provably requires the extra round of communication. We further discuss the implications of our results for the design of truthful combinatorial auctions in general, and extensions to general XOS valuations. In particular, our protocol for the allocation problem implies a new style of truthful mechanisms.
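For intuition about the valuation class (not about the protocols themselves), here is a small Python sketch of binary XOS valuations, where a bundle's value is the maximum over "clauses" (item subsets) of the number of owned items in the clause, together with a brute-force computation of the optimal two-player partition; the clause representation and the exponential enumeration are purely illustrative and ignore the communication constraints the results are about:

```python
from itertools import combinations

def binary_xos(clauses):
    """Binary XOS valuation: each clause T is a set of items, each worth 1;
    the value of a bundle S is the maximum over clauses of |S ∩ T|."""
    return lambda S: max((len(S & T) for T in clauses), default=0)

def optimal_welfare(v1, v2, m):
    """Brute force over all 2^m partitions (S to Alice, the rest to Bob)."""
    items = frozenset(range(m))
    return max(v1(frozenset(S)) + v2(items - frozenset(S))
               for r in range(m + 1) for S in combinations(items, r))

# toy instance with m = 3 items
v1 = binary_xos([frozenset({0, 1})])
v2 = binary_xos([frozenset({1, 2}), frozenset({0})])
print(optimal_welfare(v1, v2, 3))  # 3: e.g. {0, 1} to Alice, {2} to Bob
```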
  2. We study combinatorial auctions with interdependent valuations, where each agent $i$ has a private signal $s_i$ that captures her private information, and the valuation function of every agent depends on the entire signal profile $s = (s_1, \ldots, s_n)$. The literature in economics shows that the interdependent model gives rise to strong impossibility results and identifies assumptions under which optimal solutions can be attained. The computer science literature provides approximation results for simple single-parameter settings (mostly single-item auctions or matroid feasibility constraints). Both bodies of literature focus largely on valuations satisfying a technical condition termed single crossing (or variants thereof). We consider the class of submodular over signals (SOS) valuations (without imposing any single-crossing-type assumption) and provide the first welfare approximation guarantees for multidimensional combinatorial auctions achieved by universally ex post incentive-compatible, individually rational mechanisms. Our main results are (i) a 4-approximation for any single-parameter downward-closed setting with single-dimensional signals and SOS valuations; (ii) a 4-approximation for any combinatorial auction with multidimensional signals and separable-SOS valuations; and (iii) a $(k + 3)$- and a $(2\log(k) + 4)$-approximation for any combinatorial auction with single-dimensional signals and a $k$-sized signal space, for SOS and strong-SOS valuations, respectively. All of our results extend to a parameterized version of SOS, $d$-approximate SOS, while losing a factor that depends on $d$.
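As a concrete illustration of the model (our example, not the paper's): with single-dimensional signals, the simplest interdependent valuations are linear in the signals, as in the LaTeX snippet below.

```latex
% Illustrative linear interdependent valuation (not from the paper):
v_i(s_1, \dots, s_n) = s_i + \beta \sum_{j \neq i} s_j
% Being additive across signals, v_i is trivially submodular over
% signals, i.e., SOS. The single-crossing condition, by contrast, asks
% that agent i's valuation react at least as strongly to her own signal
% s_i as any other agent's valuation does, which here holds only when
% \beta \le 1.
```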

    Funding: A. Eden was partially supported by NSF Award IIS-2007887, the European Research Council (ERC) under the European Union's Seventh Framework Programme [FP7/2007-2013]/ERC Grant Agreement 337122, by the Israel Science Foundation [Grant 317/17], and by an Amazon research award. M. Feldman received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program [Grant Agreement 866132], by the Israel Science Foundation [Grant 317/17], by an Amazon research award, and by the NSF-BSF [Grant 2020788]. The work of K. Goldner was supported partially by NSF awards DMS-1903037 and CNS-2228610 and a Shibulal Family Career Development Professorship. A. R. Karlin was supported by the NSF-CCF [Grant 1813135].
  3. Vidick, T. (Ed.)
    We study auctions for carbon licenses, a policy tool used to control the social cost of pollution. Each identical license grants the right to produce a unit of pollution. Each buyer (i.e., a firm that pollutes during the manufacturing process) enjoys a decreasing marginal value for licenses, but society suffers an increasing marginal cost for each license distributed. The seller (i.e., the government) can choose the number of licenses to put up for auction, and wishes to maximize societal welfare: the total economic value of the buyers minus the social cost. Motivated by emission license markets deployed in practice, we focus on uniform price auctions with a price floor and/or price ceiling. The seller has distributional information about the market, and their goal is to tune the auction parameters to maximize expected welfare. The target benchmark is the maximum expected welfare achievable by any such auction under truth-telling behavior. Unfortunately, the uniform price auction is not truthful, and strategic behavior can significantly reduce (even below zero) the welfare of a given auction configuration. We describe a subclass of "safe-price" auctions for which the welfare at any Bayes-Nash equilibrium will approximate the welfare under truth-telling behavior. We then show that the better of a safe-price auction and a truthful auction that allocates licenses to only a single buyer will approximate the target benchmark. In particular, we show how to choose a number of licenses and a price floor so that the worst-case welfare, at any equilibrium, is a constant approximation to the best achievable welfare under truth-telling after excluding the welfare contribution of a single buyer.
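Under truth-telling, the welfare-optimal policy has a simple greedy form: keep issuing licenses while the highest remaining marginal buyer value exceeds the next marginal social cost. A short Python sketch under those assumptions (the explicit marginal-value lists and cost function are illustrative inputs, not the paper's Bayesian model):

```python
import heapq

def optimal_license_count(buyer_marginals, social_marginal_cost):
    """Welfare = total buyer value minus social cost. With decreasing marginal
    buyer values and increasing marginal social cost, issue licenses greedily
    while the best remaining marginal value exceeds the next marginal cost.
    buyer_marginals[i][j]: buyer i's (decreasing) value for license j+1.
    social_marginal_cost(k): (increasing) cost of the k-th license issued."""
    heap = [(-m[0], i, 0) for i, m in enumerate(buyer_marginals) if m]
    heapq.heapify(heap)
    k, welfare = 0, 0.0
    while heap:
        neg_v, i, j = heapq.heappop(heap)
        v, c = -neg_v, social_marginal_cost(k + 1)
        if v <= c:
            break  # the next license would lower welfare
        k += 1
        welfare += v - c
        if j + 1 < len(buyer_marginals[i]):
            heapq.heappush(heap, (-buyer_marginals[i][j + 1], i, j + 1))
    return k, welfare

# toy run: two buyers, quadratic social cost -> issues 2 licenses, welfare 13
print(optimal_license_count([[10, 6, 2], [8, 3]], lambda k: k * k))
```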
  4. We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the buyer's budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of very appealing properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature: whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition, we provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, the setting that has been the focus of most of the prior work.
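The abstract does not spell out the mechanisms, but the descending clock paradigm it builds on has a simple skeleton, sketched below in Python: each remaining seller faces a personalized price that only descends, sellers exit irrevocably once their price falls below their private cost (which is what yields obvious strategyproofness), and the auction stops once paying all survivors their current prices fits the budget. The uniform price-lowering rule here is a placeholder assumption, not the paper's rule:

```python
def descending_clock_auction(accepts, n, budget, start_price, step):
    """Schematic descending clock auction for budget-feasible procurement.
    accepts(i, p) -> True iff seller i stays active at price p (cost <= p).
    Prices only descend and exit is irrevocable; winners are paid their
    final clock prices, whose sum is at most the budget."""
    active = set(range(n))
    price = {i: start_price for i in active}
    while active and sum(price[i] for i in active) > budget:
        # lower the clock of the currently most expensive active seller
        i = max(active, key=lambda j: price[j])
        price[i] -= step
        if not accepts(i, price[i]):  # price fell below i's cost: she exits
            active.discard(i)
    return active, {i: price[i] for i in active}

# toy run: three sellers with private costs 2, 5, 9 and a budget of 8
costs = [2, 5, 9]
winners, payments = descending_clock_auction(
    lambda i, p: p >= costs[i], n=3, budget=8, start_price=10, step=1)
print(winners, payments)
```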
  5. We give two new quantum algorithms for solving semidefinite programs (SDPs), providing quantum speed-ups. We consider SDP instances with $m$ constraint matrices, each of dimension $n$, rank at most $r$, and sparsity $s$. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time $\tilde{O}(s^2 (\sqrt{m}\,\epsilon^{-10} + \sqrt{n}\,\epsilon^{-12}))$, with $\epsilon$ the error of the solution. This gives an optimal dependence in terms of $m$, $n$ and a quadratic improvement over previous quantum algorithms (when $m \approx n$). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is $\tilde{O}(\sqrt{m} + \mathrm{poly}(r)) \cdot \mathrm{poly}(\log m, \log n, B, \epsilon^{-1})$, with $B$ an upper bound on the trace-norm of all input matrices. In particular the complexity depends only poly-logarithmically on $n$ and polynomially on $r$. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: given $m$ measurements and a supply of copies of an unknown state $\rho$ with rank at most $r$, we show we can find in time $\sqrt{m} \cdot \mathrm{poly}(\log m, \log n, r, \epsilon^{-1})$ a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as $\rho$ on the $m$ measurements, up to error $\epsilon$. The density matrix obtained is an approximation to the maximum-entropy state consistent with the measurement data, as considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a poly-logarithmic dependence on the dimension, which is based on ideas developed in quantum principal component analysis. We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.
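The classical scaffold being "quantized" is the matrix multiplicative weight update method: maintain a Gibbs (maximum-entropy) density matrix, search for a violated constraint, and fold it into the accumulated Hamiltonian. A hedged numpy sketch of that classical loop for SDP feasibility (the step size and iteration count are illustrative, not the tuned parameters from the papers in this line of work):

```python
import numpy as np
from scipy.linalg import expm

def mmw_sdp_feasibility(As, bs, eps, T):
    """Classical matrix multiplicative weights loop for SDP feasibility:
    find a density matrix rho with Tr(A_j rho) <= b_j + eps for all j.
    The quantum algorithms speed this up by, among other things, preparing
    the Gibbs state rho on a quantum computer instead of via expm."""
    n = As[0].shape[0]
    eta = eps / 2.0                          # illustrative step size
    H = np.zeros((n, n))                     # accumulated loss Hamiltonian
    for _ in range(T):
        G = expm(-eta * H)
        rho = G / np.trace(G)                # Gibbs / maximum-entropy state
        violated = next((j for j, (A, b) in enumerate(zip(As, bs))
                         if np.trace(A @ rho) > b + eps), None)
        if violated is None:
            return rho                       # eps-feasible point found
        H = H + As[violated]                 # penalize the violated direction
    return None                              # give up after T iterations
```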