

Title: Metric Distortion of Social Choice Rules: Lower Bounds and Fairness Properties
We study social choice rules under the utilitarian distortion framework, with an additional metric assumption on the agents' costs over the alternatives. In this approach, the costs are given by an underlying metric on the set of agents and alternatives together. Social choice rules have access only to the ordinal preferences of the agents, not to the latent cardinal costs that induce them. Distortion is then defined as the ratio between the social cost (typically the sum of agent costs) of the alternative chosen by the mechanism at hand and that of the optimal alternative chosen by an omniscient algorithm. The worst-case distortion of a social choice rule is therefore a measure of how close it always gets to the optimal alternative without any knowledge of the underlying costs. Under this model, it has been conjectured that Ranked Pairs, the well-known weighted-tournament rule, achieves a distortion of at most 3 (Anshelevich et al. 2015). We disprove this conjecture by constructing a sequence of instances showing that the worst-case distortion of Ranked Pairs is at least 5. This lower bound matches a previously known upper bound for the Copeland rule, proving that in the worst case the simpler Copeland rule is at least as good as Ranked Pairs. Moreover, as long as we are limited to (weighted or unweighted) tournament rules, we show that randomization cannot help achieve an expected worst-case distortion of less than 3. Using the concept of approximate majorization within the distortion framework, we prove that Copeland and Randomized Dictatorship achieve low constant-factor fairness ratios (5 and 3, respectively), considerably generalizing similar results for the sum-of-costs and single-largest-cost objectives. Finally, we outline several interesting directions for further research in this space.
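For concreteness, the sketch below (ours, not code from the paper) shows the two ingredients the abstract combines: the Copeland winner computed from ordinal rankings alone, and the sum-of-costs distortion of a chosen alternative when the metric costs are fully known. The input encodings and the lowest-index tie-breaking in Copeland are assumptions made for the example.

```python
# Illustrative sketch, not from the paper. Alternatives are 0..m-1; each
# ranking lists alternatives from most to least preferred; costs[i][a] is
# agent i's (metric) cost for alternative a.

def copeland_winner(rankings, m):
    # Copeland score of a = number of alternatives that a beats in a strict
    # pairwise majority; ties are broken by lowest index (an assumption).
    pos = [{alt: rank for rank, alt in enumerate(r)} for r in rankings]
    def beats(a, b):
        return sum(1 for p in pos if p[a] < p[b]) > len(rankings) / 2
    scores = [sum(beats(a, b) for b in range(m) if b != a) for a in range(m)]
    return max(range(m), key=lambda a: scores[a])

def distortion(costs, chosen):
    # Ratio of the chosen alternative's social cost to the optimal one's.
    social = lambda a: sum(row[a] for row in costs)
    return social(chosen) / min(social(a) for a in range(len(costs[0])))
```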
Award ID(s):
1637397
NSF-PAR ID:
10026403
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
EC '17 Proceedings of the 2017 ACM Conference on Economics and Computation
Page Range / eLocation ID:
287 to 304
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We study the problem of designing voting rules that take as input the ordinal preferences of n agents over a set of m alternatives and output a single alternative, aiming to optimize the overall happiness of the agents. The input to the voting rule is each agent’s ranking of the alternatives from most to least preferred, yet the agents have more refined (cardinal) preferences that capture the intensity with which they prefer one alternative over another. To quantify the extent to which voting rules can optimize over the cardinal preferences given access only to the ordinal ones, prior work has used the distortion measure, i.e., the worst-case approximation ratio between a voting rule’s performance and the best performance achievable given the cardinal preferences. The work on the distortion of voting rules has been largely divided into two “worlds”: utilitarian distortion and metric distortion. In the former, the cardinal preferences of the agents correspond to general utilities and the goal is to maximize a normalized social welfare. In the latter, the agents’ cardinal preferences correspond to costs given by distances in an underlying metric space and the goal is to minimize the (unnormalized) social cost. Several deterministic and randomized voting rules have been proposed and evaluated for each of these worlds separately, gradually improving the achievable distortion bounds, but none of the known voting rules perform well in both worlds simultaneously. In this work, we prove that one can in fact achieve the “best of both worlds” by designing new voting rules, both deterministic and randomized, that simultaneously achieve near-optimal distortion guarantees in both distortion worlds. We also prove that this positive result does not generalize to the case where the voting rule is provided with the rankings of only the top-t alternatives of each agent, for t < m, and study the extent to which such best-of-both-worlds guarantees can be achieved. 
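In one common formalization (the notation here is an assumption, not quoted from either line of work), the two worlds contrasted above evaluate a rule $f$ applied to the ordinal profile $\sigma$ induced by the cardinal preferences:

```latex
% Utilitarian world: utilities u_i, maximize social welfare.
\mathrm{dist}_{\mathrm{util}}(f) = \sup_{\vec{u}}
  \frac{\max_{a} \sum_{i} u_i(a)}{\sum_{i} u_i\!\left(f(\sigma(\vec{u}))\right)}

% Metric world: distances d(i, a), minimize social cost.
\mathrm{dist}_{\mathrm{metric}}(f) = \sup_{d}
  \frac{\sum_{i} d\!\left(i, f(\sigma(d))\right)}{\min_{a} \sum_{i} d(i, a)}
```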
  2. We study social choice mechanisms in an implicit utilitarian framework with a metric constraint, where the goal is to minimize Distortion, the worst-case social cost of an ordinal mechanism relative to underlying cardinal utilities. We consider two additional desiderata: Constant sample complexity and Squared Distortion. Constant sample complexity means that the mechanism (potentially randomized) only uses a constant number of ordinal queries regardless of the number of voters and alternatives. Squared Distortion is a measure of the variance of the Distortion of a randomized mechanism. Our primary contribution is the first social choice mechanism with constant sample complexity and constant Squared Distortion (which also implies constant Distortion). We call the mechanism Random Referee, because it uses a random agent to compare two alternatives that are the favorites of two other random agents. We prove that the use of a comparison query is necessary: no mechanism that only elicits the top-k preferred alternatives of voters (for constant k) can have Squared Distortion that is sublinear in the number of alternatives. We also prove that, unlike any top-k only mechanism, the Distortion of Random Referee meaningfully improves on benign metric spaces, using the Euclidean plane as a canonical example. Finally, among top-1 only mechanisms, we introduce Random Oligarchy. The mechanism asks just 3 queries and is essentially optimal among the class of such mechanisms with respect to Distortion. In summary, we demonstrate the surprising power of constant sample complexity mechanisms generally, and just three random voters in particular, to provide some of the best known results in the implicit utilitarian framework.
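A minimal sketch of the Random Referee idea as described above (ours, under assumptions: at least three voters, sampling without replacement, and a voter's favorite meaning the top of its ranking):

```python
import random

def random_referee(rankings):
    # rankings: per-agent lists of alternatives, most to least preferred.
    # Two random voters each nominate their favorite; a third random voter
    # (the referee) picks whichever nominee it ranks higher.
    i, j, k = random.sample(range(len(rankings)), 3)
    a, b = rankings[i][0], rankings[j][0]
    return a if rankings[k].index(a) < rankings[k].index(b) else b
```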
  3. Suppose that we have $n$ agents and $n$ items which lie in a shared metric space. We would like to match the agents to items such that the total distance from agents to their matched items is as small as possible. However, instead of having direct access to distances in the metric, we only have each agent's ranking of the items in order of distance. Given this limited information, what is the minimum possible worst-case approximation ratio (known as the distortion) that a matching mechanism can guarantee? Previous work [CFRF+16] proved that the (deterministic) Serial Dictatorship mechanism has distortion at most $2^n - 1$. We improve this by providing a simple deterministic mechanism that has distortion $O(n^2)$. We also provide the first nontrivial lower bound on this problem, showing that any matching mechanism (deterministic or randomized) must have worst-case distortion $\Omega(\log n)$. In addition to these new bounds, we show that a large class of truthful mechanisms derived from Deferred Acceptance all have worst-case distortion at least $2^n - 1$, and we find an intriguing connection between thin matchings (analogous to the well-known thin trees conjecture) and the distortion gap between deterministic and randomized mechanisms.
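For reference, the Serial Dictatorship baseline mentioned above has a very short implementation; this sketch (ours) fixes an arbitrary agent order and is not the paper's $O(n^2)$-distortion mechanism:

```python
def serial_dictatorship(rankings):
    # rankings[i] lists the items in agent i's order of increasing distance.
    # Agents pick in a fixed order, each taking its closest remaining item.
    taken, matching = set(), {}
    for agent, ranking in enumerate(rankings):
        item = next(it for it in ranking if it not in taken)
        taken.add(item)
        matching[agent] = item
    return matching
```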
  4. We consider a social choice setting with agents that are partitioned into disjoint groups, and have metric preferences over a set of alternatives. Our goal is to choose a single alternative aiming to optimize various objectives that are functions of the distances between agents and alternatives in the metric space, under the constraint that this choice must be made in a distributed way: The preferences of the agents within each group are first aggregated into a representative alternative for the group, and then these group representatives are aggregated into the final winner. Deciding the winner in such a way naturally leads to loss of efficiency, even when complete information about the metric space is available. We provide a series of (mostly tight) bounds on the distortion of distributed mechanisms for variations of well-known objectives, such as the (average) total cost and the maximum cost, and also for new objectives that are particularly appropriate for this distributed setting and have not been studied before. 
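The two-stage template can be made concrete with a simple instantiation; plurality at both stages is our illustrative choice here, while the paper analyzes a range of local and global rules and objectives:

```python
from collections import Counter

def two_stage_plurality(groups):
    # groups: list of groups, each a list of per-agent rankings
    # (most preferred alternative first).
    # Stage 1: each group elects a representative alternative by plurality.
    reps = [Counter(r[0] for r in group).most_common(1)[0][0]
            for group in groups]
    # Stage 2: the winner is the plurality choice among the group
    # representatives, counting each group once.
    return Counter(reps).most_common(1)[0][0]
```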
  5. Sometimes agents care not only about the outcomes of collective decisions but also about how decisions are made. Both the outcome and the procedure affect whether agents see a decision as legitimate or acceptable. We focus on incorporating agents’ preferences over decision-making processes into the process itself. Taking whole decisions, including decision rules and outcomes, to be the object of agent preferences rather than only decision outcomes, we (1) identify natural, plausible preference structures and key properties, (2) develop general mechanisms for aggregating these preferences to maximize the acceptability of decisions, and (3) analyze the performance of our acceptance-maximizing mechanisms. We apply our general approach to the setting of dichotomous choice, and compare the worst-case rates of acceptance achievable among populations of agents of different types. We include the special case of rule selection, or amendment, and show that amendment procedures proposed by Abramowitz et al. [2] achieve universal acceptance with certain agent types. 