
Title: Semi-random Impossibilities of Condorcet Criterion

The Condorcet criterion (CC) is a classical and well-accepted criterion for voting. Unfortunately, it is incompatible with many other desiderata, including participation (PAR), half-way monotonicity (HM), Maskin monotonicity (MM), and strategy-proofness (SP). Such incompatibilities are often known as impossibility theorems and are proved by worst-case analysis. Previous work has investigated the likelihood of these impossibilities occurring under certain models, which are often criticized for being unrealistic. We strengthen previous work by proving the first set of semi-random impossibilities for voting rules to satisfy CC and the more general, group versions of the four desiderata: for any sufficiently large number of voters n, any size of the group 1…
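To make the criterion concrete, here is a minimal Python sketch (an illustration, not code from the paper) that checks whether a profile of ranked ballots has a Condorcet winner; a voting rule satisfies CC if it elects that candidate whenever one exists. The candidate names and the example profile are invented.

```python
# Illustrative sketch: find the Condorcet winner of a profile, if one exists.
from itertools import combinations

def condorcet_winner(ballots):
    """ballots: list of rankings, each listing all candidates from most to least preferred."""
    candidates = ballots[0]
    beats = {c: set() for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(r.index(a) < r.index(b) for r in ballots)
        if a_over_b > len(ballots) / 2:
            beats[a].add(b)
        elif a_over_b < len(ballots) / 2:
            beats[b].add(a)
    for c in candidates:
        if len(beats[c]) == len(candidates) - 1:
            return c        # beats every other candidate in pairwise majority comparisons
    return None             # no Condorcet winner, e.g. a majority cycle

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(condorcet_winner(profile))  # -> a
```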

 
Award ID(s): 2007994, 2106983
NSF-PAR ID: 10466782
Publisher / Repository: AAAI
Journal Name: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 37
Issue: 5
ISSN: 2159-5399
Page Range / eLocation ID: 5867 to 5875
Sponsoring Org: National Science Foundation
More Like this
  1. We consider the problem of allocating divisible items among multiple agents, in a setting where any agent is allowed to introduce diversity constraints on the items they are allocated. We motivate this via settings where the items themselves correspond to user ad slots or task workers with attributes such as race and gender on which the principal seeks to achieve demographic parity. We consider the following question: When an agent introduces diversity constraints into an allocation rule, is the allocation of other agents hurt significantly? If this happens, the cost of introducing such constraints is disproportionately borne by agents who do not benefit from diversity. We codify this via two desiderata capturing robustness: no negative externality (other agents are not hurt) and monotonicity (the agent enforcing the constraint does not see a large increase in value). We show in a formal sense that the Nash Welfare rule, which maximizes the product of agent values, is uniquely positioned to be robust when diversity constraints are introduced, while almost all other natural allocation rules fail this criterion. We also show that the guarantees achieved by Nash Welfare are nearly optimal within a widely studied class of allocation rules. We finally perform an empirical simulation on real-world data that models ad allocations to show that this gap between Nash Welfare and other rules persists in the wild.
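A rough illustration of the rule discussed above (my own sketch, not the paper's code): a Nash Welfare allocation of divisible items can be computed from the Eisenberg-Gale convex program, i.e. by maximizing the sum of log utilities. The valuation matrix is made up and the diversity constraints studied in the paper are omitted.

```python
# Illustrative sketch: maximize the product of agent utilities (Nash Welfare)
# for divisible items with additive valuations.
import numpy as np
from scipy.optimize import minimize

V = np.array([[3.0, 1.0, 2.0],    # agent 0's value for items 0..2 (made up)
              [1.0, 4.0, 1.0]])   # agent 1's value for items 0..2 (made up)
n_agents, n_items = V.shape

def neg_log_nash(x):
    X = x.reshape(n_agents, n_items)        # X[i, j] = fraction of item j given to agent i
    utils = (X * V).sum(axis=1)
    return -np.sum(np.log(utils + 1e-12))   # maximizing the product <=> maximizing the sum of logs

# Each item must be fully divided among the agents; fractions are nonnegative.
constraints = [{"type": "eq",
                "fun": lambda x, j=j: x.reshape(n_agents, n_items)[:, j].sum() - 1.0}
               for j in range(n_items)]
bounds = [(0.0, 1.0)] * (n_agents * n_items)
x0 = np.full(n_agents * n_items, 1.0 / n_agents)

res = minimize(neg_log_nash, x0, bounds=bounds, constraints=constraints)
print(res.x.reshape(n_agents, n_items).round(3))   # fractional allocation matrix
```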
  2. Robustness to distribution shift and fairness have independently emerged as two important desiderata required of modern machine learning models. While these two desiderata seem related, the connection between them is often unclear in practice. Here, we discuss these connections through a causal lens, focusing on anti-causal prediction tasks, where the input to a classifier (e.g., an image) is assumed to be generated as a function of the target label and the protected attribute. By taking this perspective, we draw explicit connections between a common fairness criterion, separation, and a common notion of robustness, risk invariance. These connections provide new motivation for applying the separation criterion in anti-causal settings, and inform old discussions regarding fairness-performance tradeoffs. In addition, our findings suggest that robustness-motivated approaches can be used to enforce separation, and that they often work better in practice than methods designed to directly enforce separation. Using a medical dataset, we empirically validate our findings on the task of detecting pneumonia from X-rays, in a setting where differences in prevalence across sex groups motivate a fairness mitigation. Our findings highlight the importance of considering causal structure when choosing and enforcing fairness criteria.
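The separation criterion discussed above requires equal true-positive and false-positive rates across protected groups. A minimal sketch, with invented arrays, that measures how far a classifier is from satisfying it:

```python
# Illustrative sketch: absolute TPR and FPR gaps between two protected groups.
import numpy as np

def separation_gaps(y_true, y_pred, group):
    """Return (|TPR gap|, |FPR gap|) between groups 0 and 1; (0, 0) means separation holds."""
    rates = {}
    for g in (0, 1):
        mask = group == g
        tpr = y_pred[mask & (y_true == 1)].mean()
        fpr = y_pred[mask & (y_true == 0)].mean()
        rates[g] = (tpr, fpr)
    return abs(rates[0][0] - rates[1][0]), abs(rates[0][1] - rates[1][1])

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # made-up labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])   # made-up hard predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # made-up protected attribute
print(separation_gaps(y_true, y_pred, group))
```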
  3. Networked observational data presents new opportunities for learning individual causal effects, which plays an indispensable role in decision making. Such data poses the challenge of confounding bias. Previous work presents two desiderata for handling confounding bias: on the treatment-group level, we aim to balance the distributions of confounder representations; on the individual level, it is desirable to capture patterns of hidden confounders that predict treatment assignments. Existing methods show the potential of utilizing network information to handle confounding bias, but they only try to satisfy one of the two desiderata, because the two seem to contradict each other: when the two distributions of confounder representations overlap heavily, it becomes difficult to discriminate between the treated and control groups. In this work, we formulate the two desiderata as a minimax game. We propose IGNITE, which learns representations of confounders from networked observational data and is trained via a minimax game to achieve both desiderata. Experiments verify the efficacy of IGNITE on two datasets under various settings.

     
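A heavily simplified sketch of how the two desiderata can be cast as competing objectives on a learned confounder representation; this is my own illustration under assumed shapes and networks, not the IGNITE implementation (which also uses network information).

```python
# Illustrative sketch: balance treated/control representations while an adversary
# tries to predict treatment assignment from the same representation.
import torch
import torch.nn as nn

torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 8))
treat_clf = nn.Linear(8, 1)                # predicts treatment from the representation

def balance_loss(z, t):
    # Mean discrepancy between treated and control representations
    # (a stand-in for stronger distribution-balancing penalties).
    return (z[t == 1].mean(dim=0) - z[t == 0].mean(dim=0)).pow(2).sum()

x = torch.randn(32, 10)                    # made-up covariates
t = torch.tensor([0.0, 1.0] * 16)          # made-up treatment assignments

z = encoder(x)                             # confounder representations
clf_loss = nn.functional.binary_cross_entropy_with_logits(treat_clf(z).squeeze(-1), t)

# Minimax flavor: the classifier minimizes clf_loss, while the encoder minimizes the
# balance term and maximizes clf_loss (e.g. via gradient reversal), so that the
# representation is balanced across groups yet still reflects treatment assignment patterns.
encoder_objective = balance_loss(z, t) - clf_loss
print(float(encoder_objective))
```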
  4. In this work, we consider the sample complexity required for testing the monotonicity of distributions over partial orders. A distribution p over a poset is monotone if, for any pair of domain elements x and y such that x ⪯ y, p(x) ≤ p(y). To understand the sample complexity of this problem, we introduce a new property called bigness over a finite domain, where the distribution is T-big if the minimum probability for any domain element is at least T. We establish a lower bound of Ω(n/log n) for testing bigness of distributions on domains of size n. We then build on these lower bounds to give Ω(n/log n) lower bounds for testing monotonicity over a matching poset of size n and significantly improved lower bounds over the hypercube poset. We give sublinear sample complexity bounds for testing bigness and for testing monotonicity over the matching poset. We then give a number of tools for analyzing upper bounds on the sample complexity of the monotonicity testing problem. The previous lower bound for testing monotonicity of…
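A small sketch, using an invented distribution and poset, of the two properties defined above: monotonicity with respect to the partial order ⪯ and T-bigness.

```python
# Illustrative sketch: check monotonicity and T-bigness of an explicit distribution.
def is_monotone(p, leq):
    """p: dict element -> probability; leq: set of pairs (x, y) with x ⪯ y."""
    return all(p[x] <= p[y] for x, y in leq)

def is_big(p, T):
    """T-big: every domain element has probability at least T."""
    return all(prob >= T for prob in p.values())

# Example: the 2-bit hypercube poset {00, 01, 10, 11}, ordered bitwise.
p = {"00": 0.1, "01": 0.2, "10": 0.3, "11": 0.4}
leq = {("00", "01"), ("00", "10"), ("01", "11"), ("10", "11"), ("00", "11")}
print(is_monotone(p, leq), is_big(p, T=0.1))   # True True
```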
  5. Abstract

    Estimates of population characteristics such as domain means are often expected to follow monotonicity assumptions. Recently, a method to adaptively pool neighbouring domains was proposed, which ensures that the resulting domain mean estimates follow monotone constraints. The method leads to asymptotically valid estimation and inference, and can lead to substantial improvements in efficiency, in comparison with unconstrained domain estimators. However, assuming incorrect shape constraints may lead to biased estimators. Here, we develop the Cone Information Criterion for Survey Data as a diagnostic method to measure monotonicity departures on population domain means. We show that the criterion leads to a consistent methodology that makes an asymptotically correct decision choosing between unconstrained and constrained domain mean estimators. The Canadian Journal of Statistics 47: 315–331; 2019 © 2019 Statistical Society of Canada

     
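A minimal sketch (not the paper's estimator) of one way to obtain a constrained estimate of this kind: pooling adjacent violators via weighted isotonic regression so that the estimated domain means are non-decreasing. The domain means and weights below are made up, and scikit-learn's IsotonicRegression is used purely for illustration.

```python
# Illustrative sketch: monotone-constrained domain means via weighted isotonic regression.
import numpy as np
from sklearn.isotonic import IsotonicRegression

domains = np.arange(6)
unconstrained = np.array([1.0, 1.4, 1.3, 2.1, 2.0, 2.6])   # raw (unconstrained) domain means
weights = np.array([50, 40, 60, 30, 45, 55])                # e.g. domain sample sizes

constrained = IsotonicRegression().fit_transform(domains, unconstrained, sample_weight=weights)
print(constrained)   # non-decreasing across domains; violating neighbours are pooled
```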