In social choice, traditional Kemeny rank aggregation combines the preferences of voters, expressed as rankings, into a single consensus ranking without considering how this ranking may unfairly affect marginalized groups (e.g., racial or gender groups). Developing fair rank aggregation methods is critical due to their societal influence in applications such as prioritizing job applicants, funding proposals, and scheduling medical patients. In this work, we introduce the Fair Exposure Kemeny Aggregation Problem (FairExp-kap) for combining vast and diverse voter preferences into a single ranking that is not only a suitable consensus, but also ensures opportunities are not withheld from marginalized groups. In formalizing FairExp-kap, we extend the fairness-of-exposure notion from information retrieval to the rank aggregation context and present a complementary metric for voter preference representation. We design algorithms for solving FairExp-kap that explicitly account for position bias, the common ranking-based concern that end-users pay more attention to higher-ranked candidates. epik solves FairExp-kap exactly by incorporating non-pairwise fairness of exposure into the pairwise Kemeny optimization, while the approximate epira is a candidate-swapping algorithm that guarantees ranked candidate fairness. Using comprehensive synthetic simulations and six real-world datasets, we demonstrate the efficacy of our approach, showing that it mitigates disparate group exposure unfairness in consensus rankings while maximally representing voter preferences.
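The abstract does not fix a concrete exposure model; as a hedged illustration of the quantities involved, the sketch below assumes the common logarithmic position-bias discount 1/log2(position + 1) for exposure and counts pairwise disagreements for the Kemeny objective. The function names and the two-group exposure gap are illustrative assumptions, not the paper's FairExp-kap criteria or the epik/epira algorithms.

```python
from itertools import combinations
from math import log2

def exposure(ranking):
    """Position-bias weight per candidate: 1 / log2(position + 1), positions starting at 1."""
    return {c: 1.0 / log2(pos + 1) for pos, c in enumerate(ranking, start=1)}

def group_exposure_gap(ranking, groups):
    """Difference in average exposure between two groups of candidates (smaller is fairer)."""
    exp = exposure(ranking)
    means = [sum(exp[c] for c in g) / len(g) for g in groups]
    return abs(means[0] - means[1])

def kemeny_score(consensus, base_rankings):
    """Total pairwise disagreements between the consensus and all base rankings."""
    pos = {c: i for i, c in enumerate(consensus)}
    score = 0
    for r in base_rankings:
        rpos = {c: i for i, c in enumerate(r)}
        for a, b in combinations(consensus, 2):
            # disagree if the two rankings order the pair (a, b) differently
            if (pos[a] - pos[b]) * (rpos[a] - rpos[b]) < 0:
                score += 1
    return score

# toy example: 4 candidates in two groups, three voters
voters = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["a", "c", "b", "d"]]
consensus = ["a", "b", "c", "d"]
print(kemeny_score(consensus, voters))
print(group_exposure_gap(consensus, [{"a", "b"}, {"c", "d"}]))
```

A fair aggregation method would trade these two quantities off: a low Kemeny score keeps the consensus close to voter preferences, while a small exposure gap keeps attention balanced across groups.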
Lower Bounds on Kemeny Rank Aggregation with Non-Strict Rankings
Rank aggregation has many applications in computer science, operations research, and group decision-making. This paper introduces lower bounds on the Kemeny aggregation problem when the input rankings are non-strict (with and without ties). It generalizes some of the existing lower bounds for strict rankings to the case of non-strict rankings, and it proposes shortcuts for reducing the run time of these techniques. More specifically, we use Condorcet criterion variations and the Branch & Cut method to accelerate the lower bounding process.
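The paper's bounds target non-strict (tied and incomplete) rankings and use Condorcet-criterion variations with Branch & Cut; those techniques are not reproduced here. The sketch below only illustrates the basic pairwise lower bound for complete, strict rankings that such approaches refine: any consensus must order each pair one way, so it pays at least the minority disagreement count on that pair.

```python
from itertools import combinations

def pairwise_lower_bound(rankings):
    """
    Simple lower bound on the optimal Kemeny score for complete, strict rankings:
    every consensus orders each pair one way, so it incurs at least
    min(#voters preferring a over b, #voters preferring b over a) per pair.
    """
    candidates = rankings[0]
    pos = [{c: i for i, c in enumerate(r)} for r in rankings]
    bound = 0
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for p in pos if p[a] < p[b])
        bound += min(a_over_b, len(rankings) - a_over_b)
    return bound

rankings = [["a", "b", "c"], ["b", "a", "c"], ["c", "a", "b"]]
print(pairwise_lower_bound(rankings))  # lower bound on total pairwise disagreements
```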
- Award ID(s): 1850355
- PAR ID: 10315499
- Date Published:
- Journal Name: 2021 IEEE Symposium Series on Computational Intelligence
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Preference aggregation mechanisms help decision-makers combine diverse preference rankings produced by multiple voters into a single consensus ranking. Prior work has developed methods for aggregating multiple rankings into a fair consensus over the same set of candidates. Yet few real-world problems present themselves as such precisely formulated aggregation tasks with each voter fully ranking all candidates. Instead, preferences are often expressed as rankings over partial and even disjoint subsets of candidates. For instance, hiring committee members typically opt to rank their top choices instead of exhaustively ordering every single job applicant. However, the existing literature does not offer a framework for characterizing or ensuring group fairness in such partial preference aggregation tasks. Unlike fully ranked settings, partial preferences imply both a selection decision of whom to rank and an ordering decision of how to rank the selected candidates. Our work fills this gap by conceptualizing the open problem of fair partial preference aggregation. We introduce an impossibility result for fair selection from partial preferences and design a computational framework showing how we can navigate this obstacle. Inspired by Single Transferable Voting, our proposed solution PreFair produces consensus rankings that are fair in the selection of candidates and also in their relative ordering. Our experimental study demonstrates that PreFair achieves the best performance in this dual fairness objective compared to state-of-the-art alternatives adapted to this new problem, while still satisfying voter preferences.
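PreFair itself is not specified in this abstract. As a hedged illustration of what aggregating partial, possibly disjoint ballots involves, the toy sketch below sequentially selects the candidate with the most top-choice support among still-unselected candidates, an STV-flavored heuristic with no fairness constraint; the function name and the alphabetical tie-break are assumptions of this example.

```python
from collections import Counter

def sequential_consensus(partial_rankings):
    """
    Toy sequential aggregation over partial rankings: repeatedly pick the candidate
    with the most top-choice support among still-unselected candidates, then remove it.
    This couples a selection decision (who ends up ranked) with an ordering decision.
    """
    remaining = {c for r in partial_rankings for c in r}
    consensus = []
    while remaining:
        tops = Counter()
        for r in partial_rankings:
            for c in r:                      # first still-remaining candidate on this ballot
                if c in remaining:
                    tops[c] += 1
                    break
        if not tops:                         # no ballot mentions any remaining candidate
            break
        winner = max(sorted(tops), key=tops.get)   # deterministic alphabetical tie-break
        consensus.append(winner)
        remaining.remove(winner)
    return consensus

# three voters rank partial, partly disjoint subsets of candidates
ballots = [["a", "b"], ["b", "c", "d"], ["a", "d"]]
print(sequential_consensus(ballots))
```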
Combining the preferences of many rankers into one single consensus ranking is critical for consequential applications from hiring and admissions to lending. While group fairness has been extensively studied for classification, group fairness in rankings, and in particular in rank aggregation, remains in its infancy. Recent work introduced the concept of fair rank aggregation for combining rankings, but is restricted to the case where candidates have a single binary protected attribute, i.e., they fall into two groups only. Yet it remains an open problem how to create a consensus ranking that represents the preferences of all rankers while ensuring fair treatment for candidates with multiple protected attributes such as gender, race, and nationality. In this work, we are the first to define and solve this open Multi-attribute Fair Consensus Ranking (MFCR) problem. As a foundation, we design novel group fairness criteria for rankings, called MANI-Rank, ensuring fair treatment of groups defined by individual protected attributes and their intersection. Leveraging the MANI-Rank criteria, we develop a series of algorithms that for the first time tackle the MFCR problem. Our experimental study with a rich variety of consensus scenarios demonstrates that our MFCR methodology is the only approach to achieve both intersectional and protected attribute fairness while also representing the preferences expressed through many base rankings. Our real-world case study on merit scholarships illustrates the effectiveness of our MFCR methods in mitigating bias across multiple protected attributes and their intersections.
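The precise MANI-Rank criteria are defined in the paper and not reproduced here; the sketch below is only a generic pairwise-parity style check over intersectional groups (the 0.5 parity target, the max-deviation summary, and the attribute labels are illustrative assumptions).

```python
from itertools import combinations

def intersectional_parity_gap(ranking, attrs):
    """
    Illustrative parity check for a consensus ranking: candidates are grouped by the
    intersection of all their protected attributes, and for every pair of groups we
    measure how often one group's members are ranked above the other's.
    Returns the largest deviation from 0.5 (perfect pairwise parity).
    """
    pos = {c: i for i, c in enumerate(ranking)}
    groups = {}
    for c, a in attrs.items():
        groups.setdefault(tuple(a), []).append(c)
    worst = 0.0
    for g, h in combinations(groups.values(), 2):
        mixed = [(x, y) for x in g for y in h]
        above = sum(1 for x, y in mixed if pos[x] < pos[y])
        worst = max(worst, abs(above / len(mixed) - 0.5))
    return worst

ranking = ["a", "b", "c", "d"]
attrs = {"a": ("F", "Black"), "b": ("M", "White"), "c": ("F", "White"), "d": ("M", "Black")}
print(intersectional_parity_gap(ranking, attrs))
```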
Rank aggregation is widely used in group decision making and many other applications, where it is of interest to consolidate heterogeneous ordered lists. Oftentimes, these rankings may involve a large number of alternatives, contain ties, and/or be incomplete, all of which complicate the use of robust aggregation methods. In particular, these characteristics have limited the applicability of the aggregation framework based on the Kemeny-Snell distance, which satisfies key social choice properties that have been shown to engender improved decisions. This work introduces a binary programming formulation for the generalized Kemeny rank aggregation problem, whose ranking inputs may be complete or incomplete, with and without ties. Moreover, it leverages the equivalence of two ranking aggregation problems, namely minimizing the Kemeny-Snell distance and maximizing the Kendall-τ correlation, to compare the newly introduced binary programming formulation to a modified version of an existing integer programming formulation associated with the Kendall-τ distance. The new formulation has fewer variables and constraints, which leads to faster solution times. Moreover, we develop a new social choice property, the nonstrict extended Condorcet criterion, which can be regarded as a natural extension of the well-known Condorcet criterion and the extended Condorcet criterion. Unlike its parent properties, the new property is adequate for handling complete rankings with ties. The property is leveraged to develop a structural decomposition algorithm, through which certain large instances of the NP-hard Kemeny rank aggregation problem can be solved exactly in a practical amount of time. To test the practical implications of the new formulation and social choice property, we work with instances constructed from a probabilistic distribution and with benchmark instances from PrefLib, a library of preference data.
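The binary programming formulation itself is not reproduced here. As a hedged reference point, the sketch below computes one common form of the Kemeny-Snell distance for rankings that may contain ties: each ranking is encoded as a preference matrix with entries 1, -1, or 0, and the distance is half the sum of absolute entry differences, so a fully reversed pair costs 2 and a tie-versus-strict disagreement costs 1. The tier-list input representation is an assumption of this example, not the paper's encoding.

```python
def ks_matrix(ranking, candidates):
    """
    Preference matrix for a possibly tied ranking given as a list of tiers,
    e.g. [['a'], ['b', 'c']] means a is preferred to b and c, which are tied.
    Entry (i, j) is 1 if i is preferred to j, -1 if j is preferred to i, 0 if tied.
    """
    tier = {c: t for t, group in enumerate(ranking) for c in group}
    m = {}
    for i in candidates:
        for j in candidates:
            if i == j or tier[i] == tier[j]:
                m[i, j] = 0
            else:
                m[i, j] = 1 if tier[i] < tier[j] else -1
    return m

def kemeny_snell_distance(r1, r2, candidates):
    """d(A, B) = (1/2) * sum over ordered pairs (i, j) of |a_ij - b_ij|."""
    a, b = ks_matrix(r1, candidates), ks_matrix(r2, candidates)
    return sum(abs(a[k] - b[k]) for k in a) / 2

cands = ["a", "b", "c"]
r1 = [["a"], ["b", "c"]]          # a above a tie between b and c
r2 = [["b"], ["a"], ["c"]]        # strict ranking b > a > c
print(kemeny_snell_distance(r1, r2, cands))  # 3.0: one reversed pair (2) plus one tie mismatch (1)
```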
Unconditional information security has conventionally been studied within quantum cryptography, although the assumption of an omnipotent eavesdropper is too strict for some realistic implementations. In this paper, we study realistic secret key distillation over a satellite-to-satellite free-space optics channel, where we assume an eavesdropper (Eve) with a limited-sized aperture in the same plane as the legitimate receiver (Bob) and determine the corresponding secret key rate (SKR) lower bounds. We first study the input power dependency without assumptions on Bob's detection scheme, then optimize the input power to determine lower bounds as functions of transmission distance, center frequency, or Eve's aperture radius. We then derive analytical expressions for the SKR lower and upper bounds as the transmission distance goes to infinity. We also incorporate specific discrete variable (DV) and continuous variable (CV) protocols for comparison. We demonstrate that significantly higher SKR lower bounds can be achieved compared to the traditional unrestricted-Eve scenario.
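The paper's SKR expressions depend on its specific channel model and protocols and are not reproduced here. As a hedged illustration of the geometry that a restricted-Eve analysis builds on, the sketch below uses standard Gaussian-beam diffraction formulas to estimate what fraction of the transmitted power a finite circular aperture in the receiver plane collects; the wavelength, waist, aperture, and distance values are placeholders, not parameters from the paper.

```python
from math import exp, pi, sqrt

def beam_radius(w0, wavelength, distance):
    """Gaussian-beam 1/e^2 radius after propagating `distance` from a waist of radius w0."""
    rayleigh = pi * w0**2 / wavelength
    return w0 * sqrt(1 + (distance / rayleigh) ** 2)

def collected_fraction(aperture_radius, w0, wavelength, distance):
    """Fraction of a centered Gaussian beam's power collected by a circular aperture."""
    w = beam_radius(w0, wavelength, distance)
    return 1 - exp(-2 * aperture_radius**2 / w**2)

# placeholder numbers: 1550 nm laser, 5 cm transmit waist, 10 cm receive aperture, 100 km link
wavelength, w0, a_bob, L = 1550e-9, 0.05, 0.10, 100e3
print(collected_fraction(a_bob, w0, wavelength, L))
```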