In this work, we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in “learning-augmented algorithms.” Aiming to complement the traditional worst-case analysis approach in computer science, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions. These algorithms can use the predictions as a guide to inform their decisions, aiming to achieve much stronger performance guarantees when the predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees when they are inaccurate (robustness). We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness. Furthermore, we prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well even when the predictions are not fully accurate.
Funding: The work of E. Balkanski was supported in part by the National Science Foundation [Grants CCF-2210501 and IIS-2147361]. The work of V. Gkatzelis and X. Tan was supported in part by the National Science Foundation [Grant CCF-2210502] and [CAREER Award CCF-2047907].
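To make the prediction-augmented mechanism idea concrete, here is a minimal Python sketch of one classic construction: a coordinate-wise median mechanism in which the predicted location is inserted as a block of phantom reports, so that the number of phantoms trades trust in the prediction (consistency) against worst-case behavior (robustness). This is an illustrative construction under our own assumptions, not necessarily the mechanism proposed in the paper; the function name and the phantom-count parameter are hypothetical.

```python
from statistics import median

def coordinatewise_median_with_prediction(reports, prediction, num_phantoms=1):
    """Place a facility in R^2 given agents' reported locations and a
    machine-learned prediction of a good facility location.

    The prediction is inserted `num_phantoms` times as phantom reports,
    and the facility is placed at the coordinate-wise median. Median
    mechanisms with fixed phantom points are strategyproof in each
    coordinate (Moulin 1980): more phantoms means more weight on the
    prediction, fewer means better robustness to a bad prediction.
    """
    xs = [x for x, _ in reports] + [prediction[0]] * num_phantoms
    ys = [y for _, y in reports] + [prediction[1]] * num_phantoms
    return (median(xs), median(ys))

# Example: three agents, a prediction near the first two.
agents = [(0.0, 0.0), (1.0, 1.0), (10.0, 10.0)]
print(coordinatewise_median_with_prediction(agents, prediction=(0.5, 0.5)))
```

Since the phantoms are fixed inputs rather than agent reports, truthfulness is preserved; the choice of `num_phantoms` is exactly where a consistency-robustness trade-off would be tuned.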
We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto-efficient. We, therefore, turn to approximation and use the (Pareto-efficient) maximum Nash welfare allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
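As a concrete reference point, the sketch below checks the EFx condition and computes the Nash social welfare benchmark for additive valuations: agent i is EFx-satisfied with respect to j if i values its own bundle at least as much as j's bundle after removing any single item from it. The additive-valuations assumption and all names here are ours, added purely for illustration.

```python
from math import prod

def is_efx(bundles, values):
    """Check envy-freeness up to any item (EFx) under additive valuations.

    bundles: bundles[i] is the set of items held by agent i.
    values:  values[i][g] is agent i's value for item g.
    """
    v = lambda i, items: sum(values[i][g] for g in items)
    n = len(bundles)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # i must not envy j even after removing ANY single item from j's bundle.
            for g in bundles[j]:
                if v(i, bundles[i]) < v(i, bundles[j]) - values[i][g]:
                    return False
    return True

def nash_welfare(bundles, values):
    """Nash social welfare: the product of the agents' utilities."""
    return prod(sum(values[i][g] for g in bundles[i]) for i in range(len(bundles)))

# Example: two agents, three items.
values = [{"a": 3, "b": 1, "c": 1}, {"a": 1, "b": 2, "c": 2}]
bundles = [{"a"}, {"b", "c"}]
print(is_efx(bundles, values), nash_welfare(bundles, values))  # True 12
```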
We study the problem of designing voting rules that take as input the ordinal preferences of n agents over a set of m alternatives and output a single alternative, aiming to optimize the overall happiness of the agents. The input to the voting rule is each agent’s ranking of the alternatives from most to least preferred, yet the agents have more refined (cardinal) preferences that capture the intensity with which they prefer one alternative over another. To quantify the extent to which voting rules can optimize over the cardinal preferences given access only to the ordinal ones, prior work has used the distortion measure, i.e., the worst-case approximation ratio between a voting rule’s performance and the best performance achievable given the cardinal preferences. The work on the distortion of voting rules has been largely divided into two “worlds”: utilitarian distortion and metric distortion. In the former, the cardinal preferences of the agents correspond to general utilities and the goal is to maximize a normalized social welfare. In the latter, the agents’ cardinal preferences correspond to costs given by distances in an underlying metric space and the goal is to minimize the (unnormalized) social cost. Several deterministic and randomized voting rules have been proposed and evaluated for each of these worlds separately, gradually improving the achievable distortion bounds, but none of the known voting rules perform well in both worlds simultaneously. In this work, we prove that one can in fact achieve the “best of both worlds” by designing new voting rules, both deterministic and randomized, that simultaneously achieve near-optimal distortion guarantees in both distortion worlds. We also prove that this positive result does not generalize to the case where the voting rule is provided with the rankings of only the top-t alternatives of each agent, for t < m, and study the extent to which such best-of-both-worlds guarantees can be achieved.
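The distortion measure itself is easy to state in code. The sketch below takes a cardinal utility profile, derives the ordinal rankings a voting rule actually observes, runs a placeholder ordinal rule (plurality, which stands in for, and is not, one of the rules from the paper), and reports the welfare ratio on that single instance; the true distortion is the worst case of this ratio over all cardinal profiles consistent with the rankings. All function names are illustrative.

```python
from collections import Counter

def rankings_from_utilities(utilities):
    """Each agent's ranking of alternatives, most to least preferred."""
    m = len(utilities[0])
    return [sorted(range(m), key=lambda a: -u[a]) for u in utilities]

def plurality(rankings, m):
    """Placeholder ordinal rule: pick the most frequent top choice."""
    counts = Counter(r[0] for r in rankings)
    return max(range(m), key=lambda a: counts[a])

def welfare_ratio_on_instance(utilities):
    """OPT / ALG social welfare for one cardinal profile (a lower bound
    on the rule's utilitarian distortion)."""
    m = len(utilities[0])
    winner = plurality(rankings_from_utilities(utilities), m)
    welfare = lambda a: sum(u[a] for u in utilities)
    return max(welfare(a) for a in range(m)) / welfare(winner)

# Three agents with normalized utilities: plurality elects alternative 0,
# but alternative 1 has higher social welfare, giving a ratio of 1.5.
print(welfare_ratio_on_instance([[0.6, 0.4], [0.6, 0.4], [0.0, 1.0]]))
```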
We design online algorithms for the fair allocation of public goods to a set of N agents over a sequence of T rounds and focus on improving their performance using predictions. In the basic model, a public good arrives in each round, and every agent reveals their value for it upon arrival. The algorithm must irrevocably decide the investment in this good without exceeding a total budget of B across all rounds. The algorithm can utilize (potentially noisy) predictions of each agent’s total value for all remaining goods. The algorithm’s performance is measured using a proportional fairness objective, which informally demands that every group of agents be rewarded in proportion to its size and the cohesiveness of its preferences. We show that no algorithm can achieve better than Θ(T/B) proportional fairness without predictions. With reasonably accurate predictions, the situation improves significantly, and Θ(log(T/B)) proportional fairness is achieved. We also extend our results to a general setting in which a batch of L public goods arrives in each round and O(log(min(N, L) · T/B)) proportional fairness is achieved. Our exact bounds are parameterized as a function of the prediction error, with performance degrading gracefully as the error increases.
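To illustrate the online model's interface (a total budget B, per-round revealed values, and predictions of the agents' total remaining value), here is a toy prediction-guided spending rule: each round it invests a share of the remaining budget proportional to the round's aggregate value relative to the predicted aggregate value still to come. This is a naive heuristic written under our own assumptions, not the paper's algorithm, and every name in it is ours.

```python
class OnlineGoodsAllocator:
    """Toy online allocator for public goods under a total budget B.

    predictions[i] is a (possibly noisy) prediction of agent i's total
    value for all goods that have not yet arrived.
    """

    def __init__(self, budget, predictions):
        self.budget_left = budget
        self.predicted_remaining = sum(predictions)

    def invest(self, round_values):
        """Irrevocably decide how much to invest in this round's good."""
        total = sum(round_values)
        if self.predicted_remaining <= 0:
            spend = self.budget_left  # predictions exhausted: spend what is left
        else:
            spend = self.budget_left * min(1.0, total / self.predicted_remaining)
        self.predicted_remaining = max(0.0, self.predicted_remaining - total)
        self.budget_left -= spend
        return spend

# Budget B = 10; predictions say the agents' remaining values total 6.
alloc = OnlineGoodsAllocator(budget=10.0, predictions=[2.0, 4.0])
print(alloc.invest([1.0, 2.0]))  # this round carries half the predicted value,
                                 # so it receives half the remaining budget: 5.0
```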