We consider an InterDependent Security (IDS) game with networked agents and positive externalities, where each agent chooses an effort/investment level for securing itself. The agents are interdependent in that the state of security of one agent depends not only on its own investment but also on the other agents' effort/investment. Due to the positive externality, the agents under-invest in security, which leads to an inefficient Nash equilibrium (NE). While the under-investment issue has been analyzed extensively in the literature, in this study we take a different angle. Specifically, we consider the possibility of allowing agents to pool their resources, i.e., giving agents the ability to invest both in themselves and in other agents. We show that the interaction of strategic and selfish agents under resource pooling (RP) improves the agents' effort/investment levels as well as their utilities compared to a scenario without resource pooling. We show that the social welfare (total utility) at the NE of the game with resource pooling is higher than the maximum social welfare attainable in a game without resource pooling, even when the latter employs an optimal incentive mechanism. Furthermore, we show that while voluntary participation is not generally guaranteed in this latter scenario, it is guaranteed under resource pooling.
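For illustration only, a stylized total-effort IDS utility consistent with the description above might take the following form (an assumed sketch; the paper's exact functional form, notation, and breach-probability function f are not specified here):

```latex
% Stylized IDS utilities (assumed illustration only; the paper's exact model may differ).
% x_i: agent i's effort, c_i: unit cost of effort, L_i: loss from a breach,
% a_{ij}: dependence of i's security on j's effort, f: decreasing breach-probability function.
% Without resource pooling, agent i invests only in itself:
u_i(x) = -L_i\, f\Big(\sum_{j} a_{ij}\, x_j\Big) - c_i\, x_i .
% With resource pooling, x_{ij} denotes agent i's investment in agent j,
% and agent j's protection comes from the total investment \sum_k x_{kj}:
u_i(x) = -L_i\, f\Big(\sum_{j} a_{ij} \sum_{k} x_{kj}\Big) - c_i \sum_{j} x_{ij} .
```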
This content will become publicly available on May 1, 2026
You Get What You Give: Reciprocally Fair Federated Learning
Federated learning (FL) is a popular collaborative learning paradigm whereby agents with individual datasets can jointly train an ML model. While sharing more data improves model accuracy and leads to higher payoffs, it also raises costs associated with data acquisition or loss of privacy, causing agents to be strategic about their data contributions. This leads to undesirable behavior at a Nash equilibrium (NE), such as free-riding, resulting in sub-optimal fairness, data sharing, and welfare. To address this, we design MSHAP, a budget-balanced payment mechanism for FL that admits Nash equilibria under mild conditions and achieves reciprocal fairness: each agent's payoff equals her contribution to the collaboration, as measured by her Shapley share. In addition to fairness, we show that the NE under MSHAP has desirable guarantees in terms of accuracy, welfare, and total data collected. We validate our theoretical results through experiments, demonstrating that MSHAP outperforms baselines in terms of fairness and efficiency.
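Since the abstract ties payoffs to Shapley shares under a budget-balanced payment rule, the Python sketch below illustrates those two building blocks in isolation. The value function `v`, the toy dataset sizes, and all names are illustrative assumptions, not MSHAP's actual implementation.

```python
from itertools import combinations
from math import factorial

def shapley_shares(agents, v):
    """Shapley share of each agent, given a coalition value function v(frozenset)."""
    n = len(agents)
    shares = {i: 0.0 for i in agents}
    for i in agents:
        others = [j for j in agents if j != i]
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                shares[i] += weight * (v(S | {i}) - v(S))
    return shares

def budget_balanced_payments(raw_payoffs, shares):
    """Payments that move each agent's payoff to its Shapley share; they sum to zero
    whenever the raw payoffs and the shares split the same total value."""
    return {i: shares[i] - raw_payoffs[i] for i in raw_payoffs}

# Toy usage (assumed): a coalition's value is the size of its pooled dataset.
data_sizes = {1: 10, 2: 30, 3: 60}
v = lambda S: sum(data_sizes[i] for i in S)
shares = shapley_shares(list(data_sizes), v)
equal_split = {i: v(frozenset(data_sizes)) / len(data_sizes) for i in data_sizes}
payments = budget_balanced_payments(equal_split, shares)   # payments sum to zero
```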
- Award ID(s): 2441580
- PAR ID: 10656831
- Publisher / Repository: Proceedings of Machine Learning Research (PMLR)
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We investigate the fair allocation of indivisible goods to agents with possibly different entitlements represented by weights. Previous work has shown that guarantees for additive valuations with existing envy-based notions cannot be extended to the case where agents have matroid-rank (i.e., binary submodular) valuations. We propose two families of envy-based notions for matroid-rank and general submodular valuations, one based on the idea of transferability and the other on marginal values. We show that our notions can be satisfied via generalizations of rules such as picking sequences and maximum weighted Nash welfare. In addition, we introduce welfare measures based on harmonic numbers, and show that variants of maximum weighted harmonic welfare offer stronger fairness guarantees than maximum weighted Nash welfare under matroid-rank valuations.
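For readers unfamiliar with the welfare objectives mentioned above, the brute-force sketch below illustrates maximum weighted Nash welfare for additive valuations, together with the harmonic numbers underlying harmonic welfare. It is only an assumed illustration (exponential-time, additive valuations for simplicity); the paper's weighted harmonic objective may be defined differently.

```python
from itertools import product
from math import log

def max_weighted_nash_welfare(goods, valuations, weights):
    """Brute force over all allocations (feasible only for tiny instances).
    valuations[i][g] is agent i's additive value for good g; weights[i] > 0."""
    agents = list(valuations)
    best, best_bundles = float("-inf"), None
    for assignment in product(agents, repeat=len(goods)):        # each good goes to one agent
        bundles = {i: [g for g, a in zip(goods, assignment) if a == i] for i in agents}
        values = {i: sum(valuations[i][g] for g in bundles[i]) for i in agents}
        if any(val == 0 for val in values.values()):
            continue                                             # Nash welfare would be zero
        objective = sum(weights[i] * log(values[i]) for i in agents)  # weighted log-Nash welfare
        if objective > best:
            best, best_bundles = objective, bundles
    return best_bundles

def harmonic(k):
    """k-th harmonic number H_k = 1 + 1/2 + ... + 1/k, the building block of harmonic welfare."""
    return sum(1.0 / j for j in range(1, k + 1))
```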
Vorobeychik, Y; Das, S; Nowe, A (Ed.) We study the problem of fair allocation of indivisible items when agents have ternary additive valuations --- each agent values each item at one of three fixed integer values a, b, or c that are common to all agents. The notions of fairness we consider are max Nash welfare (MNW), when a, b, and c are non-negative, and max egalitarian welfare (MEW). We show that for any distinct non-negative a, b, and c, maximizing Nash welfare is APX-hard --- i.e., the problem does not admit a PTAS unless P = NP. We also show that for any distinct a, b, and c, maximizing egalitarian welfare is APX-hard, except for a few cases when b = 0 that admit efficient algorithms. These results make significant progress towards completely characterizing the complexity of computing exact MNW allocations and MEW allocations. En route, we resolve open questions left by prior work regarding the complexity of computing MNW allocations under bivalued valuations and MEW allocations under ternary mixed manna.
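As a toy illustration of ternary additive valuations and the two objectives (item names and values below are assumptions), consider:

```python
# Every item value is one of three integers a, b, c shared by all agents.
a, b, c = 0, 1, 2
valuations = {1: {"g1": a, "g2": c, "g3": b},
              2: {"g1": c, "g2": b, "g3": a}}

def bundle_values(allocation, valuations):
    """allocation[i] = list of items given to agent i."""
    return {i: sum(valuations[i][g] for g in allocation[i]) for i in allocation}

def nash_welfare(values):          # MNW maximizes the product of bundle values
    prod = 1
    for val in values.values():
        prod *= val
    return prod

def egalitarian_welfare(values):   # MEW maximizes the minimum bundle value
    return min(values.values())

vals = bundle_values({1: ["g2"], 2: ["g1", "g3"]}, valuations)
print(nash_welfare(vals), egalitarian_welfare(vals))   # 4 and 2 for this allocation
```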
We study the problem of distributing a set of indivisible goods among agents with additive valuations in a fair manner. The fairness notion under consideration is envy-freeness up to any good (EFX). Despite significant efforts by many researchers for several years, the existence of EFX allocations has not been settled beyond the simple case of two agents. In this article, we show constructively that an EFX allocation always exists for three agents. Furthermore, we falsify the conjecture of Caragiannis et al. by showing an instance with three agents for which there is a partial EFX allocation (some goods are not allocated) with higher Nash welfare than that of any complete EFX allocation.
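The EFX condition itself is easy to state in code; the checker below (data layout and names are assumptions, not the paper's notation) verifies it for additive valuations.

```python
def is_efx(allocation, valuations):
    """allocation[i] = list of goods held by agent i;
    valuations[i][g] = agent i's additive value for good g.
    EFX: for all agents i, j and every good g in j's bundle,
    v_i(A_i) >= v_i(A_j) - v_i(g)."""
    for i in allocation:
        vi_own = sum(valuations[i][g] for g in allocation[i])
        for j in allocation:
            if i == j:
                continue
            vi_j = sum(valuations[i][g] for g in allocation[j])
            for g in allocation[j]:
                if vi_own < vi_j - valuations[i][g]:
                    return False   # i still envies j even after removing g
    return True
```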
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to decentralized and unvetted data. One important line of attacks against FL is backdoor attacks. In a backdoor attack, an adversary tries to embed a backdoor functionality in the model during training that can later be activated to cause a desired misclassification. To prevent backdoor attacks, we propose a lightweight defense that requires minimal change to the FL protocol. At a high level, our defense is based on carefully adjusting the aggregation server's learning rate, per dimension and per round, based on the sign information of agents' updates. We first conjecture the necessary steps to carry out a successful backdoor attack in the FL setting, and then explicitly formulate the defense based on our conjecture. Through experiments, we provide empirical evidence that supports our conjecture, and we test our defense against backdoor attacks under different settings. We observe that the backdoor is either completely eliminated or its accuracy is significantly reduced. Overall, our experiments suggest that our defense significantly outperforms some of the recently proposed defenses in the literature, while having minimal influence on the accuracy of the trained models. In addition, we provide a convergence rate analysis for our proposed scheme.
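The per-dimension, sign-based learning-rate adjustment described above can be illustrated with a minimal NumPy sketch; the agreement threshold `theta` and the plain-averaging aggregation below are assumptions rather than the paper's exact settings.

```python
import numpy as np

def robust_lr_aggregate(global_model, agent_updates, server_lr=1.0, theta=4):
    """Sketch of sign-based, per-dimension learning-rate adjustment.
    agent_updates: list of 1-D numpy arrays (one flattened model update per agent);
    dimensions where fewer than `theta` agents agree on the update's sign get a
    negated learning rate, dampening coordinated backdoor directions."""
    updates = np.stack(agent_updates)                      # shape: (num_agents, dim)
    sign_agreement = np.abs(np.sign(updates).sum(axis=0))  # per-dimension |sum of signs|
    lr = np.where(sign_agreement >= theta, server_lr, -server_lr)
    return global_model + lr * updates.mean(axis=0)        # per-dimension learning rate
```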