Counterfactuals with Latent Information
We describe a methodology for making counterfactual predictions in settings where the information held by strategic agents and the distribution of payoff-relevant states of the world are unknown. The analyst observes behavior assumed to be rationalized by a Bayesian model, in which agents maximize expected utility, given partial and differential information about the state. A counterfactual prediction is desired about behavior in another strategic setting, under the hypothesis that the distribution of the state and agents' information about the state are held fixed. When the data and the desired counterfactual prediction pertain to environments with finitely many states, players, and actions, the counterfactual prediction is described by finitely many linear inequalities, even though the latent parameter, the information structure, is infinite-dimensional. (JEL D44, D82, D83)
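The practical content of the last sentence is that the analyst can work with a finite linear system even though the latent information structure is not finite-dimensional. Below is a minimal sketch of this idea for a single decision maker, in the spirit of Bayes-correlated-equilibrium obedience constraints: observed action frequencies are consistent with some information structure exactly when a linear feasibility problem has a solution. The prior, payoffs, and observed frequencies are invented toy inputs, not taken from the paper.

```python
# A minimal sketch, under stated assumptions, of the key computational fact:
# with finitely many states and actions, consistency with some information
# structure is a finite linear system.  For a single decision maker, observed
# action frequencies are rationalizable by some information structure iff
# there is a joint distribution nu(a, theta) that
#   (i)   has the known prior as its state marginal,
#   (ii)  matches the observed action frequencies, and
#   (iii) satisfies the linear "obedience" inequalities.
import numpy as np
from scipy.optimize import linprog

prior = np.array([0.5, 0.5])        # mu(theta), theta in {0, 1}
payoff = np.array([[1.0, 0.0],      # u[a, theta]: action a matches state theta
                   [0.0, 1.0]])
observed = np.array([0.7, 0.3])     # empirical action frequencies

nA, nT = payoff.shape               # decision variables: nu[a, theta], flattened

A_eq, b_eq = [], []
for t in range(nT):                 # state marginal must equal the prior
    row = np.zeros(nA * nT)
    row[t::nT] = 1.0
    A_eq.append(row)
    b_eq.append(prior[t])
for a in range(nA):                 # action marginal must match the data
    row = np.zeros(nA * nT)
    row[a * nT:(a + 1) * nT] = 1.0
    A_eq.append(row)
    b_eq.append(observed[a])

A_ub, b_ub = [], []                 # obedience: recommended a beats any deviation a2
for a in range(nA):
    for a2 in range(nA):
        if a2 != a:
            row = np.zeros(nA * nT)
            row[a * nT:(a + 1) * nT] = payoff[a2] - payoff[a]   # <= 0
            A_ub.append(row)
            b_ub.append(0.0)

res = linprog(np.zeros(nA * nT), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("rationalizable by some information structure:", res.success)
```

Counterfactual bounds can be sketched in the same spirit: add variables for behavior in the counterfactual setting, couple them to the observed setting through shared constraints on the underlying information, and optimize the quantity of interest over the resulting polytope.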
- Award ID(s): 2049754
- PAR ID: 10337509
- Journal Name: American Economic Review
- Volume: 112
- Issue: 1
- ISSN: 0002-8282
- Page Range / eLocation ID: 343 to 368
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Collaboration is crucial for reaching collective goals. However, its effectiveness is often undermined by the strategic behavior of individual agents -- a fact that is captured by a high Price of Stability (PoS) in recent literature [Blum et al., 2021]. Implicit in the traditional PoS analysis is the assumption that agents have full knowledge of how their tasks relate to one another. We offer a new perspective on bringing about efficient collaboration among strategic agents using information design. Inspired by the growing importance of collaboration in machine learning (such as platforms for collaborative federated learning and data cooperatives), we propose a framework where the platform has more information about how the agents' tasks relate to each other than the agents themselves. We characterize how and to what degree such platforms can leverage their information advantage to steer strategic agents toward efficient collaboration. Concretely, we consider collaboration networks where each node is a task type held by one agent, and each task benefits from contributions made in its inclusive neighborhood of tasks. This network structure is known to the agents and the platform, but only the platform knows each agent's real location -- from the agents' perspective, their location is determined by a random permutation. We employ private Bayesian persuasion and design two families of persuasive signaling schemes that the platform can use to ensure a small total workload when agents follow the signal. The first family aims to achieve the minimax optimal approximation ratio compared to the optimal collaboration, which is shown to be Θ(√n) for unit-weight graphs, Θ(n^{2/3}) for graphs with constant minimum edge weights, and O(n^{3/4}) for general weighted graphs. The second family ensures per-instance strict improvement compared to full information disclosure. (See the collaboration-network sketch after this list.)
- Two-sided matching markets have long existed to pair agents in the absence of regulated exchanges. A common example is school choice, where a matching mechanism uses student and school preferences to assign students to schools. In such settings, forming preferences is both difficult and critical. Prior work has suggested various prediction mechanisms that help agents make decisions about their preferences. Although often deployed together, these matching and prediction mechanisms are almost always analyzed separately. The present work shows that at the intersection of the two lies a previously unexplored type of strategic behavior: agents returning to the market (e.g., schools) can attack future predictions by interacting short-term non-optimally with their matches. Here, we first introduce this type of strategic behavior, which we call an adversarial interaction attack. Next, we construct a formal economic model that captures the feedback loop between prediction mechanisms designed to assist agents and the matching mechanism used to pair them. Finally, in a simplified setting, we prove that returning agents can benefit from using adversarial interaction attacks and gain progressively more as the trust in and accuracy of predictions increases. We also show that this attack increases inequality in the student population. (See the matching-market sketch after this list.)
- We develop a novel bounded rationality model of imperfect reasoning as the interaction between automatic (System 1) and analytical (System 2) thinking. In doing so, we formalize the empirical consensus of cognitive psychology using a structural, constrained-optimal economic framework of mental information acquisition about the unknown optimal policy function. A key result is that agents reason less (more) when facing usual (unusual) states of the world, producing state- and history-dependent behavior. Our application is an otherwise standard incomplete-markets model with no a priori behavioral biases. The ergodic distribution of actions and beliefs is characterized by endogenous learning traps, where locally stable state dynamics generate familiar regions of the state space within which behavior appears to follow memory-based heuristics. This results in endogenous behavioral biases that have many empirically desirable properties: the marginal propensity to consume is high even for unconstrained agents, hand-to-mouth status is more frequent and persistent, and there is more wealth inequality than in the standard model. (See the reasoning-cost sketch after this list.)
- We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long-run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long-run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Third, we show that how fragile information aggregation is against misperception depends on the richness of agents' payoff-relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents' learning environment. The key feature behind our findings is that agents' belief-updating becomes "decoupled" from the true state over time. We point to other environments where this feature is present and leads to similar fragility results. (See the social-learning sketch after this list.)
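A collaboration-network sketch for the first record above: the benchmark against which the signaling schemes are measured is the optimal, centrally coordinated total workload. Assuming a simple covering-style model in which each task needs one unit of benefit and a contribution counts toward every task in the contributor's inclusive neighborhood, weighted by edge weights, that benchmark is a small linear program. The graph, weights, and unit-demand normalization are our assumptions; the paper's exact utility model may differ.

```python
# A minimal sketch, under assumptions, of the optimal-collaboration benchmark:
# the smallest total workload satisfying every task when each task v needs one
# unit of benefit and a contribution x_u counts toward every task in u's
# inclusive neighborhood, weighted by W[u, v] (with W[v, v] = 1).
import numpy as np
from scipy.optimize import linprog

# Weighted adjacency of a 4-node path, with self-loops for inclusive neighborhoods.
W = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])
n = W.shape[0]

# minimize sum_v x_v   subject to   W^T x >= 1  (every task is covered), x >= 0
res = linprog(np.ones(n), A_ub=-W.T, b_ub=-np.ones(n), bounds=(0, None))
print("optimal total workload:", round(res.fun, 3))
print("per-task contributions:", np.round(res.x, 3))
```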
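A matching-market sketch for the second record: a deliberately tiny simulation of an adversarial interaction attack, in which a returning school degrades its interactions with one student type so that the prediction mechanism steers that type away in later rounds, freeing seats for the type it prefers. The averaging predictor, the two types, and all numbers are assumptions made for illustration, not the paper's model.

```python
# A toy sketch (our own simplification) of an adversarial interaction attack:
# the school sacrifices short-term interaction quality with type B so that
# the predictor's estimate for B falls and future B students stay away.
import numpy as np

rng = np.random.default_rng(0)
TYPES = ["A", "B"]                    # student types; the school prefers type A
TRUE_QUALITY = {"A": 0.9, "B": 0.8}   # honest interaction quality by type

def run(attack: bool, rounds: int = 20, seats: int = 10) -> float:
    # Seed the predictor with one honest observation per type.
    history = {t: [TRUE_QUALITY[t]] for t in TYPES}
    share_A = []
    for _ in range(rounds):
        pred = {t: np.mean(history[t]) for t in TYPES}
        # Students consult predictions: applicants of each type arrive in
        # proportion to their predicted quality at this school.
        weights = np.array([pred[t] for t in TYPES])
        applicants = rng.choice(TYPES, size=seats, p=weights / weights.sum())
        share_A.append(np.mean(applicants == "A"))
        for t in applicants:
            # The attacking school interacts non-optimally with type B only.
            realized = 0.1 if (attack and t == "B") else TRUE_QUALITY[t]
            history[t].append(realized)
    return float(np.mean(share_A[-5:]))   # late-round share of preferred type

print("share of preferred students, honest:", run(attack=False))
print("share of preferred students, attack:", run(attack=True))
```

Even in this crude setup, the degraded interactions shift the applicant pool toward the school's preferred type within a few rounds, which is the qualitative effect the record describes.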
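A reasoning-cost sketch for the third record: the mechanism producing state- and history-dependent behavior is that deliberate (System 2) reasoning is engaged only when its anticipated benefit exceeds its cost, which happens in unusual states. The linear policy, exponential familiarity measure, and fixed cost threshold below are our stylizations, not the paper's structural model.

```python
# A toy sketch: costly System-2 reasoning fires only when the current state is
# unusual relative to past experience, so behavior looks like a memory-based
# heuristic in familiar regions and becomes deliberate in unfamiliar ones.
import numpy as np

REASONING_COST = 0.05            # assumed utility cost of System-2 deliberation
memory_states: list[float] = []  # states the agent has visited before

def optimal_action(state: float) -> float:
    """The true optimal policy, which the agent only learns by reasoning."""
    return 0.5 * state

def act(state: float) -> tuple[float, bool]:
    """Choose an action; return (action, whether System 2 was engaged)."""
    if memory_states:
        dists = np.abs(np.array(memory_states) - state)
        familiarity = float(np.exp(-dists.min()))   # ~1 near visited states
        default = optimal_action(memory_states[int(np.argmin(dists))])
    else:
        familiarity, default = 0.0, 0.0             # nothing in memory yet
    if (1.0 - familiarity) > REASONING_COST:        # anticipated loss too large
        action, reasoned = optimal_action(state), True   # System 2: deliberate
    else:
        action, reasoned = default, False                # System 1: reuse memory
    memory_states.append(state)
    return action, reasoned

# Usual states (near 0.1) are handled by memory; the unusual one triggers reasoning.
for s in [0.10, 0.12, 0.09, 0.11, 3.00]:
    a, reasoned = act(s)
    print(f"state={s:4.2f}  action={a:4.2f}  reasoned={reasoned}")
```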
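A social-learning sketch for the fourth record: actions are informative about the state only once they are inverted through a distribution of idiosyncratic types, so a misperceived type distribution biases every update. In the toy simulation below, a public belief updated with the correct type distribution climbs toward the true state, while a small shift in the perceived distribution sends it toward the wrong one. The uniform types, signal accuracy, and update rule are assumptions for illustration.

```python
# A toy sketch: an observer updates a public belief from agents' actions by
# inverting them through a (possibly misperceived) CDF of taste thresholds.
import numpy as np

rng = np.random.default_rng(2)
Q = 0.6            # private-signal accuracy
TRUE_STATE = 1

def posterior(p: float, s: int) -> float:
    like1 = Q if s == 1 else 1 - Q          # P(signal | state = 1)
    like0 = 1 - Q if s == 1 else Q          # P(signal | state = 0)
    return p * like1 / (p * like1 + (1 - p) * like0)

def pred_action_prob(p: float, state: int, cdf) -> float:
    """P(action = 1 | state) when an agent acts iff posterior > its type."""
    p_s1 = Q if state == 1 else 1 - Q
    return p_s1 * cdf(posterior(p, 1)) + (1 - p_s1) * cdf(posterior(p, 0))

def true_cdf(x):                            # types ~ Uniform[0.05, 1.05]
    return float(np.clip(x - 0.05, 0.0, 1.0))

def perceived_cdf(x):                       # misperceived as Uniform[0, 1]
    return float(np.clip(x, 0.0, 1.0))

for label, cdf in [("correct perception ", true_cdf),
                   ("misperceived types ", perceived_cdf)]:
    p = 0.5
    for _ in range(3000):
        s = 1 if rng.random() < (Q if TRUE_STATE == 1 else 1 - Q) else 0
        tau = 0.05 + rng.random()                      # true type draw
        a = 1 if posterior(p, s) > tau else 0
        # Observer inverts the action through its (possibly wrong) type CDF.
        l1 = pred_action_prob(p, 1, cdf)
        l0 = pred_action_prob(p, 0, cdf)
        num = p * (l1 if a == 1 else 1 - l1)
        den = num + (1 - p) * (l0 if a == 1 else 1 - l0)
        p = num / den if den > 0 else p
        p = min(max(p, 1e-12), 1 - 1e-12)              # avoid absorbing 0/1
    print(f"{label}: long-run belief in the true state = {p:.3f}")
```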