

Title: Learning from Neighbours about a Changing State
Abstract: Agents learn about a changing state using private signals and their neighbours’ past estimates of the state. We present a model in which Bayesian agents in equilibrium use neighbours’ estimates simply by taking weighted sums with time-invariant weights. The dynamics thus parallel those of the tractable DeGroot model of learning in networks, but arise as an equilibrium outcome rather than a behavioural assumption. We examine whether information aggregation is nearly optimal as neighbourhoods grow large. A key condition for this is signal diversity: each individual’s neighbours have private signals that not only contain independent information, but also have sufficiently different distributions. Without signal diversity—e.g. if private signals are i.i.d.—learning is suboptimal in all networks and highly inefficient in some. Turning to social influence, we find it is much more sensitive to one’s signal quality than to one’s number of neighbours, in contrast to standard models with exogenous updating rules.
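The weighted-sum updating described in the abstract can be illustrated with a minimal numerical sketch. Everything below (the three-agent network, the AR(1) state process, the signal variances, and the weights) is an illustrative assumption, not the paper's model or calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 3 agents on a line network, a persistent (AR(1)) changing state,
# and heterogeneous signal quality ("signal diversity").
n, T = 3, 200
rho, state_sd = 0.9, 1.0                     # assumed state dynamics
signal_sd = np.array([1.0, 0.5, 2.0])        # assumed signal standard deviations

# Time-invariant weights: each agent mixes its own current signal with its
# neighbours' previous estimates; each agent's weights sum to 1 for simplicity.
w_own = np.array([0.5, 0.7, 0.3])            # weight on own private signal
w_nbr = np.array([[0.0, 0.5, 0.0],           # weight on each neighbour's past estimate
                  [0.15, 0.0, 0.15],
                  [0.0, 0.7, 0.0]])

theta, estimates, mse = 0.0, np.zeros(n), []
for t in range(T):
    theta = rho * theta + rng.normal(0.0, state_sd)     # the state changes
    signals = theta + rng.normal(0.0, signal_sd)        # private signals arrive
    estimates = w_own * signals + w_nbr @ estimates     # weighted-sum (DeGroot-style) update
    mse.append(np.mean((estimates - theta) ** 2))

print("mean squared error over the last 50 periods:", round(float(np.mean(mse[-50:])), 3))
```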
Award ID(s): 1847860
NSF-PAR ID: 10455013
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Review of Economic Studies
Volume: 90
Issue: 5
ISSN: 0034-6527
Page Range / eLocation ID: 2326 to 2369
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We consider two-alternative elections where voters' preferences depend on a state variable that is not directly observable. Each voter receives a private signal that is correlated with the state variable. As a special case, our model captures the common scenario where voters can be categorized into three types: those who always prefer one alternative, those who always prefer the other, and contingent voters whose preferences depend on the state. In this setting, even if every voter is a contingent voter, agents voting according to their private information need not result in the adoption of the universally preferred alternative, because the signals can be systematically biased. We present a mechanism that elicits and aggregates the private signals from the voters, and outputs the alternative that is favored by the majority. In particular, voters truthfully reporting their signals forms a strong Bayes Nash equilibrium (where no coalition of voters can deviate and receive a better outcome).
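A minimal sketch of the kind of elicit-and-aggregate mechanism described above, assuming a binary state, binary signals, and the three voter types from the abstract. The simple majority-of-signals inference used here additionally assumes unbiased signals; the paper's mechanism is designed to handle systematically biased signals, which this toy version does not.

```python
from collections import Counter

# Hypothetical illustration (not the paper's exact mechanism).
def aggregate(reports, types):
    """reports: list of 0/1 signals; types: 'A', 'B', or 'contingent' for each voter."""
    # Pool the reported signals and infer the more likely state; here just the
    # majority signal, assuming each signal matches the state with probability > 1/2.
    counts = Counter(reports)
    inferred_state = 0 if counts[0] >= counts[1] else 1

    # Determine each voter's preferred alternative given the inferred state.
    prefs = []
    for t in types:
        if t == "A":            # always prefers alternative A
            prefs.append("A")
        elif t == "B":          # always prefers alternative B
            prefs.append("B")
        else:                   # contingent voter: preference tracks the state
            prefs.append("A" if inferred_state == 0 else "B")

    # Output the alternative favored by the majority under the pooled inference.
    return Counter(prefs).most_common(1)[0][0]

# Three contingent voters plus one committed voter of each type.
print(aggregate(reports=[1, 1, 0, 1, 1],
                types=["A", "B", "contingent", "contingent", "contingent"]))
```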
  2. We study how long‐lived, rational agents learn in a social network. In every period, after observing the past actions of his neighbors, each agent receives a private signal, and chooses an action whose payoff depends only on the state. Since equilibrium actions depend on higher‐order beliefs, it is difficult to characterize behavior. Nevertheless, we show that regardless of the size and shape of the network, the utility function, and the patience of the agents, the speed of learning in any equilibrium is bounded from above by a constant that only depends on the private signal distribution. 
  3. We study distributed estimation and learning problems in a networked environment where agents exchange information to estimate unknown statistical properties of random variables from their privately observed samples. The agents can collectively estimate the unknown quantities by exchanging information about their private observations, but they also face privacy risks. Our novel algorithms extend the existing distributed estimation literature and enable the participating agents to estimate a complete sufficient statistic from private signals acquired offline or online over time and to preserve the privacy of their signals and network neighborhoods. This is achieved through linear aggregation combined with adjusted randomization schemes that add noise to the exchanged estimates subject to differential privacy (DP) constraints, both in an offline and online manner. We provide convergence rate analysis and tight finite-time convergence bounds. We show that the noise that minimizes the convergence time to the best estimates is the Laplace noise, with parameters corresponding to each agent’s sensitivity to their signal and network characteristics. Our algorithms are amenable to dynamic topologies and to balancing privacy and accuracy trade-offs. Finally, to supplement and validate our theoretical results, we run experiments on real-world data from the US Power Grid Network and electric consumption data from German Households to estimate the average power consumption of power stations and households under all privacy regimes and show that our method outperforms existing first-order privacy-aware distributed optimization methods.
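A minimal sketch of linear aggregation with Laplace-perturbed exchanges in the spirit described above. The ring network, the consensus weights, the privacy budget, the sensitivity bound, and the geometric decay of the noise scale are all illustrative assumptions rather than the paper's algorithm or its DP accounting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ring network of 5 agents with doubly stochastic consensus weights.
n, T = 5, 200
A = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))

private_signals = rng.normal(10.0, 2.0, size=n)  # privately observed samples
target = private_signals.mean()                  # statistic the agents try to estimate

eps = 1.0          # assumed privacy budget (illustrative)
sensitivity = 4.0  # assumed sensitivity of an exchanged estimate (illustrative)

x = private_signals.copy()
for t in range(T):
    # Each agent releases a Laplace-perturbed estimate; the noise scale is adjusted
    # (here: geometrically decaying) so the accumulated perturbations stay bounded.
    scale = (sensitivity / eps) * 0.9 ** t
    shared = x + rng.laplace(0.0, scale, size=n)
    x = A @ shared   # linear aggregation of neighbours' noisy estimates

print("target:", round(float(target), 3))
print("final estimates:", np.round(x, 3))
```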
  4. We study the voting game where agents' preferences are endogenously decided by the information they receive, and they can collaborate in a group. We show that strategic voting behaviors have a positive impact on leading to the "correct" decision, outperforming the common non-strategic behavior of informative voting and sincere voting. Our results give merit to strategic voting for making good decisions. To this end, we investigate a natural model, where voters' preferences between two alternatives depend on a discrete state variable that is not directly observable. Each voter receives a private signal that is correlated with the state variable. We reveal a surprising equivalence between a strategy profile being a strong equilibrium and leading to the decision favored by the majority of agents conditioned on them knowing the ground truth (referred to as the informed majority decision): as the number of voters goes to infinity, every ε-strong Bayes Nash equilibrium with ε converging to 0 formed by strategic agents leads to the informed majority decision with probability converging to 1. On the other hand, we show that informative voting leads to the informed majority decision only under unbiased instances, and sincere voting leads to the informed majority decision only when it also forms an equilibrium.
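A small simulation of the contrast described above between informative voting and the informed majority decision, on a biased instance. The signal probabilities, the electorate size, and the pooled maximum-likelihood step are illustrative assumptions; every voter is taken to be contingent, so the informed majority decision coincides with the alternative matching the true state.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed biased instance: in state 0, signals still lean towards "1"
# (P(signal = 1 | state 0) = 0.60), while state 1 produces "1" 95% of the time.
p_one = {0: 0.60, 1: 0.95}
n_voters, n_elections, true_state = 101, 2000, 0

wrong_informative = wrong_pooled = 0
for _ in range(n_elections):
    ones = int((rng.random(n_voters) < p_one[true_state]).sum())

    # Informative voting: each voter votes their own signal; majority of signals wins.
    informative = 1 if ones > n_voters / 2 else 0

    # Pooled inference (what coordinated, strategic aggregation can achieve):
    # pick the state under which the observed share of "1" signals is more likely.
    loglik = {s: ones * np.log(p_one[s]) + (n_voters - ones) * np.log(1 - p_one[s])
              for s in (0, 1)}
    pooled = max(loglik, key=loglik.get)

    wrong_informative += informative != true_state
    wrong_pooled += pooled != true_state

print("informative voting error rate:", wrong_informative / n_elections)
print("pooled inference error rate:  ", wrong_pooled / n_elections)
```

On this biased instance informative voting almost always selects the wrong alternative, while pooling the signals recovers the informed majority decision, matching the abstract's contrast.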
  5.
    We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long‐run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long‐run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long‐run outcomes. Third, we show that how fragile information aggregation is against misperception depends on the richness of agents' payoff‐relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents' learning environment. The key feature behind our findings is that agents' belief‐updating becomes “decoupled” from the true state over time. We point to other environments where this feature is present and leads to similar fragility results. 