Title: Local non-Bayesian social learning with stubborn agents
We study a social learning model in which agents iteratively update their beliefs about the true state of the world using private signals and the beliefs of other agents in a non-Bayesian manner. Some agents are stubborn, meaning they attempt to convince others of an erroneous true state (modeling fake news). We show that while agents learn the true state on short timescales, they "forget" it and believe the erroneous state to be true on longer timescales. Using these results, we devise strategies for seeding stubborn agents so as to disrupt learning, which outperform intuitive heuristics and give novel insights regarding vulnerabilities in social learning.
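The abstract describes the model only at a high level. As a rough illustration, the Python sketch below simulates a DeGroot-style non-Bayesian update in which each regular agent blends a Bayesian update on its private signal with an average of its neighbors' beliefs, while stubborn agents hold a fixed belief in the erroneous state. The two-state setup, Gaussian signals, the 50/50 blending weight, and the random graph are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch (not the paper's exact rule): non-Bayesian social learning on a
# network with stubborn agents who never revise their beliefs.
# Assumptions (illustrative only): two states {0, 1}, true state = 0, Gaussian
# private signals, and an update that mixes a local Bayesian update on the
# private signal with an arithmetic average of neighbors' beliefs.
import numpy as np

rng = np.random.default_rng(0)

n = 50                    # number of agents
T = 200                   # number of rounds
true_state = 0            # state the private signals are generated from
means = {0: 0.0, 1: 0.5}  # signal means under each state (assumed)
sigma = 1.0               # signal noise std (assumed)

# Random graph: symmetric adjacency with self-loops, then row-stochastic weights.
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
W = A / A.sum(axis=1, keepdims=True)

stubborn = rng.choice(n, size=5, replace=False)   # agents pushing state 1
belief = np.full(n, 0.5)                          # each agent's P(state = 1)
belief[stubborn] = 1.0

for t in range(T):
    s = rng.normal(means[true_state], sigma, size=n)      # private signals
    # Local Bayesian update of each agent's belief on its own private signal.
    like1 = np.exp(-(s - means[1])**2 / (2 * sigma**2))
    like0 = np.exp(-(s - means[0])**2 / (2 * sigma**2))
    bayes = belief * like1 / (belief * like1 + (1 - belief) * like0)
    # Non-Bayesian step: average the updated belief with neighbors' beliefs.
    belief = 0.5 * bayes + 0.5 * (W @ belief)
    belief[stubborn] = 1.0                                 # stubborn agents never move

regular = np.setdiff1d(np.arange(n), stubborn)
print("mean regular-agent belief in the erroneous state:", belief[regular].mean())
```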
Award ID(s):
2008130 1608361 2038416 1955777
PAR ID:
10332927
Author(s) / Creator(s):
;
Editor(s):
Altafini, Claudio; Como, Giacomo; Hendrickx, Julien M.; Olshevsky, Alexander; Tahbaz-Salehi, Alireza
Date Published:
Journal Name:
IEEE Transactions on Control of Network Systems
ISSN:
2372-2533
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long‐run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long‐run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long‐run outcomes. Third, we show that how fragile information aggregation is against misperception depends on the richness of agents' payoff‐relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents' learning environment. The key feature behind our findings is that agents' belief‐updating becomes “decoupled” from the true state over time. We point to other environments where this feature is present and leads to similar fragility results. 
  2. We provide a compressive-measurement-based method to detect susceptible agents who may receive misinformation through their contact with ‘stubborn agents’ whose goal is to influence the opinions of agents in the network. We consider a DeGroot-type opinion dynamics model where regular agents revise their opinions by linearly combining their neighbors’ opinions, but stubborn agents, while influencing others, do not change their opinions. Our proposed method hinges on estimating the temporal difference vector of network-wide opinions, computed at time instances when the stubborn agents interact. We show that this temporal difference vector has approximately the same support as the locations of the susceptible agents. Moreover, both the interaction instances and the temporal difference vector can be estimated from a small number of aggregated opinions. The performance of our method is studied both analytically and empirically. We show that the detection error decreases when the social network is better connected, or when the stubborn agents are ‘less talkative’. (An illustrative sketch of the temporal-difference idea appears after this list.)
  3. We present a novel experiment demonstrating strategies selfish individuals utilize to avoid social pressure to be altruistic. Subjects participate in a trust game, after which they have an opportunity to state their beliefs about their opponent's actions. Subsequently, subjects participate in a task designed to “reveal” their true beliefs. Subjects who initially made selfish choices falsely state their beliefs about their opponent's kindness. Their “revealed” beliefs were significantly more accurate, which exposed subjects' knowledge that their selfishness was unjustifiable by their opponent's behavior. The initial false statements complied with social norms, suggesting subjects' attempts to project a more favorable social image. (JEL C9, D03, D83)
  4. This work explores sequential Bayesian binary hypothesis testing in the social learning setup under expertise diversity. We consider a two-agent (say advisor-learner) sequential binary hypothesis test where the learner infers the hypothesis based on the decision of the advisor, a prior private signal, and individual belief. In addition, the agents have varying expertise, in terms of the noise variance in the private signal. Under such a setting, we first investigate the behavior of optimal agent beliefs and observe that the nature of optimal agents could be inverted depending on expertise levels. We also discuss suboptimality of the Prelec reweighting function under diverse expertise. Next, we consider an advisor selection problem wherein the belief of the learner is fixed and the advisor is to be chosen for a given prior. We characterize the decision region for choosing such an advisor and argue that a learner with beliefs varying from the true prior often ends up selecting a suboptimal advisor. 
  5. Abstract We present an approach to analyse learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel “prediction accuracy” order over subjective models and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e. from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyse environments where learning is “slow”, such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning. 
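As a rough companion to the second related work above, the sketch below simulates a broadcast-gossip variant of DeGroot opinion dynamics with a stubborn agent and shows that the temporal difference of the opinion vector, taken at an instant when the stubborn agent interacts, is supported on its neighbors (the susceptible agents). The interaction model, parameters, and the direct computation of the difference are illustrative assumptions; that paper's method instead estimates this vector from a small number of aggregated (compressed) opinions.

```python
# Illustrative sketch of the temporal-difference idea (assumed broadcast-gossip
# interaction model, for intuition only; not the paper's compressive estimator).
import numpy as np

rng = np.random.default_rng(1)

n = 30
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.maximum(A, A.T)              # symmetric adjacency
np.fill_diagonal(A, 0.0)            # no self-loops in this sketch

stubborn = [0]                      # agent that never revises its opinion
x = rng.random(n)                   # initial opinions in [0, 1]
x[stubborn] = 1.0
alpha = 0.5                         # weight placed on the broadcasting neighbor

def broadcast_step(x, speaker):
    """Speaker broadcasts its opinion; its neighbors move toward it.
    Stubborn agents never revise their own opinions."""
    x_new = x.copy()
    nbrs = np.where(A[speaker] > 0)[0]
    x_new[nbrs] = (1 - alpha) * x[nbrs] + alpha * x[speaker]
    x_new[stubborn] = x[stubborn]
    return x_new

# Let regular agents gossip for a while, then let the stubborn agent speak once.
for _ in range(200):
    x = broadcast_step(x, speaker=rng.integers(1, n))
x_before = x.copy()
x = broadcast_step(x, speaker=0)    # stubborn-agent interaction instance

diff = np.abs(x - x_before)
print("support of the temporal difference:", np.where(diff > 1e-9)[0])
print("neighbors of the stubborn agent:   ", np.where(A[0] > 0)[0])
```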