Title: Inspiring Regime Change
Abstract

We consider the problem of a leader who can assign rewards to citizens for different anti-regime actions. Citizens face a coordination problem in which each citizen has a private, endogenous degree of optimism about the likelihood of regime change. Because more optimistic citizens are easier to motivate, the choice of optimal rewards entails optimal screening, which leads to a distribution of anti-regime actions. A key result is the emergence of a vanguard, consisting of citizens who engage in the endogenous, maximum level of action. Other citizens participate to varying degrees, with less optimistic citizens contributing less. We explore how the regime’s strength and the maximum reward available to the leader influence the distribution of actions. Moreover, we show that more heterogeneity (e.g., higher inequality) among potential revolutionaries reduces the likelihood of regime change. Our methodological contribution is a sharp and novel marriage of screening and global games.

NSF-PAR ID: 10478801
Publisher / Repository: Oxford University Press
Journal Name: Journal of the European Economic Association
Volume: 21
Issue: 6
ISSN: 1542-4766
Pages: 2635-2681
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    We study a class of bilevel spanning tree (BST) problems involving two independent decision-makers (DMs) with different objectives, the leader and the follower, who jointly construct a spanning tree in a graph. The leader, who acts first, selects an initial subset of edges that do not contain a cycle from the set under her control. The follower then selects the remaining edges to complete the spanning tree, optimizing his own objective function. If there exist multiple optimal solutions for the follower that result in different objective function values for the leader, the follower may choose either the one that is most (optimistic version) or least (pessimistic version) favorable to the leader. We study BST problems with sum- and bottleneck-type objective functions for the DMs under both the optimistic and pessimistic settings. We propose polynomial-time algorithms, in both settings, for BST problems in which at least one DM has a bottleneck-type objective function. For the BST problem in which both the leader and the follower have sum-type objective functions, we provide an equivalent single-level linear mixed-integer programming formulation. A computational study then explores the efficacy of our reformulation.
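The follower's completion step described above can be sketched as a Kruskal-style greedy with union-find. This is a minimal illustration under the optimistic tie-breaking convention, not the paper's algorithm; the graph, edge costs, and the particular tie-breaking-by-sort device are all hypothetical.

```python
def find(parent, x):
    # Union-find root lookup with path compression.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def complete_spanning_tree(n, leader_edges, remaining, follower_cost, leader_cost):
    """Complete the leader's acyclic edge set to a spanning tree.

    Remaining edges are added greedily in order of follower cost; ties are
    broken in the leader's favor (the optimistic convention above).
    """
    parent = list(range(n))
    tree = []
    for u, v in leader_edges:          # the leader's forest is taken as given
        parent[find(parent, u)] = find(parent, v)
        tree.append((u, v))
    # Optimistic tie-breaking: sort by (follower's cost, leader's cost).
    for u, v in sorted(remaining, key=lambda e: (follower_cost[e], leader_cost[e])):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                   # add an edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v))
    return tree
```

On a four-node example where the leader fixes edge (0, 1), the follower completes the tree with his two cheapest acyclic edges; the pessimistic version would simply reverse the secondary sort key.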

  2.
    We study the following problem, which to our knowledge has been addressed only partially in the literature and not in full generality. An agent observes two players play a zero-sum game that is known to the players but not the agent. The agent observes the actions and state transitions of their game play, but not rewards. The players may play either optimally (according to some Nash equilibrium) or according to any other solution concept, such as a quantal response equilibrium. Following these observations, the agent must recommend a policy for one player, say Player 1. The goal is to recommend a policy that is minimally exploitable under the true, but unknown, game. We take a Bayesian approach. We establish a likelihood function based on observations and the specified solution concept. We then propose an approach based on Markov chain Monte Carlo (MCMC), which allows us to approximately sample games from the agent’s posterior belief distribution. Once we have a batch of independent samples from the posterior, we use linear programming and backward induction to compute a policy for Player 1 that minimizes the sum of exploitabilities over these games. This approximates the policy that minimizes the expected exploitability under the full distribution. Our approach is also capable of handling counterfactuals, where known modifications are applied to the unknown game. We show that our Bayesian MCMC-based technique outperforms two other techniques, one based on the equilibrium policy of the maximum-probability game and the other based on imitation of observed behavior, on all the tested stochastic game environments.
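The pipeline above (posterior sampling over games, then choosing a policy robust to that posterior) can be illustrated on a toy 2x2 zero-sum game. Everything here is an assumption for illustration: the one-parameter payoff family `game`, the simplified quantal-response likelihood against a uniform opponent, and a grid search standing in for the paper's linear-programming step.

```python
import math
import random

random.seed(0)

def game(theta):
    # Hypothetical one-parameter family of 2x2 zero-sum games (row player maximizes).
    return [[theta, -1.0], [-1.0, 1.0]]

def loglik(theta, obs):
    # Toy quantal-response-style likelihood: Player 1 logit-responds to a
    # uniform opponent; obs is a list of observed row choices (0 or 1).
    A = game(theta)
    u0 = 0.5 * (A[0][0] + A[0][1])   # expected payoff of row 0 vs. uniform
    u1 = 0.5 * (A[1][0] + A[1][1])
    p0 = math.exp(u0) / (math.exp(u0) + math.exp(u1))
    return sum(math.log(p0 if a == 0 else 1.0 - p0) for a in obs)

def mcmc(obs, steps=2000):
    # Random-walk Metropolis over theta with a flat prior on [-2, 2].
    theta, samples = 0.0, []
    ll = loglik(theta, obs)
    for _ in range(steps):
        prop = theta + random.gauss(0.0, 0.3)
        if -2.0 <= prop <= 2.0:
            llp = loglik(prop, obs)
            if math.log(random.random()) < llp - ll:
                theta, ll = prop, llp
        samples.append(theta)
    return samples[steps // 2:]      # discard burn-in

def robust_policy(samples):
    # Choose Player 1's mixture (p, 1-p) maximizing the average worst-case
    # payoff over posterior draws; per draw the game value is constant in p,
    # so this also minimizes the average exploitability.
    best_p, best_val = 0.0, -float("inf")
    for i in range(101):
        p = i / 100
        val = 0.0
        for th in samples:
            A = game(th)
            col0 = p * A[0][0] + (1 - p) * A[1][0]
            col1 = p * A[0][1] + (1 - p) * A[1][1]
            val += min(col0, col1)   # opponent best-responds column-wise
        if val > best_val:
            best_p, best_val = p, val
    return best_p
```

Feeding in observations that mostly favor row 0 concentrates the posterior on larger theta, and the recommended mixture hedges across the sampled games rather than best-responding to any single point estimate.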
  3. Abstract Objective

    Findings from prior research on reward sensitivity in nonsuicidal self‐injury (NSSI) have been mixed. Childhood maltreatment is an independent risk factor for NSSI and for hyposensitivity to rewards. This study aimed to disentangle the role of reward sensitivity as a predictor of NSSI for those with an elevated severity of childhood maltreatment.

    Method

    In a diverse undergraduate sample (N = 586), trait reward sensitivity (i.e., behavioral approach system subscales) and the severity of maltreatment were assessed as predictors of a lifetime history of NSSI. In a subset of this sample (n = 51), predictors of NSSI urge intensity were measured using ecological momentary assessment.

    Results

    Individuals with elevated maltreatment who reported less positive responsiveness to rewards were more likely to have a lifetime history of NSSI. Those with elevated maltreatment who reported a lower likelihood to approach rewards experienced more intense NSSI urges across the ten‐day observation period. However, those with elevated maltreatment who reported a greater likelihood to approach rewards experienced less intense NSSI urges.

    Conclusions

    The role of reward sensitivity as a cognitive risk factor for NSSI varies depending on childhood maltreatment history. Findings indicate that, for those with elevated maltreatment, hypersensitivity to approaching rewards may decrease risk for NSSI urges.

  4. Deep generative models parametrized up to a normalizing constant (e.g., energy-based models) are difficult to train by maximizing the likelihood of the data because the likelihood and/or gradients thereof cannot be explicitly or efficiently written down. Score matching is a training method whereby, instead of fitting the likelihood for the training data, we fit the score function, obviating the need to evaluate the partition function. Though this estimator is known to be consistent, it is unclear whether (and when) its statistical efficiency is comparable to that of maximum likelihood, which is known to be (asymptotically) optimal. We initiate this line of inquiry in this paper and show a tight connection between the statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated (the Poincaré, log-Sobolev, and isoperimetric constants), quantities that govern the mixing time of Markov processes like Langevin dynamics. Roughly, we show that the score matching estimator is statistically comparable to maximum likelihood when the distribution has a small isoperimetric constant. Conversely, if the distribution has a large isoperimetric constant, even for simple families of distributions like exponential families with rich enough sufficient statistics, score matching will be substantially less efficient than maximum likelihood. We suitably formalize these results both in the finite-sample regime and in the asymptotic regime. Finally, we identify a direct parallel in the discrete setting, where we connect the statistical properties of pseudolikelihood estimation with approximate tensorization of entropy and the Glauber dynamics.
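As a concrete illustration of fitting the score rather than the likelihood, here is a minimal sketch of Hyvärinen-style score matching in one dimension. The linear score model s(x) = a*x + b (the score of a Gaussian) makes the sample objective J(a, b) = mean(0.5*s(x)^2 + s'(x)) minimizable in closed form; the data-generating parameters are assumptions for the example.

```python
import random
import statistics

random.seed(1)

# Hypothetical data from a Gaussian with mean 2.0 and std 1.5;
# the partition function is never evaluated anywhere below.
xs = [random.gauss(2.0, 1.5) for _ in range(50_000)]

m1 = statistics.fmean(xs)                 # first sample moment
m2 = statistics.fmean(x * x for x in xs)  # second sample moment

# Setting the gradients of J(a, b) = mean(0.5*(a*x + b)**2 + a) to zero
# gives a linear system:
#   dJ/da: a*m2 + b*m1 + 1 = 0,   dJ/db: a*m1 + b = 0
a = -1.0 / (m2 - m1 * m1)
b = -a * m1

# The fitted score s(x) = -(x - mu)/sigma^2 recovers the Gaussian parameters.
mu_hat = -b / a
sigma2_hat = -1.0 / a
```

With 50,000 samples the recovered `mu_hat` and `sigma2_hat` land close to the true mean 2.0 and variance 2.25, matching the consistency claim in the abstract for this well-conditioned (small-isoperimetric-constant) case.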
  5. Context

    US states are largely responsible for the regulation of firearms within their borders. Each state has developed a different legal environment with regard to firearms based on different values and beliefs of citizens, legislators, governors, and other stakeholders. Predicting the types of firearm laws that states may adopt is therefore challenging.

    Methods

    We propose a parsimonious model for this complex process and provide credible predictions of state firearm laws by estimating the likelihood they will be passed in the future. We employ a temporal exponential‐family random graph model to capture the bipartite state law–state network data over time, allowing for complex interdependencies and their temporal evolution. Using data on all state firearm laws over the period 1979–2020, we estimate these models’ parameters while controlling for factors associated with firearm law adoption, including internal and external state characteristics. Predictions of future firearm law passage are then calculated based on a number of scenarios to assess the effects of a given type of firearm law being passed in the future by a given state.

    Findings

    Results show that a set of internal state factors are important predictors of firearm law adoption, but the actions of neighboring states may be just as important. Analysis of scenarios provides insights into how the adoption of laws by specific states (or groups of states) may perturb the rest of the network structure and make new laws more (or less) likely to continue to diffuse to other states.

    Conclusions

    The methods used here outperform standard approaches for policy diffusion studies and afford predictions that are superior to those of an ensemble of machine learning tools. The proposed framework could have applications for the study of policy diffusion in other domains.
