Title: Community formation in wealth-mediated thermodynamic strategy evolution
We study a dynamical system defined by a repeated game on a 1D lattice, in which the players keep track of their gross payoffs over time in a bank. Strategy updates are governed by a Boltzmann distribution, which depends on the neighborhood bank values associated with each strategy, relative to a temperature scale, which defines the random fluctuations. Players with higher bank values are, thus, less likely to change strategy than players with a lower bank value. For a parameterized rock–paper–scissors game, we derive a condition under which communities of a given strategy form with either fixed or drifting boundaries. We show the effect of a temperature increase on the underlying system and identify surprising properties of this model through numerical simulations.
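To make the setup above concrete, here is a minimal simulation sketch of this kind of model. The payoff parameterization (the loss parameter eps), the three-site neighborhood, and the synchronous update schedule are assumptions for illustration only; the paper's exact model and parameters may differ.

```python
import numpy as np

# Minimal sketch of a wealth-mediated Boltzmann strategy update on a 1D lattice.
# eps, the neighborhood size, and the update schedule are illustrative assumptions.

rng = np.random.default_rng(0)

N, T, steps = 200, 0.5, 500          # lattice sites, temperature, sweeps
eps = 0.5                            # assumed loss parameter of the RPS game

# Parameterized rock-paper-scissors payoff matrix (row player vs. column player).
payoff = np.array([[0.0, -eps, 1.0],
                   [1.0,  0.0, -eps],
                   [-eps, 1.0,  0.0]])

strategy = rng.integers(0, 3, size=N)    # 0 = rock, 1 = paper, 2 = scissors
bank = np.zeros(N)                       # accumulated gross payoffs ("bank")

for _ in range(steps):
    left, right = np.roll(strategy, 1), np.roll(strategy, -1)
    # Each player plays both lattice neighbors and banks the gross payoff.
    bank += payoff[strategy, left] + payoff[strategy, right]

    new_strategy = strategy.copy()
    for i in range(N):
        nbrs = [(i - 1) % N, i, (i + 1) % N]
        # Bank value associated with each strategy present in the neighborhood.
        B = np.zeros(3)
        present = np.zeros(3, dtype=bool)
        for j in nbrs:
            B[strategy[j]] += bank[j]
            present[strategy[j]] = True
        # Boltzmann distribution over the strategies present, at temperature T
        # (shifting by the max keeps the exponentials numerically stable).
        w = np.exp((B[present] - B[present].max()) / T)
        new_strategy[i] = rng.choice(np.flatnonzero(present), p=w / w.sum())
    strategy = new_strategy
```

Sweeping T upward in a sketch like this is one way to probe how stronger random fluctuations affect the community boundaries discussed in the abstract; at low T, high-bank sites dominate their neighborhoods, while at high T strategy choice approaches uniform noise.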
Award ID(s):
1814876
PAR ID:
10363854
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Chaos: An Interdisciplinary Journal of Nonlinear Science
Volume:
32
Issue:
10
ISSN:
1054-1500
Page Range / eLocation ID:
103103
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Foldit is a citizen science video game in which players tackle a variety of complex biochemistry puzzles. Here, we describe a new series of puzzles in which Foldit players improve the accuracy of the public repository of experimental protein structure models, the Protein Data Bank (PDB). Analyzing the results of these puzzles showed that the Foldit players were able to considerably improve the deposited structures and thus, in most cases, improved the output of the automated PDB-REDO refinement pipeline. These improved structures are now being hosted at PDB-REDO. These efforts highlight the continued need for the engagement of the lay population in science.
2. Mixed strategies are often evaluated based on the expected payoff that they guarantee. This is not always desirable. In this paper, we consider games for which maximizing the expected payoff deviates from the actual goal of the players. To address this issue, we introduce the notion of a (u, p)-maxmin strategy, which ensures receiving a minimum utility of u with probability at least p. We then give approximation algorithms for the problem of finding a (u, p)-maxmin strategy for these games. The first game that we consider is Colonel Blotto, a well-studied game introduced in 1921. In the Colonel Blotto game, two colonels divide their troops among a set of battlefields. Each battlefield is won by the colonel who puts more troops in it. The payoff of each colonel is the weighted number of battlefields that she wins. We show that maximizing the expected payoff of a player does not necessarily maximize her winning probability for certain applications of Colonel Blotto. For example, in presidential elections, the players' goal is to maximize the probability of winning more than half of the votes, rather than maximizing the expected number of votes that they get. We give an exact algorithm for a natural variant of the continuous version of this game. More generally, we provide constant and logarithmic approximation algorithms for finding (u, p)-maxmin strategies. We also introduce a security-game version of Colonel Blotto, which we call the auditing game. It is played between two players, a defender and an attacker. The goal of the defender is to prevent the attacker from changing the outcome of an instance of Colonel Blotto. Again, maximizing the expected payoff of the defender is not necessarily optimal; therefore, we give a constant approximation for (u, p)-maxmin strategies. (A toy numerical sketch of the (u, p)-maxmin criterion appears after this list.)
3. Most cybersecurity research focuses on either presenting a specific vulnerability or hacking technique, or proposing a specific defense algorithm against a well-defined attack scheme. Although such research is important, little attention has been paid to the dynamic interactions between attackers and defenders, where both sides are intelligent and dynamically change their attack or defense strategies in order to gain the upper hand over their opponents. This 'cyberwar' phenomenon exists in most real-world cybersecurity incidents, which warrants special research and analysis. In this paper, we propose a dynamic game-theoretic framework (i.e., hyper defense) to analyze the interactions between the attacker and the defender as a non-cooperative security game. The key idea is to model attackers and defenders as having multiple levels of attack and defense strategies that differ in effectiveness, strategy cost, and attack gain or damage. Each player adjusts his strategy based on the strategy's cost, potential attack gain/damage, and effectiveness in anticipation of the opponent's strategy. We study the achievable Nash equilibrium of the attacker-defender security game, in which the players employ an efficient strategy according to the obtained equilibrium. Furthermore, we present case studies of three different types of network attacks and show how our hyper defense system can model them. Simulation results show that the proposed game-theoretic system achieves better performance than two fixed-strategy defense systems. (A small payoff-construction sketch for such an attacker-defender game follows this list.)
4. In the past few decades, numerous experiments have shown that humans do not always behave so as to maximize their material payoff. Cooperative behavior when noncooperation is a dominant strategy (with respect to the material payoffs) is particularly puzzling. Here we propose a novel approach to explain cooperation, assuming what Halpern and Pass call translucent players. Typically, players are assumed to be opaque, in the sense that a deviation by one player in a normal-form game does not affect the strategies used by other players. However, a player may believe that if he switches from one strategy to another, the fact that he chooses to switch may be visible to the other players. For example, if he chooses to defect in Prisoner's Dilemma, the other player may sense his guilt. We show that by assuming translucent players, we can recover many of the regularities observed in human behavior in well-studied games such as Prisoner's Dilemma, Traveler's Dilemma, Bertrand Competition, and the Public Goods game. The approach can also be extended to take into account a player's concerns that his social group (or God) may observe his actions. This extension helps explain prosocial behavior in situations in which previous models of social behavior fail to make correct predictions (e.g., conflict situations and situations where there is a trade-off between equity and efficiency). (A simplified numerical illustration of translucency appears after this list.)
5. When learning in strategic environments, a key question is whether agents can overcome uncertainty about their preferences to achieve outcomes they could have achieved absent any uncertainty. Can they do this solely through interactions with each other? We focus this question on the ability of agents to attain the value of their Stackelberg optimal strategy and study the impact of information asymmetry. We study repeated interactions in fully strategic environments where players' actions are decided by learning algorithms that take into account their observed histories and knowledge of the game. We study the pure Nash equilibria (PNE) of a meta-game in which players choose these algorithms as their actions. We demonstrate that if one player has perfect knowledge of the game, then any initial informational gap persists. That is, while there is always a PNE in which the informed agent achieves her Stackelberg value, there is a game in which no PNE of the meta-game allows the partially informed player to achieve her Stackelberg value. On the other hand, if both players start with some uncertainty about the game, the quality of information alone does not determine which agent can achieve her Stackelberg value. In this case, the concept of information asymmetry becomes nuanced and depends on the game's structure. Overall, our findings suggest that repeated strategic interactions alone cannot facilitate learning effectively enough to earn an uninformed player her Stackelberg value. (A toy Stackelberg-value example follows this list.)
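The short sketches below illustrate some of the technical notions from the related works above; all numerical values, game instances, and implementation choices in them are assumptions for illustration, not taken from the cited papers.

The (u, p)-maxmin notion from item 2 can be made concrete with a toy Colonel Blotto instance. The battlefield weights, troop count, and the Monte Carlo check against a single fixed opponent mix are all assumed; a genuine (u, p)-maxmin guarantee must hold against every opponent strategy, which this sketch does not verify.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, 1.0, 2.0])        # battlefield weights (assumed)
troops = 5                           # troops per colonel (assumed)

def partitions(total, parts):
    """All ways to split `total` troops over `parts` battlefields."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in partitions(total - first, parts - 1):
            yield (first,) + rest

pure = list(partitions(troops, len(w)))   # pure strategies of one colonel

def payoff(a, b):
    """Weighted number of battlefields won by colonel A (ties won by neither)."""
    a, b = np.array(a), np.array(b)
    return float(w[a > b].sum())

def prob_at_least(u, strat_a, strat_b, samples=10000):
    """Monte Carlo estimate of P[payoff >= u] when both colonels mix."""
    ia = rng.choice(len(pure), size=samples, p=strat_a)
    ib = rng.choice(len(pure), size=samples, p=strat_b)
    vals = np.array([payoff(pure[i], pure[j]) for i, j in zip(ia, ib)])
    return float((vals >= u).mean())

uniform = np.full(len(pure), 1.0 / len(pure))
# A (u, p)-maxmin strategy guarantees payoff >= u with probability >= p against
# every opponent strategy; here we only check one opponent (the uniform mix).
print(prob_at_least(u=2.0, strat_a=uniform, strat_b=uniform))
```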
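For the attacker-defender framework in item 3, the sketch below shows one plausible way to build payoff matrices from assumed strategy costs, gains/damages, and effectiveness levels, and to search for pure Nash equilibria by checking unilateral deviations. The numerical values and the payoff form are hypothetical, not the paper's model.

```python
import numpy as np

attack_cost   = np.array([1.0, 2.0, 4.0])    # cost of attack level 0..2 (assumed)
attack_gain   = np.array([3.0, 6.0, 10.0])   # damage if the attack succeeds (assumed)
defense_cost  = np.array([1.0, 3.0, 5.0])    # cost of defense level 0..2 (assumed)
# effectiveness[d, a] = probability that defense level d stops attack level a
effectiveness = np.array([[0.90, 0.50, 0.10],
                          [0.95, 0.80, 0.40],
                          [0.99, 0.90, 0.80]])

n_def, n_att = effectiveness.shape
U_def = np.empty((n_def, n_att))   # defender payoff for (defense, attack) pair
U_att = np.empty((n_def, n_att))   # attacker payoff for (defense, attack) pair
for d in range(n_def):
    for a in range(n_att):
        p_success = 1.0 - effectiveness[d, a]
        U_def[d, a] = -defense_cost[d] - p_success * attack_gain[a]
        U_att[d, a] = -attack_cost[a] + p_success * attack_gain[a]

# Pure Nash equilibria: no player gains by unilaterally changing level.
# (The list may be empty, in which case only mixed equilibria exist.)
equilibria = [(d, a) for d in range(n_def) for a in range(n_att)
              if U_def[d, a] >= U_def[:, a].max()
              and U_att[d, a] >= U_att[d, :].max()]
print("pure Nash equilibria (defense level, attack level):", equilibria)
```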
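The translucency idea in item 4 can be shown with a simplified numerical Prisoner's Dilemma example. The payoff values and the single detection probability alpha are assumptions; the formal framework of Halpern and Pass is richer than this sketch.

```python
# Payoffs to the row player: (my_move, opponent_move) -> payoff (assumed values).
PD = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def deviation_payoff(alpha):
    """Expected payoff of switching from C to D when, with probability alpha,
    the opponent senses the switch and defects in response."""
    return alpha * PD[("D", "D")] + (1 - alpha) * PD[("D", "C")]

# Cooperation resists this deviation when its payoff (3) exceeds the expected
# deviation payoff 5 - 4*alpha, i.e. when alpha > 0.5.
for alpha in (0.0, 0.25, 0.5, 0.75):
    print(alpha, deviation_payoff(alpha))
```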
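Finally, the Stackelberg value referenced in item 5 can be illustrated in a small bimatrix game, restricted here to pure-strategy commitments for simplicity. The payoff entries are made up, and the paper's repeated-learning setting is far more general than this one-shot example.

```python
import numpy as np

# Toy bimatrix game (rows: leader actions, columns: follower actions); assumed values.
U_leader   = np.array([[2.0, 4.0],
                       [6.0, 1.0]])
U_follower = np.array([[1.0, 3.0],
                       [2.0, 0.0]])

def stackelberg_value_pure(U_l, U_f):
    """Leader's best payoff when committing to a pure action and the follower
    best responds (ties broken in the leader's favor)."""
    best = -np.inf
    for i in range(U_l.shape[0]):
        # Follower's best responses to the leader's committed action i.
        responses = np.flatnonzero(U_f[i] == U_f[i].max())
        best = max(best, U_l[i, responses].max())
    return best

print(stackelberg_value_pure(U_leader, U_follower))
# Committing to row 0 induces column 1 (leader gets 4), while committing to row 1
# induces column 0 (leader gets 6), so the pure-commitment Stackelberg value is 6.
```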