Title: On the existence of Nash Equilibrium in games with resource-bounded players
We consider computational games, sequences of games $G = (G_1, G_2, \ldots)$ where, for all $n$, $G_n$ has the same set of players. Computational games arise in electronic money systems such as Bitcoin, in cryptographic protocols, and in the study of generative adversarial networks in machine learning. Assuming that one-way functions exist, we prove that there is a 2-player zero-sum computational game $G$ such that, for all $n$, the size of the action space in $G_n$ is polynomial in $n$ and the utility function in $G_n$ is computable in time polynomial in $n$, and yet there is no ε-Nash equilibrium if players are restricted to using strategies computable by polynomial-time Turing machines, where we use a notion of Nash equilibrium that is tailored to computational games. We also show that an ε-Nash equilibrium may not exist if players are constrained to perform at most $T$ computational steps in each of the games in the sequence. On the other hand, we show that if players can use arbitrary Turing machines to compute their strategies, then every computational game has an ε-Nash equilibrium. These results may shed light on competitive settings where the availability of more running time or faster algorithms can lead to a “computational arms race”, precluding the existence of equilibrium. They also point to inherent limitations of concepts such as “best response” and Nash equilibrium in games with resource-bounded players.
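For orientation, the benchmark being relaxed here is the standard ε-Nash condition for a one-shot game; the paper's notion adapts it to sequences of games and to machine-bounded players, and its exact machine-indexed definition is not reproduced in this summary. For a strategy profile $\sigma = (\sigma_1, \ldots, \sigma_k)$ with utility functions $u_i$, the usual requirement is
\[
u_i(\sigma_i, \sigma_{-i}) \;\ge\; u_i(\sigma_i', \sigma_{-i}) - \epsilon \qquad \text{for every player $i$ and every alternative strategy $\sigma_i'$,}
\]
where, in the computational setting, the deviations $\sigma_i'$ range only over strategies in the allowed resource class (e.g., polynomial-time Turing machines) and utilities are aggregated over the sequence $(G_1, G_2, \ldots)$ as specified in the paper.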
A two-player finite game is represented by two payoff matrices $(A, B)$, one for each player. Imitation games are a subclass of two-player games in which $B$ is the identity matrix, implying that the second player gets a positive payoff only if she "imitates" the first. Given that the problem of computing a Nash equilibrium (NE) is known to be provably hard, even to approximate, we ask if it is any easier for imitation games. We show that, much like the general case, for any $c > 0$, computing a $1/n^c$-approximate NE of imitation games remains PPAD-hard, where $n$ is the number of moves available to the players. On the other hand, we design a polynomial-time algorithm to find an ε-approximate NE for any given constant ε > 0 (PTAS). The former result also rules out the smoothed complexity being in P, unless PPAD ⊂ RP.
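As a small, self-contained illustration of the object being approximated (this is only a verification sketch with placeholder inputs, not the PTAS or the hardness reduction from the paper), one can check whether a candidate pair of mixed strategies is an ε-approximate NE of an imitation game $(A, I)$ by comparing each player's payoff against their best pure deviation:

# Hypothetical verification sketch: check whether mixed strategies (x, y) form
# an epsilon-approximate Nash equilibrium of an imitation game (A, I).
# This is not the PTAS from the paper; A, x, y, eps below are placeholders.
import numpy as np

def is_eps_approx_ne_imitation(A, x, y, eps):
    """Row player's payoff matrix is A, column player's is the identity I,
    so the column player is rewarded only for matching (imitating) the row player."""
    n = A.shape[0]
    I = np.eye(n)
    row_payoff = x @ A @ y           # row player's expected payoff under (x, y)
    col_payoff = x @ I @ y           # column player's expected payoff (= sum_i x_i * y_i)
    row_best_dev = np.max(A @ y)     # best pure deviation for the row player against y
    col_best_dev = np.max(x @ I)     # best pure deviation for the column player against x
    return (row_payoff >= row_best_dev - eps) and (col_payoff >= col_best_dev - eps)

# Placeholder example: a 2x2 imitation game with uniform mixing by both players.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x = y = np.array([0.5, 0.5])
print(is_eps_approx_ne_imitation(A, x, y, eps=0.1))   # True here

Checking only pure deviations suffices because a mixed deviation's expected payoff is a convex combination of pure-deviation payoffs.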
Hajek, Bruce; Livesay, Michael (2019 58th Conference on Decision and Control)
The theory of mean field games is a tool to understand noncooperative dynamic stochastic games with a large number of players. Much of the theory has evolved under conditions ensuring uniqueness of the mean field game Nash equilibrium. However, in some situations, typically involving symmetry breaking, non-uniqueness of solutions is an essential feature. To investigate the nature of non-unique solutions, this paper focuses on the technically simple setting where players have one of two states, with continuous-time dynamics, the game is symmetric in the players, and players are restricted to using Markov strategies. All the mean field game Nash equilibria are identified for a symmetric follow-the-crowd game. Such equilibria correspond to symmetric $\epsilon$-Nash Markov equilibria for $N$ players with $\epsilon$ converging to zero as $N$ goes to infinity. In contrast to the mean field game, there is a unique Nash equilibrium for finite $N$. It is shown that fluid limits arising from the Nash equilibria for finite $N$ as $N$ goes to infinity are mean field game Nash equilibria, and evidence is given supporting the conjecture that such limits, among all mean field game Nash equilibria, are the ones that are stable fixed points of the mean field best response mapping.
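Loosely, in the notation suggested by the abstract (the specific follow-the-crowd dynamics are not reproduced here), the two objects being compared are fixed points of the mean field best response mapping $\Phi$ on population states,
\[
m^* = \Phi(m^*),
\]
together with the property that the symmetric profile induced by $m^*$ is an $\epsilon_N$-Nash Markov equilibrium of the $N$-player game with $\epsilon_N \to 0$ as $N \to \infty$.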
Daskalakis, Constantinos; Golowich, Noah; Haghtalab, Nika; Shetty, Abhishek (Leibniz International Proceedings in Informatics (LIPIcs): 15th Innovations in Theoretical Computer Science Conference (ITCS 2024))
Guruswami, Venkatesan (Ed.)
A fundamental shortcoming of the concept of Nash equilibrium is its computational intractability: approximating Nash equilibria in normal-form games is PPAD-hard. In this paper, inspired by the ideas of smoothed analysis, we introduce a relaxed variant of Nash equilibrium called σ-smooth Nash equilibrium, for a smoothness parameter σ. In a σ-smooth Nash equilibrium, players only need to achieve utility at least as high as their best deviation to a σ-smooth strategy, which is a distribution that does not put too much mass (as parametrized by σ) on any fixed action. We distinguish two variants of σ-smooth Nash equilibria: strong σ-smooth Nash equilibria, in which players are required to play σ-smooth strategies under equilibrium play, and weak σ-smooth Nash equilibria, where there is no such requirement.
We show that both weak and strong σ-smooth Nash equilibria have superior computational properties to Nash equilibria: when σ as well as an approximation parameter ϵ and the number of players are all constants, there is a constant-time randomized algorithm to find a weak ϵ-approximate σ-smooth Nash equilibrium in normal-form games. In the same parameter regime, there is a polynomial-time deterministic algorithm to find a strong ϵ-approximate σ-smooth Nash equilibrium in a normal-form game. These results stand in contrast to the optimal algorithm for computing ϵ-approximate Nash equilibria, which cannot run in less than quasipolynomial time, subject to complexity-theoretic assumptions. We complement our upper bounds by showing that when either σ or ϵ is an inverse polynomial, finding a weak ϵ-approximate σ-smooth Nash equilibrium becomes computationally intractable.
Our results are the first to propose a variant of Nash equilibrium which is computationally tractable, allows players to act independently, and which, as we discuss, is justified by an extensive line of work on individual choice behavior in the economics literature.
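In symbols, using one common formalization of smoothness (which may differ in details from the paper's): a mixed strategy $q$ over $n$ actions is σ-smooth if $q(a) \le \frac{1}{\sigma n}$ for every action $a$, i.e., it is dominated by $\frac{1}{\sigma}$ times the uniform distribution. A profile $x$ is then a weak ϵ-approximate σ-smooth Nash equilibrium if, for every player $i$,
\[
u_i(x) \;\ge\; \max_{q \ \sigma\text{-smooth}} u_i(q, x_{-i}) - \epsilon,
\]
and the strong variant additionally requires each $x_i$ itself to be σ-smooth.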
Myerson, Roger B.; Reny, Philip J. (Econometrica)
We extend Kreps and Wilson's concept of sequential equilibrium to games with infinite sets of signals and actions. A strategy profile is a conditional ε-equilibrium if, for any of a player's positive probability signal events, his conditional expected utility is within ε of the best that he can achieve by deviating. With topologies on action sets, a conditional ε-equilibrium is full if strategies give every open set of actions positive probability. Such full conditional ε-equilibria need not be subgame perfect, so we consider a non-topological approach. Perfect conditional ε-equilibria are defined by testing conditional ε-rationality along nets of small perturbations of the players' strategies and of nature's probability function that, for any action and for almost any state, make this action and state eventually (in the net) always have positive probability. Every perfect conditional ε-equilibrium is a subgame perfect ε-equilibrium, and, in finite games, limits of perfect conditional ε-equilibria as ε → 0 are sequential equilibrium strategy profiles. But limit strategies need not exist in infinite games so we consider instead the limit distributions over outcomes. We call such outcome distributions perfect conditional equilibrium distributions and establish their existence for a large class of regular projective games. Nature's perturbations can produce equilibria that seem unintuitive and so we augment the game with a net of permissible perturbations.
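In rough symbols (suppressing measurability details), the conditional ε-equilibrium requirement on a profile $\sigma$ is that, for every player $i$, every signal event $S$ of player $i$ with positive probability under $\sigma$, and every deviation $\sigma_i'$,
\[
\mathbb{E}_{\sigma}[u_i \mid S] \;\ge\; \mathbb{E}_{(\sigma_i', \sigma_{-i})}[u_i \mid S] - \epsilon.
\]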
We study the sample complexity of identifying an approximate equilibrium for two-player zero-sum $n \times 2$ matrix games. That is, in a sequence of repeated game plays, how many rounds must the two players play before reaching an approximate equilibrium (e.g., Nash)? We derive instance-dependent bounds that define an ordering over game matrices that captures the intuition that the dynamics of some games converge faster than others. Specifically, we consider a stochastic observation model such that when the two players choose actions $i$ and $j$, respectively, they both observe each other's played actions and a stochastic observation $X_{ij}$ such that $\mathbb{E}[X_{ij}] = A_{ij}$. To our knowledge, our work is the first case of instance-dependent lower bounds on the number of rounds the players must play before reaching an approximate equilibrium, in the sense that the number of rounds depends on the specific properties of the game matrix $A$ as well as the desired accuracy. We also prove a converse statement: there exist player strategies that achieve this lower bound.
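A minimal simulation of the observation model described above may help fix notation; the game matrix, the Gaussian noise, and the uniformly random action choices below are placeholder assumptions, and the instance-dependent bounds themselves are not reproduced:

# Hypothetical sketch of the stochastic observation model: each round, the players
# pick actions i and j, see each other's actions, and both observe X_ij with
# E[X_ij] = A_ij. The matrix, noise model, and uniform play below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.8, 0.2],
              [0.3, 0.6],
              [0.5, 0.5]])              # placeholder n x 2 game matrix
n, m = A.shape
counts = np.zeros((n, m))
sums = np.zeros((n, m))

for t in range(10_000):
    i = rng.integers(n)                 # placeholder row strategy (uniform)
    j = rng.integers(m)                 # placeholder column strategy (uniform)
    x_ij = rng.normal(A[i, j], 0.1)     # stochastic observation with mean A_ij
    counts[i, j] += 1
    sums[i, j] += x_ij

A_hat = sums / np.maximum(counts, 1)    # empirical estimate of A from the observations
print(np.round(A_hat, 2))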
Halpern, Joseph Y, Pass, Rafael, and Reichman, Daniel. On the existence of Nash Equilibrium in games with resource-bounded players. Retrieved from https://par.nsf.gov/biblio/10156235. Proceedings of the 12th International Symposium on Algorithmic Game Theory (SAGT).
Halpern, Joseph Y, Pass, Rafael, & Reichman, Daniel. On the existence of Nash Equilibrium in games with resource-bounded players. Proceedings of the 12th International Symposium on Algorithmic Game Theory (SAGT). Retrieved from https://par.nsf.gov/biblio/10156235.
Halpern, Joseph Y, Pass, Rafael, and Reichman, Daniel. "On the existence of Nash Equilibrium in games with resource-bounded players". Proceedings of the 12th International Symposium on Algorithmic Game Theory (SAGT). https://par.nsf.gov/biblio/10156235.
@article{osti_10156235,
title = {On the existence of Nash Equilibrium in games with resource-bounded players},
url = {https://par.nsf.gov/biblio/10156235},
journal = {Proceedings of the 12th International Symposium on Algorithmic Game Theory (SAGT)},
author = {Halpern, Joseph Y and Pass, Rafael and Reichman, Daniel},
}