
Title: On the existence of Nash Equilibrium in games with resource-bounded players
We consider computational games, sequences of games G = (G1,G2,...) where, for all n, Gn has the same set of players. Computational games arise in electronic money systems such as Bitcoin, in cryptographic protocols, and in the study of generative adversarial networks in machine learning. Assuming that one-way functions exist, we prove that there is a 2-player zero-sum computational game G such that, for all n, the size of the action space in Gn is polynomial in n and the utility function in Gn is computable in time polynomial in n, and yet there is no ε-Nash equilibrium if players are restricted to using strategies computable by polynomial-time Turing machines, where we use a notion of Nash equilibrium that is tailored to computational games. We also show that an ε-Nash equilibrium may not exist if players are constrained to perform at most T computational steps in each of the games in the sequence. On the other hand, we show that if players can use arbitrary Turing machines to compute their strategies, then every computational game has an ε-Nash equilibrium. These results may shed light on competitive settings where the availability of more running time or faster algorithms can lead to a “computational arms race”, precluding the existence of equilibrium. They also point to inherent limitations of concepts such as “best response” and Nash equilibrium in games with resource-bounded players.
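For intuition about the ε-Nash condition that the paper refines for computational games, here is a minimal numpy sketch (illustrative, not from the paper) of checking the standard ε-Nash condition in a single finite zero-sum game:

```python
import numpy as np

def eps_nash_gap(A, x, y):
    """Largest unilateral-deviation gain at mixed profile (x, y) in a
    zero-sum game where the row player receives A and the column player -A.
    (x, y) is an eps-Nash equilibrium exactly when the gap is <= eps."""
    value = x @ A @ y
    row_gain = (A @ y).max() - value   # row player's best pure deviation
    col_gain = value - (x @ A).min()   # column player's best pure deviation
    return max(row_gain, col_gain)

# Matching pennies: uniform play is the unique equilibrium, with value 0.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
uniform = np.array([0.5, 0.5])
print(eps_nash_gap(A, uniform, uniform))               # 0.0: exact equilibrium
print(eps_nash_gap(A, np.array([0.7, 0.3]), uniform))  # 0.4: only a 0.4-Nash profile
```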
Award ID(s):
1703846
NSF-PAR ID:
10156235
Journal Name:
Proceedings of the 12th International Symposium on Algorithmic Game Theory (SAGT)
Sponsoring Org:
National Science Foundation
More Like this
  1. A two-player finite game is represented by two payoff matrices (A, B), one for each player. Imitation games are a subclass of two-player games in which B is the identity matrix, implying that the second player gets a positive payoff only if she "imitates" the first. Given that the problem of computing a Nash equilibrium (NE) is known to be provably hard, even to approximate, we ask if it is any easier for imitation games. We show that, much like the general case, for any c > 0, computing a $1/n^c$-approximate NE of imitation games remains PPAD-hard, where n is the number of moves available to the players. On the other hand, we design a polynomial-time algorithm to find an ε-approximate NE for any given constant ε > 0 (PTAS). The former result also rules out the smoothed complexity being in P, unless PPAD ⊂ RP. (A small numerical illustration of the approximate-NE condition in imitation games appears after this list.)
  2. We analyze a class of stochastic dynamic games among teams with asymmetric information, where members of a team share their observations internally with a delay of d. Each team is associated with a controlled Markov chain whose dynamics are coupled through the players’ actions. These games exhibit challenges in both theory and practice due to the presence of signaling and the increasing domain of information over time. We develop a general approach to characterize a subset of Nash equilibria in which the agents can use a compressed version of their information, instead of the full information, to choose their actions. We identify two subclasses of strategies: sufficient private information-based (SPIB) strategies, which compress only private information, and compressed information-based (CIB) strategies, which compress both common and private information. We show that SPIB-strategy-based equilibria exist and that the set of payoff profiles of such equilibria is the same as that of all Nash equilibria. On the other hand, we show that CIB-strategy-based equilibria may not exist. We develop a backward inductive sequential procedure whose solution (if it exists) provides a CIB-strategy-based equilibrium, and we identify some instances where the existence of a solution to this procedure can be guaranteed. (A toy skeleton of such a backward inductive procedure appears after this list.) Our results highlight the tension among compression of information, the ability of compression-based strategies to sustain all or some of the equilibrium payoff profiles, and backward inductive sequential computation of equilibria in stochastic dynamic games with asymmetric information.
  3. The theory of mean field games is a tool to understand noncooperative dynamic stochastic games with a large number of players. Much of the theory has evolved under conditions ensuring uniqueness of the mean field game Nash equilibrium. However, in some situations, typically involving symmetry breaking, non-uniqueness of solutions is an essential feature. To investigate the nature of non-unique solutions, this paper focuses on the technically simple setting where players have one of two states, with continuous-time dynamics; the game is symmetric in the players, and players are restricted to using Markov strategies. All the mean field game Nash equilibria are identified for a symmetric follow-the-crowd game. Such equilibria correspond to symmetric $\epsilon$-Nash Markov equilibria for $N$ players, with $\epsilon$ converging to zero as $N$ goes to infinity. In contrast to the mean field game, there is a unique Nash equilibrium for finite $N$. It is shown that fluid limits arising from the Nash equilibria for finite $N$, as $N$ goes to infinity, are mean field game Nash equilibria, and evidence is given supporting the conjecture that such limits, among all mean field game Nash equilibria, are the ones that are stable fixed points of the mean field best response mapping. (A static caricature of that best response mapping and its fixed points is sketched after this list.)
  4. We study the following problem, which to our knowledge has been addressed only partially in the literature and not in full generality. An agent observes two players play a zero-sum game that is known to the players but not the agent. The agent observes the actions and state transitions of their game play, but not rewards. The players may play either optimally (according to some Nash equilibrium) or according to any other solution concept, such as a quantal response equilibrium. Following these observations, the agent must recommend a policy for one player, say Player 1. The goal is to recommend a policy that is minimally exploitable under the true, but unknown, game. We take a Bayesian approach. We establish a likelihood function based on observations and the specified solution concept. We then propose an approach based on Markov chain Monte Carlo (MCMC), which allows us to approximately sample games from the agent’s posterior belief distribution. Once we have a batch of independent samples from the posterior, we use linear programming and backward induction to compute a policy for Player 1 that minimizes the sum of exploitabilities over these games. This approximates the policy that minimizes the expected exploitability under the full distribution. Our approach is also capable of handling counterfactuals, where known modifications are applied to the unknown game. We show that our Bayesian MCMC-based technique outperforms two other techniques, one based on the equilibrium policy of the maximum-probability game and the other based on imitation of observed behavior, on all the tested stochastic game environments. (A minimal posterior-sampling sketch appears after this list.)
  5. In the Colonel Blotto game, initially introduced by Borel in 1921, two colonels simultaneously distribute their troops across different battlefields. The winner of each battlefield is determined independently by a winner-takes-all rule, and the ultimate payoff for each colonel is the number of battlefields won. The Colonel Blotto game is commonly used for analyzing a wide range of applications, from U.S. Presidential elections to innovative technology competitions to advertising, sports, and politics. There have been persistent efforts to find optimal strategies for the Colonel Blotto game, but the first polynomial-time algorithm for doing so was only recently provided by Ahmadinejad, Dehghani, Hajiaghayi, Lucier, Mahini, and Seddighin. Their algorithm consists of an exponential-size linear program (LP), which they solve using the ellipsoid method. Because of its use of the ellipsoid method, this algorithm, despite its significant theoretical importance, is highly impractical; in general, even the simplex method (despite its exponential worst-case running time) performs better than the ellipsoid method in practice. In this paper, we provide the first polynomial-size LP formulation of the optimal strategies for the Colonel Blotto game, using linear extension techniques. Roughly speaking, we consider the natural representation of the strategy space polytope and transform it to a higher-dimensional strategy space which, interestingly, has exponentially fewer facets. In other words, we add a few variables to the LP such that, surprisingly, the number of constraints drops down to a polynomial. We use this polynomial-size LP to provide a simpler and significantly faster algorithm for finding optimal strategies of the Colonel Blotto game. We further show that this representation is asymptotically tight, meaning that no other linear representation of the strategy space has fewer constraints. We also extend our approach to multidimensional Colonel Blotto games, in which players may have different sorts of budgets, such as money, time, and human resources. By implementing this algorithm, we are able to run tests that were previously impossible to solve in a reasonable time. This lets us observe some interesting properties of Colonel Blotto; for example, we find that the behavior of players in the discrete model is very similar to the continuous model that Roberson solved. (A brute-force LP for a tiny Blotto instance, contrasting with the polynomial-size formulation, is sketched after this list.)
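For item 1, here is a minimal numpy sketch (all payoffs randomly generated, purely illustrative) of the ε-approximate NE condition in an imitation game, where player 2's matrix is the identity:

```python
import numpy as np

def deviation_gaps(A, B, x, y):
    """Best pure-deviation gains for each player at mixed profile (x, y);
    (x, y) is an eps-approximate NE iff both gaps are <= eps."""
    g1 = (A @ y).max() - x @ A @ y    # player 1's gain from deviating
    g2 = (x @ B).max() - x @ B @ y    # player 2's gain from deviating
    return g1, g2

rng = np.random.default_rng(1)
n = 4
A = rng.random((n, n))    # arbitrary payoffs for the first player
B = np.eye(n)             # imitation game: player 2 is paid only for matching

# Uniform play is generally far from equilibrium...
u = np.full(n, 1 / n)
print(deviation_gaps(A, B, u, u))

# ...while brute force over pure profiles finds the smallest eps certified
# by any pure pair (mixed profiles can of course do better).
best = min((max(deviation_gaps(A, B, np.eye(n)[i], np.eye(n)[j])), i, j)
           for i in range(n) for j in range(n))
print("best pure profile (eps, i, j):", best)
```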
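For item 2, the paper's backward inductive procedure operates on compressed (CIB) beliefs and strategies, which is far beyond a few lines of code. The toy skeleton below, with hypothetical stage payoffs, only mirrors the two features the abstract emphasizes: stages are solved from the last one backwards, and the procedure can fail when some stage has no solution of the required form (here, for simplicity, a pure stage equilibrium):

```python
import numpy as np

def pure_stage_eq(U1, U2):
    """Return a pure Nash equilibrium (i, j) of the bimatrix stage game
    (U1, U2), or None if no pure equilibrium exists."""
    for i in range(U1.shape[0]):
        for j in range(U1.shape[1]):
            if U1[i, j] >= U1[:, j].max() and U2[i, j] >= U2[i, :].max():
                return i, j
    return None

def backward_induction(stage_games):
    """Schematic backward inductive procedure: solve the last stage first,
    fold each stage's equilibrium value into the preceding stage, and fail
    if some stage has no equilibrium of the required form."""
    v1 = v2 = 0.0
    plan = []
    for U1, U2 in reversed(stage_games):
        eq = pure_stage_eq(U1 + v1, U2 + v2)
        if eq is None:
            return None                      # the procedure has no solution
        i, j = eq
        v1, v2 = (U1 + v1)[i, j], (U2 + v2)[i, j]
        plan.append(eq)
    return plan[::-1], (v1, v2)

coord = (np.array([[2., 0.], [0., 1.]]),) * 2          # coordination stage
pennies = (np.array([[1., -1.], [-1., 1.]]),           # matching pennies:
           np.array([[-1., 1.], [1., -1.]]))           # no pure equilibrium
print(backward_induction([coord, coord]))    # succeeds
print(backward_induction([coord, pennies]))  # None: last stage unsolvable
```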
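For item 3, here is a static caricature (not the paper's continuous-time model) of a follow-the-crowd best response mapping: a logistic rule with an assumed sharpness parameter beta. It has three fixed points, and the symmetric one repels under best-response iteration, echoing the symmetry-breaking discussion:

```python
import numpy as np

def best_response(m, beta=8.0):
    """Probability an optimizing agent targets state 1 when a fraction m of
    the population is in state 1; a logistic follow-the-crowd rule whose
    beta -> infinity limit is the hard majority rule."""
    return 1.0 / (1.0 + np.exp(-beta * (2 * m - 1)))

# Scan for fixed points m = BR(m): static stand-ins for the mean field
# game Nash equilibria.
grid = np.linspace(0, 1, 100000)
gaps = best_response(grid) - grid
roots = grid[np.where(np.diff(np.sign(gaps)))[0]]
print("fixed points:", np.round(roots, 3))   # three: near 0, at 0.5, near 1

# Stability under best-response iteration: 0.5 repels, the two
# symmetry-broken fixed points attract.
for m0 in (0.4, 0.5, 0.6):
    m = m0
    for _ in range(100):
        m = best_response(m)
    print(f"start {m0} -> {m:.3f}")
```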
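For item 4, the sketch below compresses the pipeline into a toy: a single unknown payoff parameter theta, a crude stand-in likelihood in which the observed player logit-responds to a uniform opponent (the paper uses a proper solution-concept likelihood), Metropolis-Hastings sampling, and one LP choosing the Player 1 policy that maximizes the summed worst-case payoff over posterior samples, which minimizes total exploitability up to game-value constants. All numbers are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
lam = 2.0                       # assumed quantal-response rationality parameter

def A(theta):                   # row player's payoffs; theta is the unknown entry
    return np.array([[theta, -1.0], [-1.0, 1.0]])

def loglik(theta, counts):
    """Stand-in likelihood: observed row player logit-responds to a
    uniform opponent (a real QRE likelihood would be a fixed point)."""
    u = A(theta) @ np.array([0.5, 0.5])
    return counts @ (lam * u - np.log(np.exp(lam * u).sum()))

counts = np.array([40, 60])     # 'observed' row-action counts (hypothetical)

# Metropolis-Hastings over theta with a flat prior on [-3, 3].
theta, samples = 0.0, []
for it in range(5000):
    prop = theta + rng.normal(0, 0.5)
    if abs(prop) <= 3 and np.log(rng.random()) < loglik(prop, counts) - loglik(theta, counts):
        theta = prop
    if it % 50 == 0:
        samples.append(theta)

# One LP over variables (x0, x1, z_1..z_K): maximize sum_k z_k subject to
# z_k <= (x @ A_k)_j for every sampled game k and opponent action j.
K = len(samples)
c = np.concatenate([np.zeros(2), -np.ones(K)])
A_ub, b_ub = [], []
for k, th in enumerate(samples):
    Ak = A(th)
    for j in range(2):
        row = np.zeros(2 + K)
        row[:2] = -Ak[:, j]
        row[2 + k] = 1.0
        A_ub.append(row); b_ub.append(0.0)
A_eq = np.array([np.concatenate([np.ones(2), np.zeros(K)])])
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * 2 + [(None, None)] * K)
print("recommended policy for Player 1:", np.round(res.x[:2], 3))
```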
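For item 5, the sketch below solves a tiny discrete Blotto instance by the natural maximin LP over the full strategy space. This is exactly the representation whose size explodes exponentially with the number of battlefields; the paper's linear-extension formulation avoids that blow-up. The parameters (5 troops, 3 battlefields) are illustrative:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def allocations(troops, fields):
    """Every way to split `troops` across `fields` battlefields."""
    return [a for a in product(range(troops + 1), repeat=fields)
            if sum(a) == troops]

def payoff(a, b):
    """Battlefields won by allocation a minus those won by b (ties split)."""
    return float(sum(np.sign(ai - bi) for ai, bi in zip(a, b)))

S = allocations(5, 3)                     # symmetric Blotto: 5 troops, 3 fields
M = np.array([[payoff(a, b) for b in S] for a in S])

# Maximin LP over the full strategy space: maximize v s.t. x @ M >= v.
# The number of pure strategies (hence LP variables) grows exponentially
# with the number of battlefields.
n = len(S)
c = np.concatenate([np.zeros(n), [-1.0]])          # maximize v
A_ub = np.hstack([-M.T, np.ones((n, 1))])          # v - (x @ M)_j <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
              A_eq=np.array([np.concatenate([np.ones(n), [0.0]])]), b_eq=[1.0],
              bounds=[(0, 1)] * n + [(None, None)])
x = res.x[:n]
print("game value:", round(-res.fun, 6))           # 0, by symmetry
print("most-played allocations:", [S[i] for i in np.argsort(x)[-3:]])
```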