XDO: A Double Oracle Algorithm for Extensive-Form Games
Policy Space Response Oracles (PSRO) is a reinforcement learning (RL) algorithm for two-player zero-sum games that has been empirically shown to find approximate Nash equilibria in large games. Although PSRO is guaranteed to converge to an approximate Nash equilibrium and can handle continuous actions, it may take an exponential number of iterations as the number of information states (infostates) grows. We propose Extensive-Form Double Oracle (XDO), an extensive-form double oracle algorithm for two-player zero-sum games that is guaranteed to converge to an approximate Nash equilibrium linearly in the number of infostates. Unlike PSRO, which mixes best responses at the root of the game, XDO mixes best responses at every infostate. We also introduce Neural XDO (NXDO), where the best response is learned through deep RL. In tabular experiments on Leduc poker, we find that XDO achieves an approximate Nash equilibrium in a number of iterations an order of magnitude smaller than PSRO. Experiments on a modified Leduc poker game and Oshi-Zumo show that tabular XDO achieves lower exploitability than CFR with the same amount of computation. We also find that NXDO outperforms PSRO and NFSP on a sequential multidimensional continuous-action game. NXDO is the first deep RL method that can find an approximate Nash equilibrium in high-dimensional continuous-action sequential games. Experiment code is available at […]
- Award ID(s): 1839429
- Publication Date:
- NSF-PAR ID: 10313270
- Journal Name: Advances in Neural Information Processing Systems
- ISSN: 1049-5258
- Sponsoring Org: National Science Foundation
More Like this
- The existence of simple uncoupled no-regret learning dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form games generalize normal-form games by modeling both sequential and simultaneous moves, as well as imperfect information. Because of the sequential nature and presence of private information in the game, correlation in extensive-form games possesses significantly different properties […]
- […] challenging when the number of information states is large. Policy Space Response Oracles (PSRO) is a deep reinforcement learning algorithm grounded in game theory that is guaranteed to converge to an approximate Nash equilibrium. However, PSRO requires training a reinforcement learning policy at each iteration, making it too slow for large games. We show through counterexamples and experiments that DCH and Rectified PSRO, two existing approaches to scaling up PSRO, fail to converge even in small games. We introduce Pipeline PSRO (P2SRO), the first scalable PSRO-based method for finding approximate Nash equilibria in large zero-sum imperfect-information games. P2SRO is able […]
- We consider computational games, sequences of games G = (G1, G2, ...) where, for all n, Gn has the same set of players. Computational games arise in electronic money systems such as Bitcoin, in cryptographic protocols, and in the study of generative adversarial networks in machine learning. Assuming that one-way functions exist, we prove that there is a 2-player zero-sum computational game G such that, for all n, the size of the action space in Gn is polynomial in n and the utility function in Gn is computable in time polynomial in n, and yet there is no ε-Nash equilibrium if players are […]
- The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties […]
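The related entries above cite the classical result that when every player minimizes internal regret, the empirical frequency of play converges to a correlated equilibrium. As a minimal illustration of those dynamics, here is a sketch of Hart and Mas-Colell-style internal regret matching on the game of Chicken; the payoff numbers and the normalization constant MU are illustrative assumptions, not taken from the papers above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative game of Chicken: action 0 = Dare, 1 = Swerve.
U1 = np.array([[0., 7.],
               [2., 6.]])  # row player's payoffs
U2 = U1.T                  # symmetric game: column player's payoffs

MU = 20.0  # normalization constant, chosen larger than the max payoff gap

def next_action(R_avg, last):
    """Internal regret matching: switch from `last` to b w.p. R+(last->b)/MU."""
    switch = np.maximum(R_avg[last], 0.0) / MU
    switch[last] = 0.0
    probs = switch.copy()
    probs[last] = 1.0 - switch.sum()  # stay with the remaining probability
    return rng.choice(len(probs), p=probs)

T = 50000
R1 = np.zeros((2, 2))  # R1[a, b]: cumulative gain from replacing a with b
R2 = np.zeros((2, 2))
a1, a2 = 0, 0          # arbitrary initial actions
joint = np.zeros((2, 2))
for t in range(1, T + 1):
    joint[a1, a2] += 1
    # conditional (internal) regret updates for every alternative action b
    R1[a1] += U1[:, a2] - U1[a1, a2]
    R2[a2] += U2[a1, :] - U2[a1, a2]
    a1 = next_action(R1 / t, a1)
    a2 = next_action(R2 / t, a2)

joint /= T  # empirical joint play: approaches a correlated equilibrium
```

The average internal regrets `R1 / T` and `R2 / T` are driven toward zero, which is exactly the condition under which the empirical joint distribution is an approximate correlated equilibrium; the papers above study how this picture changes in extensive-form games.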