This content will become publicly available on June 29, 2023
Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games with simultaneous moves. To incorporate function approximation, we consider a family of Markov games where the reward function and transition kernel possess a linear structure. Both the offline and online settings of the problem are considered. In the offline setting, we control both players and aim to find the Nash equilibrium by minimizing the duality gap. In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret. For both settings, we propose an optimistic variant of the least-squares minimax value iteration algorithm. We show that our algorithm is computationally efficient and provably achieves an [Formula: see text] upper bound on the duality gap and regret, where d is the linear dimension, H the horizon, and T the total number of time steps. Our results do not require additional assumptions on the sampling model. Our setting requires overcoming several new challenges that are absent in Markov decision processes or turn-based Markov games. In particular, to achieve optimism with simultaneous moves, we construct both upper and lower confidence bounds of the value function, and then compute the optimistic policy by solving …
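The abstract is truncated before naming the subroutine, but in a simultaneous-move zero-sum game the inner step of value iteration amounts to solving a matrix game for its minimax value. The sketch below is not the paper's algorithm — it only illustrates that inner step, approximating the value of a small matrix game with fictitious play (the game and iteration count are illustrative choices):

```python
import numpy as np

def fictitious_play_value(A, T=50000):
    """Approximate the minimax value and a max-min strategy of the
    zero-sum matrix game A (row player maximizes x^T A y) via
    fictitious play: each player best-responds to the opponent's
    empirical mixture of past actions."""
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] += 1  # arbitrary initial play
    col_counts[0] += 1
    for _ in range(T):
        # best response of the row (maximizer) to the column's history
        br_row = np.argmax(A @ col_counts)
        # best response of the column (minimizer) to the row's history
        br_col = np.argmin(row_counts @ A)
        row_counts[br_row] += 1
        col_counts[br_col] += 1
    x = row_counts / row_counts.sum()  # empirical row strategy
    y = col_counts / col_counts.sum()  # empirical column strategy
    return x, x @ A @ y

# matching pennies: value 0, uniform minimax strategies
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, v = fictitious_play_value(A)
```

In two-player zero-sum games the empirical frequencies of fictitious play converge, so on matching pennies the returned strategy approaches the uniform mixture and the value approaches 0. An exact alternative would solve the same matrix game as a linear program.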
Award ID(s): 1955997
Publication Date:
NSF-PAR ID: 10381248
Journal Name: Mathematics of Operations Research
ISSN: 0364-765X
Sponsoring Org: National Science Foundation
More Like this


The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties from its normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was previously unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in n-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play …

The existence of simple, uncoupled no-regret learning dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form games generalize normal-form games by modeling both sequential and simultaneous moves, as well as imperfect information. Because of the sequential nature and presence of private information in the game, correlation in extensive-form games possesses significantly different properties from its counterpart in normal-form games, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to the classical notion of correlated equilibrium in normal-form games. Compared to the latter, the constraints that define the set of EFCEs are significantly more complex, as the correlation device must take into account the evolution of beliefs of each player as they make observations throughout the game. Due to that significant added complexity, the existence of uncoupled learning dynamics leading to an EFCE has remained a challenging open research question for a long …
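The 20-year-old result cited in these abstracts is due to Hart and Mas-Colell: players who minimize conditional (internal) regret converge, in empirical frequency of play, to the set of correlated equilibria of the normal-form game. A minimal sketch of their adaptive procedure for a two-player game follows; the game, the inertia parameter mu, and the horizon are illustrative choices, not values from the papers:

```python
import numpy as np

def hart_mascolell(payoffs, T=20000, mu=10.0, seed=0):
    """Hart & Mas-Colell adaptive procedure (regret matching on
    conditional regrets) for a two-player normal-form game.
    payoffs[i][a0, a1] is player i's payoff when players play (a0, a1).
    Returns the empirical joint distribution of play over T rounds."""
    rng = np.random.default_rng(seed)
    n = [payoffs[0].shape[0], payoffs[0].shape[1]]
    # D[i][j, k] accumulates player i's gain from having played k
    # at every past round in which it actually played j
    D = [np.zeros((n[i], n[i])) for i in range(2)]
    last = [int(rng.integers(n[i])) for i in range(2)]
    counts = np.zeros((n[0], n[1]))
    for t in range(1, T + 1):
        acts = []
        for i in range(2):
            j = last[i]
            R = np.maximum(D[i][j] / t, 0.0)      # average conditional regret
            p = np.minimum(R / mu, 1.0 / n[i])    # small switch probabilities
            p[j] = 0.0
            p[j] = 1.0 - p.sum()                  # stay with remaining mass
            acts.append(int(rng.choice(n[i], p=p)))
        counts[acts[0], acts[1]] += 1
        for i in range(2):
            opp = acts[1 - i]
            u = payoffs[i][:, opp] if i == 0 else payoffs[i][opp, :]
            D[i][acts[i]] += u - u[acts[i]]       # counterfactual gains
        last = acts
    return counts / T

# matching pennies as a toy example (its unique CE is the uniform joint)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
dist = hart_mascolell([A, -A])
```

The theorem guarantees that the distance from the returned empirical distribution to the correlated-equilibrium set vanishes as T grows; the trigger-regret dynamics in the papers above generalize exactly this template to extensive-form games.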

Regret minimization has proved to be a versatile tool for tree-form sequential decision making and extensive-form games. In large two-player zero-sum imperfect-information games, modern extensions of counterfactual regret minimization (CFR) are currently the practical state of the art for computing a Nash equilibrium. Most regret-minimization algorithms for tree-form sequential decision making, including CFR, require (i) an exact model of the player's decision nodes, observation nodes, and how they are linked, and (ii) full knowledge, at all times t, about the payoffs—even in parts of the decision space that are not encountered at time t. Recently, there has been growing interest towards relaxing some of those restrictions and making regret minimization applicable to settings for which reinforcement learning methods have traditionally been used—for example, those in which only black-box access to the environment is available. We give the first, to our knowledge, regret-minimization algorithm that guarantees sublinear regret with high probability even when requirement (i)—and thus also (ii)—is dropped. We formalize an online learning setting in which the strategy space is not known to the agent and gets revealed incrementally whenever the agent encounters new decision points. We give an efficient algorithm that achieves O(T^{3/4}) …
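The local building block that CFR runs at each decision point is a regret minimizer, most commonly regret matching: play each action with probability proportional to its positive cumulative regret. The sketch below shows that single-decision-point primitive against an arbitrary (here random) loss sequence — it is not the tree-form or black-box algorithm of the paper, just the underlying regret-matching idea:

```python
import numpy as np

def regret_matching(loss_rows):
    """Run regret matching over a T x n matrix of per-round action
    losses in [0, 1]. Returns the average external regret after each
    round (expected loss minus the best fixed action in hindsight)."""
    T, n = loss_rows.shape
    cum_regret = np.zeros(n)
    total_loss = 0.0
    avg_regrets = np.empty(T)
    for t in range(T):
        pos = np.maximum(cum_regret, 0.0)
        # play proportionally to positive regrets; uniform if all <= 0
        p = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)
        loss = loss_rows[t]
        exp_loss = p @ loss
        total_loss += exp_loss
        cum_regret += exp_loss - loss  # regret vs. each fixed action
        best_fixed = loss_rows[: t + 1].sum(axis=0).min()
        avg_regrets[t] = (total_loss - best_fixed) / (t + 1)
    return avg_regrets

rng = np.random.default_rng(0)
avg = regret_matching(rng.random((5000, 3)))
```

Regret matching guarantees average regret on the order of sqrt(n/T), so the final entries of `avg` are small; CFR composes one such minimizer per information set to minimize counterfactual regret over the whole tree.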

Nonzero-sum games typically have multiple Nash equilibria (or no equilibrium), and unlike the zero-sum case, they may have different values at different equilibria. Instead of focusing on the existence of individual equilibria, we study the set of values over all equilibria, which we call the set value of the game. The set value is unique by nature and always exists (with possible value [Formula: see text]). Similar to the standard value function in the control literature, it enjoys many nice properties, such as regularity, stability, and, more importantly, the dynamic programming principle. There are two main features required to obtain the dynamic programming principle: (i) we must use closed-loop controls (instead of open-loop controls); and (ii) we must allow for path-dependent controls, even if the problem is in a state-dependent (Markovian) setting. We consider both discrete- and continuous-time models with finite time horizon. For the latter, we also provide a duality approach through certain standard PDEs (or path-dependent PDEs), which is quite efficient for numerically computing the set value of the game.