

Title: Deep fictitious play for stochastic differential games
In this paper, we apply the idea of fictitious play to design deep neural networks (DNNs), and develop deep learning theory and algorithms for computing the Nash equilibrium of asymmetric N-player non-zero-sum stochastic differential games. We refer to this multi-stage learning process as deep fictitious play. Specifically, at each stage each individual player optimizes her own payoff while the other players' actions are frozen at their values from the previous stage; this is equivalent to solving N decoupled stochastic control problems, each of which is approximated by a DNN. The fictitious play strategy therefore leads to a structure of N DNNs that communicate only at the end of each stage. The resulting deep learning algorithm is scalable, parallel and model-free: with GPU parallelization, it can be applied to any N-player stochastic differential game with different symmetries and heterogeneities (e.g., the existence of major players). We illustrate the performance of the algorithm by comparing it to the closed-form solution of the linear quadratic game. Moreover, we prove the convergence of fictitious play under appropriate assumptions, and verify that the convergent limit forms an open-loop Nash equilibrium. We conclude by discussing extensions to other strategies designed upon fictitious play and to closed-loop Nash equilibria.
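The multi-stage structure described above (N per-player control problems solved independently at each stage, with communication only between stages) can be pictured with the following minimal Python sketch. The containers and the helper solve_control_problem are hypothetical stand-ins, not the authors' implementation.

    import copy

    def deep_fictitious_play(policies, solve_control_problem, n_stages):
        # Sketch of the fictitious-play outer loop described in the abstract.
        # `policies` is a list of N per-player DNN policies (hypothetical objects);
        # `solve_control_problem(i, frozen)` trains player i's DNN against the
        # opponents' strategies in `frozen`, held fixed, and returns the update.
        for stage in range(n_stages):
            # Freeze every player's strategy from the previous stage.
            frozen = copy.deepcopy(policies)
            # Each player best-responds to the frozen opponents; the N problems
            # are decoupled, so this loop can be distributed across GPUs.
            policies = [solve_control_problem(i, frozen) for i in range(len(policies))]
            # Players "communicate" only here, at the end of the stage,
            # by exchanging their updated strategies.
        return policies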
Award ID(s):
1953035
NSF-PAR ID:
10253641
Author(s) / Creator(s):
Editor(s):
Jin, Shi
Date Published:
Journal Name:
Communications in Mathematical Sciences
Volume:
19
Issue:
2
ISSN:
1539-6746
Page Range / eLocation ID:
325-353
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Stochastic differential games have been used extensively to model competition among agents in finance, for instance in P2P lending platforms in the Fintech industry, in the banking system for systemic risk, and in insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding the Markovian Nash equilibrium of large N-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into N sub-optimization problems and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method, in parallel and repeatedly (a sketch of this per-player sub-problem is given below). In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an ε-Nash equilibrium. We generalize the algorithm by proposing a new approach to decouple the games, and present numerical results of large population games showing the empirical convergence of the algorithm beyond the technical assumptions in the theorems.

     
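    For orientation, the per-player sub-problem solved by the deep BSDE method can be sketched roughly as follows. This is a hedged Monte Carlo sketch in Python/PyTorch; the interfaces (mu, sigma, f, g, y0, z_net) are assumed placeholders and the discretization is a plain Euler scheme, not the exact setup of the cited papers.

        import torch

        def deep_bsde_loss(x0, mu, sigma, f, g, y0, z_net, n_steps, dt):
            # One Monte Carlo evaluation of the deep BSDE training loss (sketch).
            # x0: (batch, d) initial states; mu, sigma: drift/diffusion of (t, x);
            # f: BSDE driver f(t, x, y, z); g: terminal condition; y0: trainable
            # tensor approximating Y_0; z_net: network mapping (t, x) -> Z_t.
            batch, d = x0.shape
            x = x0
            y = y0 * torch.ones(batch, 1)
            for k in range(n_steps):
                t = k * dt
                dw = torch.randn(batch, d) * dt ** 0.5      # Brownian increments
                z = z_net(t, x)                             # Z_t, shape (batch, d)
                # Euler steps for the backward variable Y and the forward state X.
                y = y - f(t, x, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
                x = x + mu(t, x) * dt + sigma(t, x) * dw
            # Train y0 and z_net by minimizing the terminal mismatch E|Y_T - g(X_T)|^2.
            return ((y - g(x)) ** 2).mean()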
  2.
    We propose a deep neural network-based algorithm to identify the Markovian Nash equilibrium of general large N-player stochastic differential games. Following the idea of fictitious play, we recast the N-player game into N decoupled decision problems (one for each player) and solve them iteratively. Each individual decision problem is characterized by a semilinear Hamilton-Jacobi-Bellman equation, which we solve with the recently developed deep BSDE method (the generic link between such a PDE and its BSDE representation is recalled below). The resulting algorithm can solve large N-player games for which conventional numerical methods would suffer from the curse of dimensionality. Multiple numerical examples involving identical or heterogeneous agents, with risk-neutral or risk-sensitive objectives, are tested to validate the accuracy of the proposed algorithm in large group games. Even for a fifty-player game in the presence of common noise, the proposed algorithm still finds the approximate Nash equilibrium accurately, which, to the best of our knowledge, is difficult to achieve with other numerical algorithms.
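    For context, the standard nonlinear Feynman-Kac link between a semilinear PDE of this type and its BSDE, which is what the deep BSDE method exploits, reads as follows (generic notation, not the paper's exact system):

        % Player i's value function V^i solves a semilinear PDE; the associated
        % BSDE gives Y_t^i = V^i(t, X_t) and Z_t^i = sigma^T grad_x V^i(t, X_t).
        \begin{aligned}
        &\partial_t V^i + \mu(t,x)\cdot\nabla_x V^i
          + \tfrac{1}{2}\operatorname{Tr}\!\big(\sigma\sigma^{\top}\nabla_x^2 V^i\big)
          + f^i\big(t,x,V^i,\sigma^{\top}\nabla_x V^i\big) = 0,
          \qquad V^i(T,x) = g^i(x), \\
        &\mathrm{d}X_t = \mu(t,X_t)\,\mathrm{d}t + \sigma(t,X_t)\,\mathrm{d}W_t, \\
        &Y_t^i = g^i(X_T) + \int_t^T f^i\big(s,X_s,Y_s^i,Z_s^i\big)\,\mathrm{d}s
                - \int_t^T Z_s^i\,\mathrm{d}W_s.
        \end{aligned}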
  3. Francisco Ruiz, Jennifer Dy (Eds.)
    We study the sample complexity of identifying an approximate equilibrium for two-player zero-sum n × 2 matrix games. That is, in a sequence of repeated game plays, how many rounds must the two players play before reaching an approximate equilibrium (e.g., Nash)? We derive instance-dependent bounds that define an ordering over game matrices, capturing the intuition that the dynamics of some games converge faster than others. Specifically, we consider a stochastic observation model such that when the two players choose actions i and j, respectively, they both observe each other's played actions and a stochastic observation X_ij with E[X_ij] = A_ij (a toy instance of this observation model is sketched below). To our knowledge, our work gives the first instance-dependent lower bounds on the number of rounds the players must play before reaching an approximate equilibrium, in the sense that the number of rounds depends on the specific properties of the game matrix A as well as the desired accuracy. We also prove a converse statement: there exist player strategies that achieve this lower bound.
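    As a concrete toy reading of this observation model (illustrative only; the payoff matrix and the Gaussian noise below are assumptions, since the abstract only constrains the mean of X_ij):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5
        A = rng.uniform(size=(n, 2))     # hypothetical n x 2 payoff matrix

        def play_round(i, j, noise_std=0.1):
            # One round of repeated play: both players observe each other's
            # played actions and a stochastic sample X_ij with E[X_ij] = A[i, j].
            x_ij = A[i, j] + noise_std * rng.normal()
            return (i, j), x_ij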
  4. Markov games model interactions among multiple players in a stochastic, dynamic environment. Each player in a Markov game maximizes its expected total discounted reward, which depends upon the policies of the other players. We formulate a class of Markov games, termed affine Markov games, where an affine reward function couples the players' actions. We introduce a novel solution concept, the soft-Bellman equilibrium, where each player is boundedly rational and chooses a soft-Bellman policy rather than a purely rational policy as in the well-known Nash equilibrium concept. We provide conditions for the existence and uniqueness of the soft-Bellman equilibrium and propose a nonlinear least-squares algorithm to compute such an equilibrium in the forward problem. We then solve the inverse game problem of inferring the players' reward parameters from observed state-action trajectories via a projected-gradient algorithm (a generic form of this iteration is sketched below). Experiments in a predator-prey OpenAI Gym environment show that the reward parameters inferred by the proposed algorithm outperform those inferred by a baseline algorithm: they reduce the Kullback-Leibler divergence between the equilibrium policies and observed policies by at least two orders of magnitude.
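    The inverse step can be pictured as a projected-gradient iteration of the form theta <- Proj(theta - step * grad L(theta)). The sketch below is a generic template with assumed placeholders (grad_loss, project), not the paper's soft-Bellman-specific formulation.

        import numpy as np

        def projected_gradient(grad_loss, project, theta0, step=1e-2, n_iters=500):
            # Generic projected-gradient iteration for fitting reward parameters
            # to observed trajectories: theta <- Proj(theta - step * grad L(theta)).
            theta = np.asarray(theta0, dtype=float)
            for _ in range(n_iters):
                theta = project(theta - step * grad_loss(theta))
            return theta

        # Example usage with an assumed nonnegativity constraint on the parameters:
        # theta_hat = projected_gradient(grad_loss, lambda t: np.maximum(t, 0.0), theta0)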
  5. We formulate a mean field game where each player stops a privately observed Brownian motion with absorption. Players are ranked according to their level of stopping and rewarded as a function of their relative rank. There is a unique mean field equilibrium, and it is shown to be the limit of associated n-player games. Conversely, the mean field strategy induces n-player ε-Nash equilibria for any continuous reward function—but not for discontinuous ones. In a second part, we study the problem of a principal who can choose how to distribute a reward budget over the ranks and aims to maximize the performance of the median player. The optimal reward design (contract) is found in closed form, complementing the merely partial results available in the n-player case. We then analyze the quality of the mean field design when used as a proxy for the optimizer in the n-player game. Surprisingly, the quality deteriorates dramatically as n grows. We explain this with an asymptotic singularity in the induced n-player equilibrium distributions. Funding: M. Nutz is supported by an Alfred P. Sloan Fellowship and the Division of Mathematical Sciences of the National Science Foundation [Grants DMS-1812661 and DMS-2106056]. Y. Zhang is supported in part by the Natural Sciences and Engineering Research Council of Canada [NSERC Discovery Grant RGPIN-2020-06290]. 