Deep fictitious play for stochastic differential games
In this paper, we apply the idea of fictitious play to design deep neural networks (DNNs), and develop deep learning theory and algorithms for computing the Nash equilibrium of asymmetric N-player nonzero-sum stochastic differential games, which we refer to as deep fictitious play, a multistage learning process. Specifically, at each stage we propose the strategy of letting each player optimize her own payoff subject to the other players' previous actions, which is equivalent to solving N decoupled stochastic control optimization problems, approximated by DNNs. The fictitious play strategy therefore leads to a structure consisting of N DNNs, which only communicate at the end of each stage. The resulting deep learning algorithm based on fictitious play is scalable, parallel, and model-free, i.e., using GPU parallelization, it can be applied to any N-player stochastic differential game with different symmetries and heterogeneities (e.g., the existence of major players). We illustrate the performance of the deep learning algorithm by comparing it to the closed-form solution of the linear quadratic game. Moreover, we prove the convergence of fictitious play under appropriate assumptions, and verify that the convergent limit forms an open-loop Nash equilibrium. We also discuss extensions to other strategies designed upon fictitious play and to closed-loop Nash equilibria at the end.
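The stage structure described above can be illustrated with a toy fictitious-play loop on a hypothetical N-player game with quadratic payoffs (an illustrative sketch, not the paper's DNN algorithm): each player's best response to the others' frozen actions is available in closed form, the N best responses are computed independently at each stage, and the players only exchange their updated actions afterwards.

```python
import numpy as np

# Toy fictitious play on a hypothetical N-player game (not the paper's DNN
# algorithm): player i's payoff is -(a_i - c_i - lam * mean(a_-i))^2, so her
# best response to the others' frozen actions is c_i + lam * mean(a_-i).
# Each stage solves the N decoupled best responses independently and only
# communicates the updated profile afterwards, mirroring the N-DNN structure.
def fictitious_play(c, lam=0.5, n_stages=200):
    N = len(c)
    a = np.zeros(N)
    for _ in range(n_stages):
        mean_others = (a.sum() - a) / (N - 1)  # each player's view of the rest
        a = c + lam * mean_others              # N decoupled best responses
    return a

c = np.array([1.0, 2.0, 3.0])
a_star = fictitious_play(c)

# At a Nash point every action equals its own best response (fixed point):
mean_others = (a_star.sum() - a_star) / (len(c) - 1)
residual = np.max(np.abs(a_star - (c + 0.5 * mean_others)))
```

For |lam| < 1 the best-response map is a contraction, so the iterates converge geometrically to the unique Nash profile; in the paper's setting each closed-form best response is replaced by a DNN-approximated stochastic control problem.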
 Award ID(s):
 1953035
 NSF-PAR ID:
 10253641
 Editor(s):
 Jin, Shi
 Date Published:
 Journal Name:
 Communications in Mathematical Sciences
 Volume:
 19
 Issue:
 2
 ISSN:
 1539-6746
 Page Range / eLocation ID:
 325-353
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

Stochastic differential games have been used extensively to model agents' competition in finance, for instance, in P2P lending platforms in the FinTech industry, in the banking system for systemic risk, and in insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding the Markovian Nash equilibrium of large N-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into N sub-optimization problems, and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an ε-Nash equilibrium. We generalize the algorithm by proposing a new approach to decouple the games, and present numerical results of large population games showing the empirical convergence of the algorithm beyond the technical assumptions in the theorems.
We propose a deep neural network-based algorithm to identify the Markovian Nash equilibrium of general large N-player stochastic differential games. Following the idea of fictitious play, we recast the N-player game into N decoupled decision problems (one for each player) and solve them iteratively. The individual decision problem is characterized by a semilinear Hamilton-Jacobi-Bellman equation, which we solve with the recently developed deep BSDE method. The resulting algorithm can solve large N-player games for which conventional numerical methods would suffer from the curse of dimensionality. Multiple numerical examples involving identical or heterogeneous agents, with risk-neutral or risk-sensitive objectives, are tested to validate the accuracy of the proposed algorithm in large group games. Even for a fifty-player game with the presence of common noise, the proposed algorithm still finds the approximate Nash equilibrium accurately, which, to the best of our knowledge, is difficult to achieve by other numerical algorithms.
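The deep BSDE step can be caricatured in a drastically simplified, hypothetical setting (not the paper's implementation): a one-dimensional state, driver f = 0, and a single scalar in place of the Z-network. The trainable initial value Y_0 and slope z are then recovered by Monte Carlo gradient descent on the terminal mismatch, which is the core training objective of the deep BSDE method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy BSDE (illustrative sketch, not the paper's DNN version):
# dX_t = dW_t, driver f = 0, terminal condition Y_T = g(X_T) = X_T.
# The deep BSDE method treats Y_0 and the Z-process as trainable; here Z is
# a single scalar, so the exact solution is Y_0 = x0 and z = 1.
x0, T, n_paths, steps = 1.5, 1.0, 4096, 50
dt = T / steps

y0, z = 0.0, 0.0                       # trainable "network" parameters
lr = 0.5
for _ in range(200):
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, steps))
    WT = dW.sum(axis=1)                # terminal Brownian increment per path
    X_T = x0 + WT
    Y_T = y0 + z * WT                  # forward-simulated Y with f = 0
    err = Y_T - X_T                    # mismatch with terminal condition
    y0 -= lr * 2 * err.mean()          # Monte Carlo gradient steps on
    z  -= lr * 2 * (err * WT).mean()   # E[(Y_T - g(X_T))^2]
```

In the paper's setting, z is replaced by a deep network Z(t, X_t) evaluated at every time step, and the driver f encodes each player's semilinear HJB equation.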

Francisco Ruiz, Jennifer Dy (Eds.) We study the sample complexity of identifying an approximate equilibrium for two-player zero-sum n × 2 matrix games. That is, in a sequence of repeated game plays, how many rounds must the two players play before reaching an approximate equilibrium (e.g., Nash)? We derive instance-dependent bounds that define an ordering over game matrices, capturing the intuition that the dynamics of some games converge faster than others. Specifically, we consider a stochastic observation model such that when the two players choose actions i and j, respectively, they both observe each other's played actions and a stochastic observation X_ij such that E[X_ij] = A_ij. To our knowledge, our work gives the first instance-dependent lower bounds on the number of rounds the players must play before reaching an approximate equilibrium, in the sense that the number of rounds depends on the specific properties of the game matrix A as well as the desired accuracy. We also prove a converse statement: there exist player strategies that achieve this lower bound.
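A minimal simulation of this repeated-play setting (a hypothetical 2 × 2 matching-pennies instance, not the paper's algorithm or its bounds): both players run fictitious play against empirical payoff estimates built from the noisy observations X_ij, and their empirical action frequencies drift toward the mixed Nash equilibrium (1/2, 1/2).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matching-pennies instance: A[i, j] is the row player's mean
# payoff; each play of (i, j) yields X_ij with E[X_ij] = A[i, j], matching
# the stochastic observation model above.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])

sums = np.zeros((2, 2))
counts = np.zeros((2, 2))

def observe(i, j):
    x = A[i, j] + rng.normal(0.0, 0.5)      # noisy payoff observation
    sums[i, j] += x
    counts[i, j] += 1

for i in range(2):                          # visit every cell once so the
    for j in range(2):                      # empirical matrix is defined
        observe(i, j)

row_counts = np.zeros(2)
col_counts = np.zeros(2)
for _ in range(20000):
    A_hat = sums / counts                   # empirical payoff estimates
    q = (col_counts + 1) / (col_counts.sum() + 2)   # opponent's empirical mix
    p = (row_counts + 1) / (row_counts.sum() + 2)
    i = int(np.argmax(A_hat @ q))           # row best-responds to q
    j = int(np.argmin(p @ A_hat))           # column (minimizer) best-responds
    row_counts[i] += 1
    col_counts[j] += 1
    observe(i, j)

p_emp = row_counts / row_counts.sum()       # empirical mixed strategies
q_emp = col_counts / col_counts.sum()
```

The number of rounds needed for a given accuracy depends on the payoff gaps in A, which is exactly the instance-dependence the abstract's bounds formalize.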

Markov games model interactions among multiple players in a stochastic, dynamic environment. Each player in a Markov game maximizes its expected total discounted reward, which depends upon the policies of the other players. We formulate a class of Markov games, termed affine Markov games, where an affine reward function couples the players' actions. We introduce a novel solution concept, the soft-Bellman equilibrium, where each player is boundedly rational and chooses a soft-Bellman policy rather than a purely rational policy as in the well-known Nash equilibrium concept. We provide conditions for the existence and uniqueness of the soft-Bellman equilibrium and propose a nonlinear least-squares algorithm to compute such an equilibrium in the forward problem. We then solve the inverse game problem of inferring the players' reward parameters from observed state-action trajectories via a projected-gradient algorithm. Experiments in a predator-prey OpenAI Gym environment show that the reward parameters inferred by the proposed algorithm outperform those inferred by a baseline algorithm: they reduce the Kullback-Leibler divergence between the equilibrium policies and observed policies by at least two orders of magnitude.
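The soft-Bellman idea can be sketched in a single-agent toy MDP (an illustrative example with made-up rewards and transitions, not the paper's affine Markov game): value iteration with a log-sum-exp backup converges to a fixed point whose policy is a softmax over Q-values, modeling the boundedly rational player.

```python
import numpy as np

# Toy single-agent soft-Bellman backup (hypothetical MDP, not the paper's
# multi-player setting): V(s) = tau * log sum_a exp(Q(s, a) / tau), and the
# boundedly rational policy is a softmax over Q rather than a hard argmax.
gamma, tau = 0.9, 1.0
R = np.array([[1.0, 0.0], [0.0, 2.0]])   # R[s, a]: reward for action a in s
P = np.zeros((2, 2, 2))                  # P[s, a, s']: transition kernel
P[0, 0] = [0.8, 0.2]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.5, 0.5]; P[1, 1] = [0.1, 0.9]

V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * P @ V                # Q-values under current soft value
    V_new = tau * np.log(np.exp(Q / tau).sum(axis=1))  # soft (log-sum-exp) backup
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

Q = R + gamma * P @ V
pi = np.exp(Q / tau)
pi /= pi.sum(axis=1, keepdims=True)      # soft-Bellman (softmax) policy
```

Because the log-sum-exp backup is a gamma-contraction, the iteration has a unique fixed point; in the game setting each player runs such a backup against the other players' policies, which is what the existence and uniqueness conditions address.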

We formulate a mean field game where each player stops a privately observed Brownian motion with absorption. Players are ranked according to their level of stopping and rewarded as a function of their relative rank. There is a unique mean field equilibrium, and it is shown to be the limit of associated n-player games. Conversely, the mean field strategy induces n-player ε-Nash equilibria for any continuous reward function, but not for discontinuous ones. In a second part, we study the problem of a principal who can choose how to distribute a reward budget over the ranks and aims to maximize the performance of the median player. The optimal reward design (contract) is found in closed form, complementing the merely partial results available in the n-player case. We then analyze the quality of the mean field design when used as a proxy for the optimizer in the n-player game. Surprisingly, the quality deteriorates dramatically as n grows. We explain this with an asymptotic singularity in the induced n-player equilibrium distributions. Funding: M. Nutz is supported by an Alfred P. Sloan Fellowship and the Division of Mathematical Sciences of the National Science Foundation [Grants DMS-1812661 and DMS-2106056]. Y. Zhang is supported in part by the Natural Sciences and Engineering Research Council of Canada [NSERC Discovery Grant RGPIN-2020-06290].