Deep fictitious play for stochastic differential games

In this paper, we apply the idea of fictitious play to design deep neural networks (DNNs), and develop deep learning theory and algorithms for computing the Nash equilibrium of asymmetric N-player nonzero-sum stochastic differential games, which we refer to as deep fictitious play, a multi-stage learning process. Specifically, at each stage we let each individual player optimize her own payoff subject to the other players' previous actions, which is equivalent to solving N decoupled stochastic control optimization problems; these are approximated by DNNs. The fictitious play strategy therefore leads to a structure consisting of N DNNs that communicate only at the end of each stage. The resulting deep learning algorithm based on fictitious play is scalable, parallel, and model-free: using GPU parallelization, it can be applied to any N-player stochastic differential game with different symmetries and heterogeneities (e.g., the existence of major players). We illustrate the performance of the deep learning algorithm by comparing it to the closed-form solution of the linear quadratic game. Moreover, we prove the convergence of fictitious play under appropriate assumptions, and verify that the convergent limit forms an open-loop Nash equilibrium. We also discuss extensions to other strategies designed upon fictitious play and closed-loop …

Editors: Jin, Shi
Award ID(s): 1953035
Publication Date:
NSF-PAR ID: 10253641
Journal Name: Communications in Mathematical Sciences
Volume: 19
Issue: 2
Page Range or eLocation-ID: 325-353
ISSN: 1539-6746
Sponsoring Org: National Science Foundation

More Like this

Stochastic differential games have been used extensively to model agents' competitions in finance, for instance, in P2P lending platforms in the Fintech industry, the banking system for systemic risk, and insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding Markovian Nash equilibria of large N-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into N sub-optimization problems and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method, in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an ε-Nash equilibrium. We generalize the algorithm by proposing a new approach to decouple the games, and present numerical results of large population games showing the empirical convergence of the algorithm beyond the technical assumptions in the theorems.
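The stage-wise decoupling behind fictitious play can be illustrated on a toy matrix game (this is only a sketch of the decoupling idea, not the papers' deep BSDE solver for continuous-time games): at each stage, every player best-responds to the empirical frequency of the other player's past actions, so the two sub-problems are solved independently within a stage.

```python
# Toy fictitious play on matching pennies (a 2x2 zero-sum game).
# The row player wants the actions to match; the column player wants a mismatch.
payoff = [[1, -1], [-1, 1]]  # row player's payoff; column player receives the negative

counts = [[1, 1], [1, 1]]  # smoothed action counts (each player starts with one of each)

def best_response(payoffs, opp_counts):
    """Action maximizing expected payoff against the opponent's empirical mix."""
    total = sum(opp_counts)
    mix = [c / total for c in opp_counts]
    expected = [sum(payoffs[a][b] * mix[b] for b in range(2)) for a in range(2)]
    return max(range(2), key=lambda a: expected[a])

# Column player's payoff matrix, re-indexed as (own action, opponent action).
col_payoff = [[-payoff[b][a] for b in range(2)] for a in range(2)]

for _ in range(20000):
    # Each player optimizes against the other's *previous* empirical play,
    # so the two decision problems decouple within a stage.
    a0 = best_response(payoff, counts[1])
    a1 = best_response(col_payoff, counts[0])
    counts[0][a0] += 1
    counts[1][a1] += 1

freq0 = counts[0][0] / sum(counts[0])
print(round(freq0, 2))  # close to the mixed-equilibrium frequency 0.5
```

In the papers above, the best-response step is itself a stochastic control problem solved by a neural network; the matrix-game toy only shows how the N problems communicate solely through the previous stage's strategies.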
We propose a deep neural network-based algorithm to identify the Markovian Nash equilibrium of general large N-player stochastic differential games. Following the idea of fictitious play, we recast the N-player game into N decoupled decision problems (one for each player) and solve them iteratively. Each individual decision problem is characterized by a semilinear Hamilton-Jacobi-Bellman equation, which we solve with the recently developed deep BSDE method. The resulting algorithm can solve large N-player games for which conventional numerical methods would suffer from the curse of dimensionality. Multiple numerical examples involving identical or heterogeneous agents, with risk-neutral or risk-sensitive objectives, are tested to validate the accuracy of the proposed algorithm in large group games. Even for a fifty-player game in the presence of common noise, the proposed algorithm still finds the approximate Nash equilibrium accurately, which, to the best of our knowledge, is difficult to achieve by other numerical algorithms.
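For context, the deep BSDE method rests on the standard nonlinear Feynman-Kac correspondence between a semilinear PDE and a backward SDE; in generic notation (the symbols below are generic, not taken from the paper), it reads:

```latex
% Semilinear HJB-type PDE for the value function V(t,x):
\partial_t V + \mu(t,x)\cdot\nabla_x V
  + \tfrac{1}{2}\operatorname{Tr}\!\left(\sigma\sigma^{\top}(t,x)\,\nabla_x^2 V\right)
  + f\!\left(t,x,V,\sigma^{\top}\nabla_x V\right) = 0,
\qquad V(T,x) = g(x).
% Along the forward diffusion dX_t = \mu\,dt + \sigma\,dW_t, the pair
Y_t = V(t,X_t), \qquad Z_t = \sigma^{\top}(t,X_t)\,\nabla_x V(t,X_t)
% solves the backward SDE
dY_t = -f(t,X_t,Y_t,Z_t)\,dt + Z_t^{\top}\,dW_t, \qquad Y_T = g(X_T),
% which the deep BSDE method discretizes in time, approximating each Z_{t_k}
% by a neural network and learning Y_0 so that the terminal condition is matched.
```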

We study the following problem, which to our knowledge has been addressed only partially in the literature and not in full generality. An agent observes two players play a zero-sum game that is known to the players but not the agent. The agent observes the actions and state transitions of their game play, but not rewards. The players may play either optimally (according to some Nash equilibrium) or according to any other solution concept, such as a quantal response equilibrium. Following these observations, the agent must recommend a policy for one player, say Player 1. The goal is to recommend a policy that is minimally exploitable under the true, but unknown, game. We take a Bayesian approach. We establish a likelihood function based on observations and the specified solution concept. We then propose an approach based on Markov chain Monte Carlo (MCMC), which allows us to approximately sample games from the agent's posterior belief distribution. Once we have a batch of independent samples from the posterior, we use linear programming and backward induction to compute a policy for Player 1 that minimizes the sum of exploitabilities over these games. This approximates the policy that minimizes the expected exploitability under the …
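A minimal sketch of the posterior-sampling step (the game, logit-response likelihood, prior, and every parameter name below are invented for illustration; they are not the paper's setup): a Metropolis-Hastings loop that samples an unknown payoff parameter from observed actions.

```python
import math
import random

random.seed(1)

# Hypothetical setup: a 2x2 zero-sum game whose stake theta is unknown to the
# agent. The observed player is assumed to play a logit (quantal) response to a
# known uniform opponent mix, giving P(action 0) = sigmoid(0.5*theta - 0.5).
def p_action0(theta):
    u0 = 0.5 * theta - 0.5               # expected payoff of action 0 vs the uniform mix
    return 1.0 / (1.0 + math.exp(-u0))   # action 1 has expected payoff 0

# Simulate observations from a "true" game with theta = 2.0 (no rewards observed).
theta_true, n_obs = 2.0, 500
k = sum(random.random() < p_action0(theta_true) for _ in range(n_obs))

def log_lik(theta):
    p = p_action0(theta)
    return k * math.log(p) + (n_obs - k) * math.log(1.0 - p)

# Metropolis-Hastings: uniform prior on [-5, 5], Gaussian random-walk proposals.
theta, samples = 0.0, []
for step in range(6000):
    prop = theta + random.gauss(0.0, 0.5)
    if -5.0 <= prop <= 5.0 and math.log(random.random()) < log_lik(prop) - log_lik(theta):
        theta = prop
    if step >= 1000:                     # discard burn-in
        samples.append(theta)

post_mean = sum(samples) / len(samples)
print(post_mean)  # posterior mean, concentrating near theta_true = 2.0
```

In the paper's full pipeline, each posterior sample is an entire game, and the sampled batch then feeds the linear-programming/backward-induction step that minimizes total exploitability; the loop above only illustrates the MCMC ingredient.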

We consider computational games, sequences of games G = (G1, G2, ...) where, for all n, Gn has the same set of players. Computational games arise in electronic money systems such as Bitcoin, in cryptographic protocols, and in the study of generative adversarial networks in machine learning. Assuming that one-way functions exist, we prove that there is a 2-player zero-sum computational game G such that, for all n, the size of the action space in Gn is polynomial in n and the utility function in Gn is computable in time polynomial in n, and yet there is no ε-Nash equilibrium if players are restricted to using strategies computable by polynomial-time Turing machines, where we use a notion of Nash equilibrium that is tailored to computational games. We also show that an ε-Nash equilibrium may not exist if players are constrained to perform at most T computational steps in each of the games in the sequence. On the other hand, we show that if players can use arbitrary Turing machines to compute their strategies, then every computational game has an ε-Nash equilibrium. These results may shed light on competitive settings where the availability of more running time or faster algorithms can lead to a …

The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties than its normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was previously unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in n-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play …
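The normal-form dynamics referenced above can be sketched with plain regret matching (a sketch only: this minimizes external rather than internal regret, so its guarantee is convergence to coarse correlated equilibria; the internal-regret variant described in the abstract yields correlated equilibria). In the zero-sum game below, the time-averaged play of both no-regret players additionally approaches the Nash equilibrium.

```python
import random

random.seed(0)

# Rock-paper-scissors: 0 = rock, 1 = paper, 2 = scissors.
def u(a, b):
    """Payoff to the first player; the second player receives the negative."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

T = 20000
regret = [[0.0] * 3, [0.0] * 3]   # cumulative external regret per action
counts = [[0, 0, 0], [0, 0, 0]]   # empirical action counts

def sample(reg):
    """Play each action with probability proportional to its positive regret."""
    pos = [max(r, 0.0) for r in reg]
    s = sum(pos)
    if s == 0:
        return random.randrange(3)    # no regret yet: play uniformly
    x = random.random() * s
    for a, p in enumerate(pos):
        x -= p
        if x < 0:
            return a
    return 2

for _ in range(T):
    a0 = sample(regret[0])
    a1 = sample(regret[1])
    counts[0][a0] += 1
    counts[1][a1] += 1
    for a in range(3):
        # Regret of not having played action a instead of the realized action.
        regret[0][a] += u(a, a1) - u(a0, a1)
        regret[1][a] += u(a0, a1) - u(a0, a)

freqs = [c / T for c in counts[0]]
print([round(f, 2) for f in freqs])  # each frequency approaches 1/3
```

The extensive-form dynamics in the paper replace per-action regrets with trigger regrets over sequence-form strategies, but the uncoupled structure (each player updates only from her own regrets) is the same.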