
Title: Convergence of deep fictitious play for stochastic differential games

Stochastic differential games have been used extensively to model competition among agents in finance, for instance, in P2P lending platforms in the Fintech industry, in the banking system for systemic risk, and in insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding the Markovian Nash equilibrium of large $N$-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into $N$ sub-optimization problems and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method, in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an $\epsilon$-Nash equilibrium. We generalize the algorithm by proposing a new approach to decoupling the games, and present numerical results for large-population games showing the empirical convergence of the algorithm beyond the technical assumptions of the theorems.
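The stagewise decoupling at the heart of deep fictitious play can be illustrated on a toy static quadratic game in which each player's best response to the others' previous-stage actions is available in closed form. This is only a sketch of the fictitious-play iteration, not of the deep BSDE solver; the cost function and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def best_response(i, actions):
    """Closed-form best response in a toy quadratic game (illustrative only):
    player i minimizes a_i**2 + (a_i - mean(a_{-i}))**2,
    which gives a_i = mean(a_{-i}) / 2."""
    others = np.delete(actions, i)
    return 0.5 * others.mean()

def fictitious_play(n_players=10, n_stages=60, seed=0):
    a = np.random.default_rng(seed).normal(size=n_players)  # initial strategies
    for _ in range(n_stages):
        # Decoupled sub-problems: each player responds only to the *previous*
        # stage's actions of the others, so the N updates can run in parallel.
        a = np.array([best_response(i, a) for i in range(n_players)])
    return a

print(np.abs(fictitious_play()).max())  # shrinks toward the Nash equilibrium a* = 0
```

Given the others' actions fixed at zero, each player's cost is minimized at zero, so the all-zero profile is the Nash equilibrium the iteration contracts to.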

Award ID(s): 1953035
NSF-PAR ID: 10343586
Journal Name: Frontiers of Mathematical Finance
Volume: 1
Issue: 2
Page Range or eLocation-ID: 287
ISSN: 2769-6715
Sponsoring Org: National Science Foundation
More Like this
  1. Jin, Shi (Ed.)
    In this paper, we apply the idea of fictitious play to design deep neural networks (DNNs), and develop deep learning theory and algorithms for computing the Nash equilibrium of asymmetric N-player non-zero-sum stochastic differential games, which we refer to as deep fictitious play, a multi-stage learning process. Specifically, at each stage we propose the strategy of letting each individual player optimize her own payoff subject to the other players' previous actions, which is equivalent to solving N decoupled stochastic control optimization problems approximated by DNNs. The fictitious play strategy therefore leads to a structure consisting of N DNNs that communicate only at the end of each stage. The resulting deep learning algorithm based on fictitious play is scalable, parallel and model-free, i.e., using GPU parallelization, it can be applied to any N-player stochastic differential game with different symmetries and heterogeneities (e.g., the existence of major players). We illustrate the performance of the deep learning algorithm by comparing it to the closed-form solution of the linear quadratic game. Moreover, we prove the convergence of fictitious play under appropriate assumptions, and verify that the convergent limit forms an open-loop Nash equilibrium. Finally, we discuss extensions to other strategies designed upon fictitious play and to closed-loop Nash equilibria.
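As a concrete miniature of the fictitious-play principle this line of work builds on, classical discrete-time fictitious play on matching pennies (a standard two-player zero-sum textbook example, not an example from the paper) drives the empirical action frequencies toward the unique mixed Nash equilibrium (1/2, 1/2):

```python
import numpy as np

# Row player's payoff: +1 on a match, -1 otherwise; the column player gets -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play_mp(n_rounds=20000):
    counts = np.ones((2, 2))  # counts[p, action]; ones avoid empty beliefs
    for _ in range(n_rounds):
        belief_col = counts[1] / counts[1].sum()  # row's belief about column
        belief_row = counts[0] / counts[0].sum()  # column's belief about row
        counts[0, np.argmax(A @ belief_col)] += 1      # row best-responds
        counts[1, np.argmax(-(belief_row @ A))] += 1   # column best-responds
    return counts[0] / counts[0].sum(), counts[1] / counts[1].sum()

freq_row, freq_col = fictitious_play_mp()
print(freq_row, freq_col)  # both approach the mixed equilibrium (0.5, 0.5)
```

The realized actions keep cycling, but by Robinson's classical result for zero-sum games the empirical frequencies converge, which is the mode of convergence fictitious-play analyses target.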
  2. For any finite horizon Sinai billiard map $T$ on the two-torus, we find $t_*>1$ such that for each $t\in (0,t_*)$ there exists a unique equilibrium state $\mu_t$ for $-t\log J^uT$, and $\mu_t$ is $T$-adapted. (In particular, the SRB measure is the unique equilibrium state for $-\log J^uT$.) We show that $\mu_t$ is exponentially mixing for Hölder observables, and the pressure function $P(t) = \sup_\mu \{h_\mu - \int t\log J^uT \, d\mu\}$ is analytic on $(0,t_*)$. In addition, $P(t)$ is strictly convex if and only if $\log J^uT$ is not $\mu_t$-a.e. cohomologous to a constant, while, if there exist $t_a\ne t_b$ with $\mu_{t_a} = \mu_{t_b}$, then $P(t)$ is affine on $(0,t_*)$. An additional sparse recurrence condition gives $\lim_{t\downarrow 0} P(t) = P(0)$.

  3. In this note we study a new class of alignment models with self-propulsion and Rayleigh-type friction forces, which describes the collective behavior of agents with individual characteristic parameters. We describe the long-time dynamics via a new method which allows us to reduce the analysis from the multidimensional system to a simpler family of two-dimensional systems parametrized by a proper Grassmannian. With this method we demonstrate exponential alignment for a large (and sharp) class of initial velocity configurations confined to a sector of opening less than $\pi$.

    In the case when characteristic parameters remain frozen, the system governs dynamics of opinions for a set of players with constant convictions. Viewed as a dynamical non-cooperative game, the system is shown to possess a unique stable Nash equilibrium, which represents a settlement of opinions most agreeable to all agents. Such an agreement is furthermore shown to be a global attractor for any set of initial opinions.

  4. The purpose of this paper is to describe the feedback particle filter algorithm for problems where there are a large number ($M$) of non-interacting agents (targets) with a large number ($M$) of non-agent-specific observations (measurements) that originate from these agents. In its basic form, the problem is characterized by data association uncertainty, whereby the association between the observations and agents must be deduced in addition to the agent state. In this paper, the large-$M$ limit is interpreted as a problem of collective inference. This viewpoint is used to derive the equation for the empirical distribution of the hidden agent states. A feedback particle filter (FPF) algorithm for this problem is presented and illustrated via numerical simulations. Results are presented for the Euclidean and the finite state-space cases, both in continuous-time settings. The classical FPF algorithm is shown to be the special case (with $M = 1$) of these more general results. The simulations help show that the algorithm well approximates the empirical distribution of the hidden states for large $M$.
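The classical $M = 1$ special case mentioned above admits a compact sketch in a scalar linear-Gaussian setting. This uses the widely used constant-gain approximation in place of the full Poisson-equation gain computation; the model and every parameter below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Scalar linear-Gaussian model: dX = -a X dt + sig dB,  dZ = X dt + sig_w dW.
rng = np.random.default_rng(3)
a, sig, sig_w, dt, T, N = 1.0, 0.5, 0.3, 0.01, 5.0, 500

x = 1.0                              # hidden state
particles = rng.normal(0.0, 1.0, N)  # initial ensemble
for _ in range(int(T / dt)):
    x += -a * x * dt + sig * np.sqrt(dt) * rng.normal()
    dZ = x * dt + sig_w * np.sqrt(dt) * rng.normal()
    # Constant-gain approximation: K = Cov(X, h(X)) / sig_w**2 with h(x) = x.
    K = particles.var() / sig_w**2
    # Feedback structure: each particle is nudged by the innovation
    # dZ - (h(X_i) + mean of h) / 2 * dt, the hallmark of the FPF update.
    innovation = dZ - 0.5 * (particles + particles.mean()) * dt
    particles += (-a * particles * dt
                  + sig * np.sqrt(dt) * rng.normal(size=N)
                  + K * innovation)

print(particles.mean(), x)  # ensemble mean tracks the hidden state
```

Unlike importance-sampling particle filters, there is no weighting or resampling step: every particle is controlled by the same innovation feedback, which is what makes the scheme amenable to the large-$M$ collective-inference viewpoint.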

  5. Genetic variations of the COVID-19 virus were one of the main drivers of the pandemic outbreak in 2020 and 2021. In this article, we introduce a new type of model, a system coupling ordinary differential equations (ODEs) with a measure differential equation (MDE), stemming from the classical SIR model, for the distribution of variants. Specifically, we model the evolution of the susceptible $S$ and removed $R$ populations by ODEs and the infected $I$ population by an MDE comprised of a probability vector field (PVF) and a source term. In addition, the ODEs for $S$ and $R$ contain terms related to the measure $I$. We establish analytically the well-posedness of the coupled ODE-MDE system by using a generalized Wasserstein distance. We give two examples showing that the proposed ODE-MDE model coincides with the classical SIR model in the special cases of constant or time-dependent parameters.
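The constant-parameter classical SIR special case referred to above can be sketched with a forward-Euler integration; the rates beta and gamma and the initial condition are illustrative choices, not values from the paper.

```python
import numpy as np

def sir_step(state, beta, gamma, dt):
    """One forward-Euler step of the classical SIR model (fractions of a
    unit population).  The three derivatives sum to zero, so S + I + R
    is conserved by the scheme."""
    S, I, R = state
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    return state + dt * np.array([dS, dI, dR])

def simulate(beta=0.3, gamma=0.1, dt=0.1, t_end=200.0):
    state = np.array([0.99, 0.01, 0.0])  # S + I + R = 1
    for _ in range(int(t_end / dt)):
        state = sir_step(state, beta, gamma, dt)
    return state

S, I, R = simulate()
print(S, I, R)  # epidemic dies out; most of the population ends up removed
```

With beta/gamma = 3 the infection spreads, peaks, and decays, leaving a final susceptible fraction determined by the usual final-size relation; the measure-valued $I$ in the ODE-MDE model generalizes the scalar $I$ here to a distribution over variants.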