Stochastic differential games have been used extensively to model competition among agents in finance, for instance, in P2P lending platforms from the Fintech industry, the banking system for systemic risk, and insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding Markovian Nash equilibria of large …
Award ID(s): 1953035
Publication Date:
NSFPAR ID: 10343586
Journal Name: Frontiers of Mathematical Finance
Volume: 1
Issue: 2
Page Range or eLocation-ID: 287
ISSN: 2769-6715
Sponsoring Org: National Science Foundation
More Like this

Jin, Shi (Ed.)
In this paper, we apply the idea of fictitious play to design deep neural networks (DNNs), and develop deep learning theory and algorithms for computing the Nash equilibrium of asymmetric N-player non-zero-sum stochastic differential games, which we refer to as deep fictitious play, a multi-stage learning process. Specifically, at each stage, we propose the strategy of letting each individual player optimize her own payoff subject to the other players' previous actions, which is equivalent to solving N decoupled stochastic control optimization problems, approximated by DNNs. Therefore, the fictitious play strategy leads to a structure consisting of N DNNs, which only communicate at the end of each stage. The resulting deep learning algorithm based on fictitious play is scalable, parallel and model-free, i.e., using GPU parallelization, it can be applied to any N-player stochastic differential game with different symmetries and heterogeneities (e.g., existence of major players). We illustrate the performance of the deep learning algorithm by comparing to the closed-form solution of the linear quadratic game. Moreover, we prove the convergence of fictitious play under appropriate assumptions, and verify that the convergent limit forms an open-loop Nash equilibrium. We also discuss the extensions to other strategies designed upon fictitious play and closed-loop …
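As a toy illustration of the stage-wise structure described above (N decoupled best-response problems that only synchronize between stages), here is a minimal NumPy sketch. The DNN policy approximation is replaced by a one-parameter linear feedback and a grid search; the dynamics, cost functions, and all names are invented for illustration and are not the paper's model:

```python
import numpy as np

def cost_for_player(i, ks, x0, dt=0.01, T=1.0):
    """Simulate the deterministic toy dynamics x_i' = -k_i (x_i - mean of others)
    under linear feedback and return player i's running cost: control effort
    plus squared tracking error (an invented LQ-style payoff)."""
    x = x0.copy()
    n = len(ks)
    J = 0.0
    for _ in range(int(T / dt)):
        others = (x.sum() - x) / (n - 1)   # mean state of the other players
        a = -ks * (x - others)             # each player's linear feedback control
        J += dt * (a[i] ** 2 + (x[i] - others[i]) ** 2)
        x = x + dt * a
    return J

def fictitious_play(n=3, stages=10, grid=np.linspace(0.0, 3.0, 61)):
    """Multi-stage fictitious play: at each stage every player best-responds
    (here by grid search over its feedback gain) to the other players'
    strategies frozen from the previous stage; players sync only per stage."""
    ks = np.zeros(n)
    x0 = np.arange(1.0, n + 1.0)           # spread-out initial states
    for _ in range(stages):
        new_ks = ks.copy()
        for i in range(n):                 # N decoupled control problems
            new_ks[i] = min(grid, key=lambda k: cost_for_player(
                i, np.where(np.arange(n) == i, k, ks), x0))
        ks = new_ks                        # communication happens only here
    return ks
```

In the paper's algorithm each grid search would be replaced by training a DNN on player i's stochastic control problem, which is why the N subproblems parallelize naturally across GPUs.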

We consider the well-known Lieb–Liniger (LL) model for $N$ bosons interacting pairwise on the line via the $\delta$ potential in the mean-field scaling regime. Assuming suitable asymptotic factorization of the initial wave functions and convergence of the microscopic energy per particle, we show that the time-dependent reduced density matrices of the system converge in trace norm to the pure states given by the solution to the one-dimensional cubic nonlinear Schrödinger equation (NLS) with an explicit rate of convergence. In contrast to previous work [3] relying on the formalism of second quantization and coherent states and without an explicit rate, our proof is based on the counting method of Pickl [65, 66, 67] and Knowles and Pickl [44]. To overcome difficulties stemming from the singularity of the $\delta$ potential, we introduce a new short-range approximation argument that exploits the Hölder continuity of the $N$-body wave function in a single particle variable. By further exploiting the $L^2$-subcritical well-posedness theory for the 1D cubic NLS, we can prove mean-field convergence when the limiting solution to the NLS has finite …
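For reference, the limiting one-dimensional cubic NLS named above takes the generic form below; the coupling constant $\kappa$ and the normalization of the kinetic term are assumptions here, since the abstract does not fix conventions:

```latex
i\,\partial_t \varphi(t,x) = -\tfrac{1}{2}\,\partial_x^2 \varphi(t,x)
  + \kappa\,\lvert\varphi(t,x)\rvert^2\,\varphi(t,x),
\qquad (t,x) \in \mathbb{R} \times \mathbb{R}.
```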
For any finite horizon Sinai billiard map $T$ on the two-torus, we find $t_* > 1$ such that for each $t \in (0, t_*)$ there exists a unique equilibrium state $\mu_t$ for $-t \log J^u T$, and $\mu_t$ is $T$-adapted. (In particular, the SRB measure is the unique equilibrium state for $-\log J^u T$.) We show that $\mu_t$ is exponentially mixing for Hölder observables, and the pressure function $P(t) = \sup_\mu \{h_\mu - \int t \log J^u T \, d\mu\}$ is analytic on $(0, t_*)$. In addition, $P(t)$ is strictly convex if and only if $\log J^u T$ is not $\mu_t$-a.e. cohomologous to a constant, while, if there exist $t_a \ne t_b$ with $\mu_{t_a} = \mu_{t_b}$, then $P(t)$ is affine on $(0, t_*)$. An additional sparse recurrence condition gives $\lim_{t \downarrow 0} P(t) = P(0)$.
In this note we study a new class of alignment models with self-propulsion and Rayleigh-type friction forces, which describes the collective behavior of agents with individual characteristic parameters. We describe the long-time dynamics via a new method which allows us to reduce the analysis from the multidimensional system to a simpler family of two-dimensional systems parametrized by a proper Grassmannian. With this method we demonstrate exponential alignment for a large (and sharp) class of initial velocity configurations confined to a sector of opening less than $\pi$. In the case when characteristic parameters remain frozen, the system governs the dynamics of opinions for a set of players with constant convictions. Viewed as a dynamical non-cooperative game, the system is shown to possess a unique stable Nash equilibrium, which represents a settlement of opinions most agreeable to all agents. Such an agreement is furthermore shown to be a global attractor for any set of initial opinions.
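The exponential-alignment conclusion can be illustrated on a deliberately simplified linear toy model: plain relaxation of every agent toward the mean, not the paper's Grassmannian reduction or its self-propulsion and friction terms. All names are hypothetical:

```python
import numpy as np

def align(x0, coupling, dt=0.01, steps=2000):
    """Euler integration of the linear alignment system
    x_i' = coupling * (mean(x) - x_i): deviations from the mean decay
    exponentially at rate `coupling`, while the mean itself is conserved."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * coupling * (x.mean() - x)
    return x
```

Starting from spread-out states, the ensemble collapses onto its (conserved) mean, mirroring the exponential alignment and global-attractor behavior described in the abstract, though only for this linear caricature.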

The purpose of this paper is to describe the feedback particle filter algorithm for problems where there are a large number ($M$) of non-interacting agents (targets) with a large number ($M$) of non-agent specific observations (measurements) that originate from these agents. In its basic form, the problem is characterized by data association uncertainty whereby the association between the observations and agents must be deduced in addition to the agent state. In this paper, the large-$M$ limit is interpreted as a problem of collective inference. This viewpoint is used to derive the equation for the empirical distribution of the hidden agent states. A feedback particle filter (FPF) algorithm for this problem is presented and illustrated via numerical simulations. Results are presented for the Euclidean and the finite state-space cases, both in continuous-time settings. The classical FPF algorithm is shown to be the special case (with $M = 1$) of these more general results. The simulations help show that the algorithm well approximates the empirical distribution of the hidden states for large $M$.
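A minimal sketch of the classical FPF (the $M = 1$ special case mentioned above) for a scalar static signal with a linear observation model, using the common constant-gain approximation; the model, parameters, and function names are assumptions for illustration, not the algorithm as developed in the paper:

```python
import numpy as np

def fpf_step(particles, dy, dt, sigma_w):
    """One step of a scalar feedback particle filter with h(x) = x and the
    constant-gain approximation K = Cov(x, h(x)) / sigma_w^2.  Each particle
    is nudged by the innovation dy - 0.5*(h(x_i) + h_bar)*dt, so the ensemble
    tracks the conditional distribution without resampling."""
    h = particles                              # linear observation function
    h_bar = h.mean()
    K = np.cov(particles, h)[0, 1] / sigma_w**2
    return particles + K * (dy - 0.5 * (h + h_bar) * dt)

def run_filter(x_true=1.0, sigma_w=0.5, dt=0.01, steps=2000,
               n_particles=200, seed=0):
    """Filter noisy increments dy = x_true*dt + sigma_w*dW from a standard
    normal prior ensemble; returns the posterior mean estimate."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    for _ in range(steps):
        dy = x_true * dt + sigma_w * rng.normal(0.0, np.sqrt(dt))
        particles = fpf_step(particles, dy, dt, sigma_w)
    return particles.mean()
```

The gain-times-innovation update is the feedback structure that gives the FPF its name; the collective-inference extension in the paper generalizes this to $M$ agents with unassociated observations.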