- Award ID(s): 2153468
- PAR ID: 10538645
- Publisher / Repository: IEEE Transactions on Control of Network Systems
- Date Published:
- Journal Name: IEEE Transactions on Control of Network Systems
- ISSN: 2372-2533
- Page Range / eLocation ID: 1 to 12
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Jin, Shi (Ed.) In this paper, we apply the idea of fictitious play to design deep neural networks (DNNs), and develop deep learning theory and algorithms for computing the Nash equilibrium of asymmetric N-player non-zero-sum stochastic differential games. We refer to this multi-stage learning process as deep fictitious play. Specifically, at each stage we let each individual player optimize her own payoff subject to the other players' previous actions, which is equivalent to solving N decoupled stochastic control problems, each approximated by a DNN. The fictitious play strategy therefore leads to a structure of N DNNs that communicate only at the end of each stage. The resulting deep learning algorithm is scalable, parallel, and model-free: using GPU parallelization, it can be applied to any N-player stochastic differential game with different symmetries and heterogeneities (e.g., the existence of major players). We illustrate the performance of the algorithm by comparing it to the closed-form solution of a linear-quadratic game. Moreover, we prove the convergence of fictitious play under appropriate assumptions, and verify that the convergent limit forms an open-loop Nash equilibrium. Finally, we discuss extensions to other strategies built on fictitious play and to closed-loop Nash equilibria.
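The stage structure described in this abstract lends itself to a compact sketch. Below is a minimal, illustrative PyTorch rendering of the loop: N small networks, one per player, each trained against frozen copies of the others' previous-stage strategies on a toy linear-quadratic-type cost. The dynamics, cost, and names such as `rollout_cost` are placeholders, not the paper's implementation:

```python
# Toy sketch of the deep fictitious play loop: N networks, one per player,
# trained in decoupled stages against last stage's frozen strategies.
import copy
import torch

N, T, dt, stages, steps = 3, 20, 0.05, 10, 50
nets = [torch.nn.Sequential(torch.nn.Linear(N + 1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1)) for _ in range(N)]

def rollout_cost(i, frozen, batch=256):
    # Player i's expected cost when everyone else plays last stage's strategy.
    x = torch.zeros(batch, N)
    cost = torch.zeros(batch)
    for k in range(T):
        inp = torch.cat([torch.full((batch, 1), k * dt), x], dim=1)
        a = torch.cat([nets[i](inp) if j == i else frozen[j](inp)
                       for j in range(N)], dim=1)
        # Placeholder LQ-type running cost: control effort + deviation from the mean.
        cost = cost + 0.5 * dt * (a[:, i] ** 2 + (x[:, i] - x.mean(1)) ** 2)
        x = x + a * dt + 0.1 * dt ** 0.5 * torch.randn(batch, N)  # Euler-Maruyama step
    return cost.mean()

for stage in range(stages):
    frozen = [copy.deepcopy(n) for n in nets]   # snapshot: previous-stage strategies
    for i in range(N):                          # N decoupled problems, parallelizable
        opt = torch.optim.Adam(nets[i].parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            rollout_cost(i, frozen).backward()
            opt.step()
```

Freezing a snapshot of all networks at the start of each stage keeps the N sub-problems fully decoupled, which is what makes each stage embarrassingly parallel across players.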
-
The aim of this paper is to find a distributed solution of the generalized Nash equilibrium problem (GNEP) for a group of players that can communicate with each other over a connected communication network. Each player tries to minimize a local objective function of her own that may depend on the other players' decisions, and collectively all the players' decisions are subject to some globally shared resource constraints. After reformulating the local optimization problems, we introduce the notion of a network Lagrangian and recast the GNEP as the zero-finding problem of a properly defined operator. Utilizing the Douglas-Rachford operator splitting method, we propose a distributed algorithm that requires only local information exchange between neighboring players at each iteration. The convergence of the proposed algorithm to an exact variational generalized Nash equilibrium is established under two different sets of assumptions. The effectiveness of the proposed algorithm is demonstrated using the example of a Nash-Cournot production game.
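For readers unfamiliar with the splitting method named here, the generic Douglas-Rachford iteration for finding a zero of a sum of two maximally monotone operators A + B alternates their resolvents. The NumPy sketch below shows only that generic pattern, on a toy box-constrained least-squares problem; the paper's actual operators come from its network-Lagrangian reformulation and are not reproduced here:

```python
# Generic Douglas-Rachford splitting for 0 ∈ A(x) + B(x), demonstrated on a
# toy problem: minimize ½‖x − c‖² over a box. A = ∇f, B = normal cone of the box.
import numpy as np

c = np.array([2.0, -3.0, 0.5])
lo, hi = -1.0, 1.0
gamma = 1.0

def prox_f(x):                  # resolvent of A: prox of f(x) = ½‖x − c‖²
    return (x + gamma * c) / (1 + gamma)

def proj_box(x):                # resolvent of B: projection onto the box
    return np.clip(x, lo, hi)

z = np.zeros(3)
for _ in range(200):            # z ← z + J_B(2 J_A(z) − z) − J_A(z)
    x = prox_f(z)
    y = proj_box(2 * x - z)
    z = z + y - x
print(proj_box(prox_f(z)))      # ≈ clip(c, −1, 1) = [1, −1, 0.5]
```

At a fixed point the two resolvent evaluations agree, and that common point is the desired zero; in the paper's setting, such a point encodes a variational generalized Nash equilibrium.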
-
Stochastic differential games have been used extensively to model agents' competition in finance, for instance, in P2P lending platforms in the Fintech industry, in the banking system for systemic risk, and in insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding the Markovian Nash equilibrium of large $N$-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into $N$ sub-optimization problems and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method, in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an $\epsilon$-Nash equilibrium. We generalize the algorithm by proposing a new approach to decouple the games, and present numerical results for large population games showing the empirical convergence of the algorithm beyond the technical assumptions of the theorems.
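For reference, the $\epsilon$-Nash property claimed for the DFP strategy is the standard one: no player can lower her cost by more than $\epsilon$ through a unilateral deviation. In generic cost-minimization notation (not taken verbatim from the paper):

```latex
% \alpha^{*} = (\alpha^{1,*},\dots,\alpha^{N,*}) is an \epsilon-Nash equilibrium
% if, for every player i and every admissible deviation \alpha^{i},
J^{i}\bigl(\alpha^{i,*},\,\alpha^{-i,*}\bigr)
  \;\le\; J^{i}\bigl(\alpha^{i},\,\alpha^{-i,*}\bigr) + \epsilon,
\qquad i = 1,\dots,N.
```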
-
The study of linear-quadratic stochastic differential games on directed networks was initiated in Feng, Fouque & Ichiba [7]. In that work, the game on a directed chain with finitely or infinitely many players was defined, as was the game on a deterministic directed tree, and their Nash equilibria were computed. The current work continues the analysis by first developing a random directed chain structure, in which the interaction between every two neighbors is assumed to be random. We solve explicitly for an open-loop Nash equilibrium of the system and find that the dynamics under equilibrium is an infinite-dimensional Gaussian process described by a Catalan Markov chain introduced in [7]. The discussion of stochastic differential games is then extended to a random two-sided directed chain and a random directed tree structure.
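To fix ideas, a minimal linear-quadratic directed-chain model in the spirit of [7] has each player steer her own state while tracking her out-neighbor on the chain; the coefficients $\epsilon$, $c$, $\sigma$ below are generic placeholders rather than the cited papers' exact specification:

```latex
% Player i's state is coupled only to her out-neighbor i+1 on the chain.
dX^{i}_{t} = \alpha^{i}_{t}\,dt + \sigma\,dW^{i}_{t},
\qquad
J^{i}(\alpha) = \mathbb{E}\!\left[ \int_{0}^{T} \Bigl(
    \tfrac{1}{2}\bigl(\alpha^{i}_{t}\bigr)^{2}
    + \tfrac{\epsilon}{2}\bigl(X^{i+1}_{t} - X^{i}_{t}\bigr)^{2}
  \Bigr)\,dt
  + \tfrac{c}{2}\bigl(X^{i+1}_{T} - X^{i}_{T}\bigr)^{2} \right].
```

In the randomized version studied here, the coupling between neighbors is itself taken to be random rather than deterministic.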