

Title: Physics-Informed Graph Neural Operator for Mean Field Games on Graph: A Scalable Learning Approach
Mean-field games (MFGs) model the decision-making processes of a large number of interacting agents in multi-agent systems. This paper studies mean-field games on graphs (G-MFGs). The equilibria of G-MFGs, namely mean-field equilibria (MFE), are challenging to solve because of their high-dimensional action space: each agent has to make decisions both at junction nodes and on edges. Furthermore, whenever the initial population state on the graph changes, the MFE must be recomputed, which can be computationally expensive and memory-demanding. To improve scalability and avoid repeatedly solving G-MFGs each time the initial state changes, this paper proposes a physics-informed graph neural operator (PIGNO). The PIGNO uses a graph neural operator to generate population dynamics given an initial population distribution. To better train the neural operator, it leverages physics knowledge of how population state transitions propagate on graphs. A learning algorithm is developed, and its performance is evaluated on autonomous driving games on road networks. Our results demonstrate that the PIGNO is scalable and generalizes well when tested under unseen initial conditions.
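The physics knowledge the abstract refers to, population state transitions propagating on a graph, can be illustrated with a minimal sketch. The transition matrix, graph size, and residual form below are illustrative assumptions, not taken from the paper: a row-stochastic matrix `P` encodes how agents move between nodes in one step, and the mismatch between a predicted density and the transition-consistent one serves as a physics-informed penalty.

```python
import numpy as np

# Hypothetical 4-node road graph; P[i, j] is the fraction of agents at
# node i that move to node j in one step (row-stochastic, induced by the
# agents' routing choices).
P = np.array([
    [0.0, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 0.0],
])

def propagate(rho, P):
    """One step of the forward population dynamics on the graph."""
    return P.T @ rho

def physics_residual(rho_pred, rho_prev, P):
    """Illustrative physics-informed penalty: distance between the
    operator's predicted density and the transition-consistent one."""
    return np.linalg.norm(rho_pred - propagate(rho_prev, P))

rho0 = np.array([0.4, 0.3, 0.2, 0.1])   # initial population distribution
rho1 = propagate(rho0, P)

# Total mass is conserved because P is row-stochastic.
print(rho1.sum())                        # 1.0
print(physics_residual(rho1, rho0, P))   # 0.0 for the exact dynamics
```

In training, a residual of this kind would be evaluated on the neural operator's outputs and added to the loss, so the learned dynamics stay consistent with the graph's transition structure.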
Award ID(s):
1943998
PAR ID:
10560486
Author(s) / Creator(s):
; ;
Publisher / Repository:
MDPI
Date Published:
Journal Name:
Games
Volume:
15
Issue:
2
ISSN:
2073-4336
Page Range / eLocation ID:
12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper proposes a scalable learning framework to solve a system of coupled forward–backward partial differential equations (PDEs) arising from mean field games (MFGs). The MFG system incorporates a forward PDE to model the propagation of population dynamics and a backward PDE for a representative agent's optimal control. Existing work mainly focuses on solving the mean field equilibrium (MFE) of the MFG system under fixed boundary conditions, including the initial population state and terminal cost. To obtain the MFE efficiently, particularly when the initial population density and terminal cost vary, we utilize a physics-informed neural operator (PINO) to tackle the forward–backward PDEs. A learning algorithm is devised, and its performance is evaluated on one application domain, autonomous driving velocity control. Numerical experiments show that our method obtains the MFE accurately for different initial distributions of vehicles. The PINO exhibits both memory efficiency and generalization capabilities compared with physics-informed neural networks (PINNs).
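The forward PDE in such an MFG system is a continuity equation for the population density. As a hedged sketch of the physics-informed ingredient (the discretization, grid sizes, and test fields below are illustrative, not the paper's), a finite-difference residual of the continuity equation can be evaluated on a network's predicted density and speed fields and penalized during training:

```python
import numpy as np

# Illustrative discretization of the forward (continuity) PDE
#   d rho/dt + d(rho * u)/dx = 0,
# whose residual, evaluated on the model's outputs, would be added to
# the physics-informed loss.
def continuity_residual(rho, u, dt, dx):
    """rho, u: arrays of shape (T, X), density and speed on a grid.
    Returns the PDE residual at interior grid points."""
    drho_dt = (rho[1:, 1:-1] - rho[:-1, 1:-1]) / dt     # forward in time
    flux = rho * u
    dflux_dx = (flux[:-1, 2:] - flux[:-1, :-2]) / (2 * dx)  # central in space
    return drho_dt + dflux_dx

# Sanity check: a uniform density advected at constant speed is an exact
# solution, so its residual vanishes.
T, X = 8, 16
rho = np.full((T, X), 0.5)
u = np.full((T, X), 1.0)
res = continuity_residual(rho, u, dt=0.1, dx=0.1)
print(np.abs(res).max())   # 0.0
```

The backward (HJB) PDE would get an analogous residual term; coupling the two is what makes the forward–backward system harder than a single PDE.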
  2. Mean Field Games (MFGs) are games with a very large number of agents, and the standard equilibrium concept is the Mean Field Equilibrium (MFE). Algorithms for learning MFE in dynamic MFGs are unknown in general. Our focus is on an important subclass that possesses a monotonicity property called Strategic Complementarities (MFG-SC). We introduce a natural refinement of the equilibrium concept that we call Trembling-Hand-Perfect MFE (T-MFE), which allows agents to employ a measure of randomization while accounting for the impact of such randomization on their payoffs. We propose a simple algorithm for computing T-MFE under a known model. We also introduce a model-free and a model-based approach to learning T-MFE and provide sample complexities for both algorithms. We also develop a fully online learning scheme that obviates the need for a simulator. Finally, we empirically evaluate the performance of the proposed algorithms on examples motivated by real-world applications.
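The randomization idea behind the trembling-hand refinement can be sketched with a softened best response. The softmax form, temperature, and payoff vector below are assumptions for illustration only; the abstract does not specify the paper's randomization scheme:

```python
import numpy as np

# Sketch: instead of a pure best response, the agent plays a softened
# policy (a softmax with "tremble" temperature tau) and evaluates its
# payoff under that randomization. All numbers are illustrative.
def trembling_policy(payoffs, tau):
    z = np.exp((payoffs - payoffs.max()) / tau)   # shift for stability
    return z / z.sum()

payoffs = np.array([1.0, 2.0, 1.5])
pi = trembling_policy(payoffs, tau=0.5)
expected = pi @ payoffs     # payoff accounting for the randomization

# As tau -> 0 the policy concentrates on the best action, recovering
# the unperturbed best response.
pi_sharp = trembling_policy(payoffs, tau=1e-3)
```

The point of the refinement is that the agent's equilibrium payoff is computed under `pi`, not under the deterministic best response, so the equilibrium is robust to small "trembles."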
  4. We study a class of linear-quadratic stochastic differential games in which each player interacts directly only with its nearest neighbors in a given graph. We find a semiexplicit Markovian equilibrium for any transitive graph, in terms of the empirical eigenvalue distribution of the graph's normalized Laplacian matrix. This facilitates large-population asymptotics for various graph sequences, with several sparse and dense examples discussed in detail. In particular, the mean field game is the correct limit only in the dense graph case, that is, when the degrees diverge in a suitable sense. Although equilibrium strategies are nonlocal, depending on the behavior of all players, we use a correlation decay estimate to prove a propagation of chaos result in both the dense and sparse regimes; in the sparse case, this owes to the large distances between typical vertices. Without assuming the graphs are transitive, we also show that the mean field game solution can be used to construct decentralized approximate equilibria on any sufficiently dense graph sequence.
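The central object in that characterization, the empirical eigenvalue distribution of the normalized Laplacian, is easy to compute for a concrete transitive graph. A minimal sketch for the cycle graph C_n (a standard transitive example chosen here for illustration, not drawn from the paper):

```python
import numpy as np

# Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
def normalized_laplacian(A):
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

# Cycle graph C_n: each vertex is adjacent to its two neighbors.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

L = normalized_laplacian(A)
eigs = np.sort(np.linalg.eigvalsh(L))

# Known closed form for C_n: 1 - cos(2*pi*k/n), k = 0, ..., n-1.
expected = np.sort(1.0 - np.cos(2 * np.pi * np.arange(n) / n))
print(np.allclose(eigs, expected))   # True
```

The empirical distribution of `eigs` is exactly the input the abstract's semiexplicit equilibrium formula depends on; as `n` grows along a graph sequence, its limit determines the large-population behavior.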
  5. This paper studies stochastic games on large graphs and their graphon limits. We propose a new formulation of graphon games based on a single typical player’s label-state distribution. In contrast, other recently proposed models of graphon games work directly with a continuum of players, which involves serious measure-theoretic technicalities. In fact, by viewing the label as a component of the state process, we show in our formulation that graphon games are a special case of mean field games, albeit with certain inevitable degeneracies and discontinuities that make most existing results on mean field games inapplicable. Nonetheless, we prove the existence of Markovian graphon equilibria under fairly general assumptions as well as uniqueness under a monotonicity condition. Most importantly, we show how our notion of graphon equilibrium can be used to construct approximate equilibria for large finite games set on any (weighted, directed) graph that converges in cut norm. The lack of players’ exchangeability necessitates a careful definition of approximate equilibrium, allowing heterogeneity among the players’ approximation errors, and we show how various regularity properties of the model inputs and underlying graphon lead naturally to different strengths of approximation. Funding: D. Lacker was partially supported by the Air Force Office of Scientific Research [Grant FA9550-19-1-0291] and the National Science Foundation [Award DMS-2045328]. 
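The passage from a graphon to a large finite game can be sketched by the standard sampling construction (the particular graphon `W`, the seed, and the player count below are illustrative assumptions): each player draws a label uniformly on [0, 1], and an edge between players i and j is present with probability W(U_i, U_j).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(W, n, rng):
    """Sample an n-player interaction graph from graphon W."""
    u = rng.uniform(size=n)                    # players' labels in [0, 1]
    probs = W(u[:, None], u[None, :])          # edge probabilities
    A = (rng.uniform(size=(n, n)) < probs).astype(float)
    A = np.triu(A, 1)                          # independent upper-triangle
    return u, A + A.T                          # draws, then symmetrize

W = lambda x, y: x * y                         # illustrative product graphon
u, A = sample_graph(W, 200, rng)
```

Viewing the label `u[i]` as part of player i's state is the move the abstract describes: it turns the graphon game into a (degenerate) mean field game over label-state distributions, and graphs sampled this way converge to `W` in cut norm, which is what the approximate-equilibrium construction requires.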