We develop a general reinforcement learning framework for mean field control (MFC) problems. Such problems arise, for instance, as the limit of collaborative multi-agent control problems when the number of agents is very large. The asymptotic problem can be phrased as the optimal control of a non-linear dynamics. This can also be viewed as a Markov decision process (MDP), but the key difference from the usual RL setup is that the dynamics and the reward now depend on the state's probability distribution itself. Alternatively, it can be recast as an MDP on the Wasserstein space of measures. In this work, we introduce generic model-free algorithms based on the state-action value function at the mean field level, and we prove convergence for a prototypical Q-learning method. We then implement an actor-critic method and report numerical results on two archetypal problems: a finite-space model motivated by a cyber-security application and a continuous-space model motivated by an application to swarm motion.
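To make the idea of Q-learning "at the mean field level" concrete, the following sketch runs tabular Q-learning on a toy mean-field MDP whose state is a discretized distribution over two individual states. The grid dynamics, the target distribution (0.5, 0.5), and the quadratic tracking reward are all hypothetical illustrative choices, not the models studied in the paper.

```python
import numpy as np

# Minimal sketch: Q-learning where the "state" is the population distribution
# mu over two individual states, discretized to a grid (assumed toy model).
# Actions push one grid step of mass toward state 0 or state 1; the reward
# (hypothetical) penalizes distance from the target distribution (0.5, 0.5).

rng = np.random.default_rng(0)
N = 11                       # grid points for mu[0] in {0, 0.1, ..., 1}
ACTIONS = (-1, +1)           # shift mass down or up the grid

def step(s, a):
    """Deterministic mean-field transition on the grid, plus reward."""
    s2 = min(max(s + a, 0), N - 1)
    mu0 = s2 / (N - 1)
    reward = -(mu0 - 0.5) ** 2    # hypothetical target-tracking reward
    return s2, reward

Q = np.zeros((N, len(ACTIONS)))   # state-action values on distribution space
alpha, gamma, eps = 0.1, 0.95, 0.1
s = int(rng.integers(N))
for _ in range(20000):
    # epsilon-greedy action selection
    a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, ACTIONS[a])
    # standard tabular Q-learning update
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

# The greedy policy should steer the distribution toward the target from
# either end of the grid.
print(Q.argmax(axis=1))
```

The point of the sketch is only that, once the distribution is treated as the state, the mean-field problem is an ordinary (if large) MDP and standard value-based updates apply unchanged.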
We consider heterogeneously interacting diffusive particle systems and their large-population limit. The interaction is of mean field type, with weights characterized by an underlying graphon. A law of large numbers result is established as the system size increases and the underlying graphons converge. The limit is given by a graphon mean field system consisting of independent but heterogeneous nonlinear diffusions whose probability distributions are fully coupled. Well-posedness, continuity, and stability of such systems are provided. We also consider a not-so-dense analogue of the finite particle system, obtained by percolation with vanishing rates and suitable scaling of interactions. A law of large numbers result is proved for the convergence of such systems to the corresponding graphon mean field system.
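The finite particle system described above can be simulated directly. The sketch below runs an Euler-Maruyama discretization of N interacting diffusions whose pairwise weights come from an assumed min graphon W(x, y) = min(x, y) with a linear attractive drift; both the graphon and the drift are illustrative assumptions, not the paper's generality.

```python
import numpy as np

# Sketch of a graphon-weighted interacting particle system (assumed model):
#   dX_i = (1/N) sum_j W(u_i, u_j) (X_j - X_i) dt + sigma dB_i,
# with latent vertex labels u_i in [0, 1] and graphon W(x, y) = min(x, y).

rng = np.random.default_rng(1)
N, T, dt, sigma = 200, 1.0, 0.01, 0.1
u = (np.arange(N) + 0.5) / N          # latent labels on a regular grid
W = np.minimum.outer(u, u)            # graphon weights W_ij = min(u_i, u_j)
X = rng.normal(0.0, 1.0, size=N)      # initial particle positions

for _ in range(int(T / dt)):
    # heterogeneous mean-field drift: (1/N) sum_j W_ij (X_j - X_i)
    drift = (W @ X) / N - (W.sum(axis=1) / N) * X
    X = X + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

print(X.std())
```

Note the heterogeneity: particles with labels near 0 carry near-zero interaction weight and essentially diffuse freely, while those with labels near 1 are pulled toward the weighted population mean, which is the qualitative behavior the graphon limit captures.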
-
This work studies the behaviors of two large-population teams competing in a discrete environment. The team-level interactions are modeled as a zero-sum game, while the agent dynamics within each team are formulated as a collaborative mean-field team problem. Drawing inspiration from the mean-field literature, we first approximate the large-population team game with its infinite-population limit. Subsequently, we construct a fictitious centralized system and transform the infinite-population game into an equivalent zero-sum game between two coordinators. Via a novel reachability analysis, we study the optimality of coordination strategies, which induce decentralized strategies under the original information structure. The optimality of the resulting strategies is established in the original finite-population game, and the theoretical guarantees are verified by numerical examples.
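The zero-sum interaction between the two coordinators can be illustrated in miniature. The sketch below computes the minimax value of a small zero-sum matrix game by discretizing one coordinator's mixed strategies; the 2x2 matching-pennies payoff matrix is a hypothetical stand-in for the coordinators' payoffs, not the game constructed in the paper.

```python
import numpy as np

# Toy zero-sum coordinator game (assumed payoffs): the row coordinator picks
# a mixture (p, 1-p) over two coordination strategies; the column coordinator
# receives the negative of the row payoff.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # matching-pennies payoffs to the row side

# Minimax principle: against each mixture p the column side best-responds,
# so the row side maximizes its worst-case expected payoff
#   min_j ( p * A[0, j] + (1 - p) * A[1, j] ).
ps = np.linspace(0.0, 1.0, 1001)
worst = np.minimum(ps * A[0, 0] + (1 - ps) * A[1, 0],
                   ps * A[0, 1] + (1 - ps) * A[1, 1])
best = int(worst.argmax())
print(ps[best], worst[best])   # -> 0.5 0.0: mix evenly, game value 0
```

For this symmetric payoff matrix the security strategy is the even mixture and the value is zero; in the paper's setting the analogous object is a coordination strategy whose guarantee transfers back to decentralized agent strategies in the finite-population game.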