

Title: Extended Mean Field Control Problems: stochastic maximum principle and transport perspective
We study Mean Field stochastic control problems where the cost function and the state dynamics depend upon the joint distribution of the controlled state and the control process. We prove suitable versions of the Pontryagin stochastic maximum principle, both in necessary and in sufficient form, which extend the known conditions to this general framework. Furthermore, we suggest a variational approach to study a weak formulation of these control problems. We show a natural connection between this weak formulation and optimal transport on path space, which inspires a novel discretization scheme.
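The abstract relates the weak formulation to optimal transport on path space and proposes a discretization scheme. The paper's own scheme is not reproduced here; as a generic illustration of how a discretized optimal transport problem can be solved numerically, the sketch below uses entropic regularization with Sinkhorn iterations. All function names and parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, iters=500):
    """Entropic optimal transport between discrete measures mu and nu
    with cost matrix C, solved by Sinkhorn's matrix-scaling iterations.
    Illustrative sketch only -- not the paper's discretization scheme."""
    K = np.exp(-C / eps)          # Gibbs kernel from the cost matrix
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)        # scale columns to match marginal nu
        u = mu / (K @ v)          # scale rows to match marginal mu
    P = u[:, None] * K * v[None, :]   # transport plan
    return P, (P * C).sum()           # plan and its transport cost

```

The regularization parameter `eps` trades accuracy for convergence speed: smaller values approximate the unregularized transport cost more closely but require more iterations.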
Award ID(s):
1716673
NSF-PAR ID:
10074614
Author(s) / Creator(s):
Date Published:
Journal Name:
arXiv.org
ISSN:
2331-8422
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We construct an abstract framework in which the dynamic programming principle (DPP) can be readily proven. It encompasses a broad range of common stochastic control problems in the weak formulation, and deals with problems in the “martingale formulation” with particular ease. We give two illustrations: first, we establish the DPP for general controlled diffusions and show that their value functions are viscosity solutions of the associated Hamilton–Jacobi–Bellman equations under minimal conditions. After that, we show how to treat singular control using the example of the classical monotone-follower problem.
  2.
    We develop a probabilistic approach to continuous-time finite state mean field games. Based on an alternative description of continuous-time Markov chains by means of semimartingales and the weak formulation of stochastic optimal control, our approach not only allows us to tackle the mean field of states and the mean field of control at the same time, but also extends the strategy set of players from Markov strategies to closed-loop strategies. We show the existence and uniqueness of Nash equilibrium for the mean field game, as well as how the equilibrium of the mean field game constitutes an approximate Nash equilibrium for the game with a finite number of players under different assumptions of structure and regularity on the cost functions and transition rates between states.
  3. We develop a probabilistic approach to continuous-time finite state mean field games. Based on an alternative description of continuous-time Markov chains by means of semimartingales and the weak formulation of stochastic optimal control, our approach not only allows us to tackle the mean field of states and the mean field of control at the same time, but also extends the strategy set of players from Markov strategies to closed-loop strategies. We show the existence and uniqueness of Nash equilibrium for the mean field game, as well as how the equilibrium of the mean field game constitutes an approximate Nash equilibrium for the game with a finite number of players under different assumptions of structure and regularity on the cost functions and transition rates between states.
  4. We study the stochastic optimization of canonical correlation analysis (CCA), whose objective is nonconvex and does not decouple over training samples. Although several stochastic-gradient-based optimization algorithms have recently been proposed to solve this problem, none of them provides a global convergence guarantee. Inspired by the alternating least squares/power iterations formulation of CCA, and the shift-and-invert preconditioning method for PCA, we propose two globally convergent meta-algorithms for CCA, both of which transform the original problem into sequences of least squares problems that need only be solved approximately. We instantiate the meta-algorithms with state-of-the-art SGD methods and obtain time complexities that significantly improve upon those of previous work. Experimental results demonstrate their superior performance.
  5.
    We study a sequence of many-agent exit time stochastic control problems, parameterized by the number of agents, with risk-sensitive cost structure. We identify a fully characterizing assumption under which each such control problem corresponds to a risk-neutral stochastic control problem with additive cost, and subsequently to a risk-neutral stochastic control problem on the simplex that retains only the distribution of the agents' states, while discarding further specific information about the state of each agent. Under some additional assumptions, we also prove that the sequence of value functions of these stochastic control problems converges to the value function of a deterministic control problem, which can be used for the design of nearly optimal controls for the original problem when the number of agents is sufficiently large.
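Item 4's meta-algorithms build on the alternating least squares/power-iterations view of CCA. As a minimal sketch of that classical formulation only (not of the paper's SGD-instantiated meta-algorithms), the following hypothetical routine alternates two regularized least squares solves for the top canonical pair; all names and parameters are illustrative.

```python
import numpy as np

def cca_als(X, Y, iters=200, reg=1e-8):
    """Top canonical pair of (X, Y) via alternating least squares.
    Illustrative sketch of the classical formulation; the paper's
    meta-algorithms solve these least squares steps only approximately."""
    X = X - X.mean(axis=0)        # center both views
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    v = np.random.default_rng(0).standard_normal(Y.shape[1])
    for _ in range(iters):
        # least squares step for u, then normalize so u' Sxx u = 1
        u = np.linalg.solve(Sxx, Sxy @ v)
        u /= np.sqrt(u @ Sxx @ u)
        # least squares step for v, then normalize so v' Syy v = 1
        v = np.linalg.solve(Syy, Sxy.T @ u)
        v /= np.sqrt(v @ Syy @ v)
    return u, v, u @ Sxy @ v      # directions and canonical correlation

```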