Title: Extended Mean Field Control Problems: stochastic maximum principle and transport perspective
We study Mean Field stochastic control problems where the cost function and the state dynamics depend upon the joint distribution of the controlled state and the control process. We prove suitable versions of the Pontryagin stochastic maximum principle, both in necessary and in sufficient form, which extend the known conditions to this general framework. Furthermore, we suggest a variational approach to study a weak formulation of these control problems. We show a natural connection between this weak formulation and optimal transport on path space, which inspires a novel discretization scheme.
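As a hedged illustration of the "extended" dependence described in the abstract (cost and dynamics depending on the joint law of state and control), a generic problem of this type can be written as follows; the symbols b, f, g, σ, and α are placeholder notation, not the paper's own:

```latex
% Generic extended mean field control problem (placeholder notation):
% the drift b and running cost f depend on the joint law L(X_t, alpha_t)
% of the controlled state and the control, not on L(X_t) alone.
\begin{aligned}
  dX_t &= b\bigl(X_t, \alpha_t, \mathcal{L}(X_t, \alpha_t)\bigr)\,dt + \sigma\,dW_t, \\
  J(\alpha) &= \mathbb{E}\Bigl[\int_0^T f\bigl(X_t, \alpha_t, \mathcal{L}(X_t, \alpha_t)\bigr)\,dt
      + g\bigl(X_T, \mathcal{L}(X_T)\bigr)\Bigr],
\end{aligned}
```

where the controller minimizes J over admissible control processes α.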
Award ID(s):
1716673
PAR ID:
10074614
Author(s) / Creator(s):
Date Published:
Journal Name:
arXiv.org
ISSN:
2331-8422
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We study a stochastic control/stopping problem with a series of inequality-type and equality-type expectation constraints in a general non-Markovian framework. We demonstrate that the stochastic control/stopping problem with expectation constraints (CSEC) is independent of a specific probability setting and is equivalent to the constrained stochastic control/stopping problem in weak formulation (an optimization over joint laws of Brownian motion, state dynamics, diffusion controls and stopping rules on an enlarged canonical space). Using a martingale-problem formulation of controlled SDEs in the spirit of Stroock and Varadhan (2006), we characterize the probability classes in weak formulation by countably many actions of canonical processes, and thus obtain the upper semi-analyticity of the CSEC value function. Then we employ a measurable selection argument to establish a dynamic programming principle (DPP) in weak formulation for the CSEC value function, in which the conditional expected costs act as additional states for constraint levels at the intermediate horizon. This article extends (El Karoui and Tan, 2013) to the expectation-constraint case and extends our previous work (Bayraktar and Yao, 2024) to the more complicated setting where the diffusion is controlled. Compared to that paper, the topological properties of diffusion-control spaces and the corresponding measurability are more technically involved, which complicates the arguments, especially the measurable selection for the super-solution side of the DPP in the weak formulation.
  2. We construct an abstract framework in which the dynamic programming principle (DPP) can be readily proven. It encompasses a broad range of common stochastic control problems in the weak formulation, and deals with problems in the “martingale formulation” with particular ease. We give two illustrations; first, we establish the DPP for general controlled diffusions and show that their value functions are viscosity solutions of the associated Hamilton–Jacobi–Bellman equations under minimal conditions. After that, we show how to treat singular control on the example of the classical monotone-follower problem. 
  3. We study a sequence of many-agent exit time stochastic control problems, parameterized by the number of agents, with risk-sensitive cost structure. We identify a fully characterizing assumption, under which each such control problem corresponds to a risk-neutral stochastic control problem with additive cost, and sequentially to a risk-neutral stochastic control problem on the simplex that retains only the distribution of states of agents, while discarding further specific information about the state of each agent. Under some additional assumptions, we also prove that the sequence of value functions of these stochastic control problems converges to the value function of a deterministic control problem, which can be used for the design of nearly optimal controls for the original problem, when the number of agents is sufficiently large.
  4. This paper presents convergence analysis of a novel data-driven feedback control algorithm designed for generating online controls based on partial noisy observational data. The algorithm comprises a particle filter-enabled state estimation component, estimating the controlled system’s state via indirect observations, alongside an efficient stochastic maximum principle-type optimal control solver. By integrating weak convergence techniques for the particle filter with convergence analysis for the stochastic maximum principle control solver, we derive a weak convergence result for the optimization procedure in search of optimal data-driven feedback control. Numerical experiments are performed to validate the theoretical findings. 
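The paper's particle filter-enabled state estimation component is not spelled out in the abstract; the following is a minimal sketch of a generic bootstrap particle filter update (propagate, reweight by observation likelihood, resample), assuming a hypothetical 1D linear-Gaussian system. All parameters and names here are illustrative, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system (illustrative only):
#   x_{k+1} = a * x_k + process noise,   y_k = x_k + observation noise.
a, q, r = 0.9, 0.1, 0.2  # dynamics coefficient, process variance, obs. variance
N = 500                  # number of particles

def particle_filter_step(particles, weights, y):
    """One bootstrap-filter update given a noisy observation y."""
    # Propagate each particle through the assumed state dynamics.
    particles = a * particles + rng.normal(0.0, np.sqrt(q), size=particles.shape)
    # Reweight by the Gaussian observation likelihood.
    weights = weights * np.exp(-0.5 * (y - particles) ** 2 / r)
    weights /= weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

# Run a few updates; the state estimate is the weighted particle mean,
# which would feed the downstream feedback-control solver.
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)
for y in [0.5, 0.4, 0.45]:
    particles, weights = particle_filter_step(particles, weights, y)
estimate = float(np.dot(weights, particles))
```

In a data-driven feedback loop of the kind the abstract describes, an estimate like this would replace the unobserved state as input to the stochastic maximum principle-type control solver at each step.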
  5. We develop a probabilistic approach to continuous-time finite state mean field games. Based on an alternative description of continuous-time Markov chains by means of semimartingales and on the weak formulation of stochastic optimal control, our approach not only allows us to tackle the mean field of states and the mean field of control at the same time, but also extends the strategy set of players from Markov strategies to closed-loop strategies. We show the existence and uniqueness of the Nash equilibrium for the mean field game, as well as how the mean field game equilibrium constitutes an approximate Nash equilibrium for the game with a finite number of players, under different assumptions on the structure and regularity of the cost functions and the transition rates between states.