 Award ID(s):
 1812921
 NSFPAR ID:
 10341966
 Date Published:
 Journal Name:
 ESAIM: Control, Optimisation and Calculus of Variations
 Volume:
 25
 ISSN:
1292-8119
 Page Range / eLocation ID:
 17
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation

This paper is concerned with two-person mean-field linear-quadratic nonzero-sum stochastic differential games in an infinite horizon. Both open-loop and closed-loop Nash equilibria are introduced. The existence of an open-loop Nash equilibrium is characterized by the solvability of a system of mean-field forward-backward stochastic differential equations in an infinite horizon together with the convexity of the cost functionals, and the closed-loop representation of an open-loop Nash equilibrium is given through the solution to a system of two coupled non-symmetric algebraic Riccati equations. The existence of a closed-loop Nash equilibrium is characterized by the solvability of a system of two coupled symmetric algebraic Riccati equations. Two-person mean-field linear-quadratic zero-sum stochastic differential games in an infinite horizon are also considered. The existence of both open-loop and closed-loop saddle points is characterized by the solvability of a system of two coupled generalized algebraic Riccati equations with static stabilizing solutions. Mean-field linear-quadratic stochastic optimal control problems in an infinite horizon are discussed as well, for which it is proved that open-loop solvability and closed-loop solvability are equivalent.
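For orientation, a two-person mean-field LQ game of the kind described above typically takes the following schematic form (the notation here is generic and assumed, not taken from the paper): the state obeys a mean-field SDE driven by both players' controls, and each player minimizes a quadratic cost,

```latex
% Schematic two-person mean-field LQ game (illustrative notation only)
\begin{aligned}
dX(t) &= \big(A X(t) + \bar A\,\mathbb{E}[X(t)] + B_1 u_1(t) + B_2 u_2(t)\big)\,dt\\
      &\quad + \big(C X(t) + \bar C\,\mathbb{E}[X(t)] + D_1 u_1(t) + D_2 u_2(t)\big)\,dW(t),\\
J_i(u_1,u_2) &= \mathbb{E}\int_0^\infty \big(\langle Q_i X, X\rangle
  + \langle \bar Q_i\,\mathbb{E}[X], \mathbb{E}[X]\rangle
  + \langle R_i u_i, u_i\rangle\big)\,dt, \qquad i = 1, 2.
\end{aligned}
```

A pair $(u_1^*, u_2^*)$ is an open-loop Nash equilibrium when neither player can reduce their own cost $J_i$ by a unilateral deviation; the coupled algebraic Riccati systems mentioned in the abstract encode exactly this fixed-point condition.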

This paper is concerned with an optimal control problem for a mean-field linear stochastic differential equation with a quadratic cost functional over the infinite time horizon. Under suitable conditions, including stabilizability, the (strong) exponential, integral, and mean-square turnpike properties of the optimal pair are established. The keys are to correctly formulate the corresponding static optimization problem and to find the equations determining the correction processes. These reveal the main features of the stochastic problem, which differ significantly from the deterministic version of the theory.
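Schematically, an exponential turnpike property of the kind established here states that the optimal pair stays exponentially close to the solution $(\bar x, \bar u)$ of an associated static optimization problem (a generic statement with assumed constants $K, \lambda > 0$, not the paper's exact estimate):

```latex
\mathbb{E}\,\big|X^{*}(t) - \bar x\big|^{2}
  + \mathbb{E}\,\big|u^{*}(t) - \bar u\big|^{2}
  \;\le\; K e^{-\lambda t}, \qquad t \ge 0.
```

The integral and mean-square variants relax this pointwise bound to time-averaged and second-moment statements, respectively.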

Buttazzo, G. ; Casas, E. ; de Teresa, L. ; Glowinski, R. ; Leugering, G. ; Trélat, E. ; Zhang, X. (Ed.)
In this article, we continue developing mean-field type control theory along the lines of our earlier work [Bensoussan et al., Mean Field Games and Mean Field Type Control Theory. Springer, New York (2013)]: we first introduce the Bellman and then the master equations, together with the system of Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck (FP) equations, and then tackle them by seeking a semi-explicit solution in the linear-quadratic case, in particular with an arbitrary initial distribution. This problem, long left open, has not been specifically dealt with in the earlier literature, such as Bensoussan [Stochastic Control of Partially Observable Systems. Cambridge University Press (1992)] and Nisio [Stochastic Control Theory: Dynamic Programming Principle. Springer (2014)], which only treated the linear-quadratic setting with Gaussian initial distributions. Thanks to the effective mean-field theory, we propose a solution to this long-standing problem in the general non-Gaussian case. Besides, the problem considered here can be reduced to the model in Bandini et al. [Stochastic Process. Appl. 129 (2019) 674-711], which is fundamentally different from our present framework.
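The HJB-FP system referred to above can be sketched in generic form (purely illustrative; $\mathcal{L}^{v}$ denotes the controlled generator and $m$ the law of the state, both assumed notation rather than the paper's):

```latex
% Schematic coupled HJB--FP system (illustrative only)
\begin{aligned}
-\partial_t V(t,x) &= \inf_{v}\big\{\mathcal{L}^{v} V(t,x) + f(x, m(t), v)\big\},
  & V(T,\cdot) &= g(\cdot, m(T)),\\
\partial_t m(t) &= \big(\mathcal{L}^{v^{*}(t)}\big)^{*} m(t),
  & m(0) &= m_0,
\end{aligned}
```

where the backward HJB equation and the forward FP equation are coupled through the distribution $m$ and the optimizing feedback $v^{*}$. It is this forward-backward coupling that makes a semi-explicit LQ solution with arbitrary initial distribution $m_0$ valuable.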
Duality between estimation and optimal control is a problem of rich historical significance. The first duality principle appears in the seminal paper of Kalman and Bucy, where the problem of minimum-variance estimation is shown to be dual to a linear-quadratic (LQ) optimal control problem. Duality offers a constructive proof technique to derive the Kalman filter equation from the optimal control solution. This paper generalizes the classical Kalman-Bucy duality result to the nonlinear filter: the state evolves as a continuous-time Markov process and the observation is a nonlinear function of the state corrupted by additive Gaussian noise. A dual process is introduced as a backward stochastic differential equation (BSDE). The process is used to transform the problem of minimum-variance estimation into an optimal control problem. Its solution is obtained from an application of the maximum principle and subsequently used to derive the equation of the nonlinear filter. The classical Kalman-Bucy duality result is shown to be a special case.
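The classical linear duality can be checked numerically: the steady-state filter covariance for a pair $(A, C)$ solves the same algebraic Riccati equation as the LQ regulator for the transposed pair $(A^\top, C^\top)$. A minimal sketch (the matrices below are chosen arbitrarily for illustration and are not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative linear system: dx = Ax dt + dw,  dy = Cx dt + dv
A = np.array([[0.0, 1.0], [-2.0, -0.5]])  # Hurwitz, so the ARE is solvable
C = np.array([[1.0, 0.0]])
Q = np.eye(2)            # process-noise intensity
R = np.array([[0.5]])    # observation-noise intensity

# Filter ARE:  A P + P A^T - P C^T R^{-1} C P + Q = 0.
# solve_continuous_are(a, b, q, r) solves  a^T X + X a - X b r^{-1} b^T X + q = 0,
# so passing the *transposed* pair (A^T, C^T) yields the filter covariance P --
# this substitution is exactly the Kalman-Bucy duality map.
P = solve_continuous_are(A.T, C.T, Q, R)

# The Kalman gain is the transpose of the LQR gain of the dual control problem.
K_filter = P @ C.T @ np.linalg.inv(R)   # Kalman gain
K_lqr = np.linalg.inv(R) @ C @ P        # LQR gain for (A^T, C^T)
assert np.allclose(K_filter, K_lqr.T)

# Residual of the filter Riccati equation (should be ~0):
residual = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(R) @ C @ P + Q
print(np.max(np.abs(residual)))
```

The BSDE construction in the paper extends precisely this transposition step to the nonlinear, non-Gaussian setting.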

The spike variation technique plays a crucial role in deriving Pontryagin-type maximum principles of optimal control for ordinary differential equations (ODEs), partial differential equations (PDEs), stochastic differential equations (SDEs), and (deterministic forward) Volterra integral equations (FVIEs) when the control domains are not assumed to be convex. It is natural to expect that the technique could be extended to (forward) stochastic Volterra integral equations (FSVIEs). However, by mimicking the case of SDEs, one encounters an essential difficulty in handling an involved quadratic term. To overcome this difficulty, we introduce an auxiliary process to which Itô's formula can be applied, and we develop new techniques inspired by stochastic linear-quadratic optimal control problems. A suitable representation of the above-mentioned quadratic form is then obtained, and the second-order adjoint equations are derived. Consequently, the maximum principle of Pontryagin type is established. Some relevant extensions are investigated as well.
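For context, the spike (needle) variation perturbs an optimal control $u^{*}(\cdot)$ only on a vanishingly small interval (a standard generic form, with $\tau$, $\varepsilon$, and $v$ as assumed notation):

```latex
u^{\varepsilon}(t) =
\begin{cases}
v, & t \in [\tau, \tau + \varepsilon),\\
u^{*}(t), & \text{otherwise},
\end{cases}
```

with $v$ an arbitrary admissible control value. Expanding $J(u^{\varepsilon}) - J(u^{*})$ in powers of $\varepsilon$ is where the quadratic term and the second-order adjoint equations mentioned above arise; because the perturbation set shrinks with $\varepsilon$, no convexity of the control domain is needed.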