Title: Linear quadratic stochastic optimal control problems with operator coefficients: open-loop solutions
An optimal control problem is considered for linear stochastic differential equations with quadratic cost functional. The coefficients of the state equation and the weights in the cost functional are bounded operators on the spaces of square integrable random variables. The main motivation of our study is linear quadratic (LQ, for short) optimal control problems for mean-field stochastic differential equations. Open-loop solvability of the problem is characterized as the solvability of a system of linear coupled forward-backward stochastic differential equations (FBSDE, for short) with operator coefficients, together with a convexity condition for the cost functional. Under proper conditions, the well-posedness of such an FBSDE, which leads to the existence of an open-loop optimal control, is established. Finally, as applications of our main results, a general mean-field LQ control problem and a concrete mean-variance portfolio selection problem in the open-loop case are solved.
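For orientation, a standard finite-horizon mean-field LQ problem — the main motivating case named in the abstract — takes the following generic form (illustrative notation from the general literature, not the paper's operator-coefficient formulation):

```latex
% Generic mean-field LQ problem (illustrative notation):
% state equation, driven by a Brownian motion W
\begin{aligned}
dX(t) &= \big\{A X(t) + \bar A\,\mathbb{E}[X(t)] + B u(t) + \bar B\,\mathbb{E}[u(t)]\big\}\,dt \\
      &\quad + \big\{C X(t) + \bar C\,\mathbb{E}[X(t)] + D u(t) + \bar D\,\mathbb{E}[u(t)]\big\}\,dW(t),
      \qquad X(0)=x,
\end{aligned}
% quadratic cost functional to be minimized over admissible controls u
\begin{aligned}
J(u) = \mathbb{E}\int_0^T \Big( \langle Q X, X\rangle
      + \langle \bar Q\,\mathbb{E}[X], \mathbb{E}[X]\rangle
      + \langle R u, u\rangle
      + \langle \bar R\,\mathbb{E}[u], \mathbb{E}[u]\rangle \Big)\,dt
      + \mathbb{E}\,\langle G X(T), X(T)\rangle .
\end{aligned}
```

The operator-coefficient setting of the paper subsumes this form: taking expectations couples the state with its mean, which is what produces the coupled FBSDE characterization of open-loop solvability.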
Award ID(s): 1812921
NSF-PAR ID: 10341966
Journal Name: ESAIM: Control, Optimisation and Calculus of Variations
Volume: 25
ISSN: 1292-8119
Page Range / eLocation ID: 17
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This paper is concerned with two-person mean-field linear-quadratic non-zero sum stochastic differential games in an infinite horizon. Both open-loop and closed-loop Nash equilibria are introduced. The existence of an open-loop Nash equilibrium is characterized by the solvability of a system of mean-field forward-backward stochastic differential equations in an infinite horizon and the convexity of the cost functionals, and the closed-loop representation of an open-loop Nash equilibrium is given through the solution to a system of two coupled non-symmetric algebraic Riccati equations. The existence of a closed-loop Nash equilibrium is characterized by the solvability of a system of two coupled symmetric algebraic Riccati equations. Two-person mean-field linear-quadratic zero-sum stochastic differential games in an infinite horizon are also considered. The existence of both open-loop and closed-loop saddle points is characterized by the solvability of a system of two coupled generalized algebraic Riccati equations with static stabilizing solutions. Mean-field linear-quadratic stochastic optimal control problems in an infinite horizon are discussed as well, for which it is proved that the open-loop solvability and closed-loop solvability are equivalent.
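The coupled Riccati systems mentioned above generalize the single algebraic Riccati equation of classical infinite-horizon LQ control, whose generic form (standard notation, assumed here for illustration only) is:

```latex
% Classical infinite-horizon LQ baseline (illustrative):
% algebraic Riccati equation for the stabilizing solution P
P A + A^{\top} P - P B R^{-1} B^{\top} P + Q = 0,
% associated optimal feedback control
u^{*}(t) = -R^{-1} B^{\top} P\, x(t).
```

In the game setting each player carries its own Riccati equation, and the coupling between the two equations (symmetric in the closed-loop case, non-symmetric in the open-loop representation) is what distinguishes the results above from the single-player theory.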
  2. This paper is concerned with an optimal control problem for a mean-field linear stochastic differential equation with a quadratic functional on an infinite time horizon. Under suitable conditions, including stabilizability, the (strong) exponential, integral, and mean-square turnpike properties of the optimal pair are established. The keys are to formulate the corresponding static optimization problem correctly and to find the equations determining the correction processes. These reveal the main features of the stochastic problems, which differ significantly from the deterministic version of the theory.
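As a rough sketch of what a turnpike property asserts, consider the deterministic analogue (illustrative only; the paper's point is precisely that the correct stochastic static problem differs from this naive version): the static problem minimizes the running cost over steady states, and the optimal trajectory approaches its solution exponentially fast.

```latex
% Deterministic analogue of the static optimization problem (illustrative):
(x^{*}, u^{*}_{s}) \in \arg\min_{(x,u)} \ \langle Q x, x\rangle + \langle R u, u\rangle
\quad \text{subject to} \quad A x + B u + b = 0,
% schematic exponential turnpike estimate, constants C, \lambda > 0:
\mathbb{E}\,|X^{*}(t) - x^{*}|^{2} \le C\, e^{-\lambda t}, \qquad t \ge 0.
```

In the stochastic mean-field setting the correction processes mentioned in the abstract quantify how the optimal pair fluctuates around the static optimum, rather than converging to it pointwise.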
  3. Buttazzo, G. ; Casas, E. ; de Teresa, L. ; Glowinski, R. ; Leugering, G. ; Trélat, E. ; Zhang, X. (Ed.)

    In the present article, we continue developing mean field type control theory along the lines of our earlier work [Bensoussan et al., Mean Field Games and Mean Field Type Control Theory. Springer, New York (2013)]: we first introduce the Bellman and master equations and the system of Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck (FP) equations, and then tackle them by seeking a semi-explicit solution in the linear quadratic case, especially with an arbitrary initial distribution. This problem, left open for long, has not been specifically dealt with in the earlier literature, such as Bensoussan [Stochastic Control of Partially Observable Systems. Cambridge University Press (1992)] and Nisio [Stochastic Control Theory: Dynamic Programming Principle. Springer (2014)], which tackled the linear quadratic setting only with Gaussian initial distributions. Thanks to the effective mean-field theory, we propose a solution to this long-standing problem in the general non-Gaussian case. Besides, the problem considered here can be reduced to the model in Bandini et al. [Stochastic Process. Appl. 129 (2019) 674–711], which is fundamentally different from our proposed framework.
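The HJB-FP system referred to above has, in its generic form, the following schematic structure (standard in the mean field control/games literature; the notation here is illustrative, not taken from the article):

```latex
% Schematic HJB--Fokker-Planck system (generic form):
% backward HJB equation for the value function v, Hamiltonian H
-\partial_t v - \nu \Delta v + H(x, D v, m) = 0, \qquad v(T,\cdot) \ \text{given},
% forward Fokker-Planck equation for the state distribution m
\partial_t m - \nu \Delta m - \operatorname{div}\!\big(m\, D_p H(x, D v, m)\big) = 0,
\qquad m(0,\cdot) = m_0 .
```

The initial distribution $m_0$ is exactly where the contribution lies: the semi-explicit linear quadratic solution is obtained for arbitrary, possibly non-Gaussian $m_0$, whereas the earlier references cited above required $m_0$ to be Gaussian.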

     
  4. Duality between estimation and optimal control is a problem of rich historical significance. The first duality principle appears in the seminal paper of Kalman-Bucy, where the problem of minimum variance estimation is shown to be dual to a linear quadratic (LQ) optimal control problem. Duality offers a constructive proof technique to derive the Kalman filter equation from the optimal control solution. This paper generalizes the classical duality result of Kalman-Bucy to the nonlinear filter: The state evolves as a continuous-time Markov process and the observation is a nonlinear function of state corrupted by an additive Gaussian noise. A dual process is introduced as a backward stochastic differential equation (BSDE). The process is used to transform the problem of minimum variance estimation into an optimal control problem. Its solution is obtained from an application of the maximum principle, and subsequently used to derive the equation of the nonlinear filter. The classical duality result of Kalman-Bucy is shown to be a special case. 
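The classical special case recovered by this duality is the Kalman-Bucy filter, whose standard equations (linear-Gaussian model, with process noise covariance $Q$ and observation noise covariance $R$) are:

```latex
% Kalman--Bucy filter (classical special case):
% model: dX = A X\,dt + dB,\qquad dY = C X\,dt + dV
% filter equation for the conditional mean \hat X
d\hat X(t) = A \hat X(t)\,dt + K(t)\,\big(dY(t) - C \hat X(t)\,dt\big),
\qquad K(t) = P(t)\, C^{\top} R^{-1},
% Riccati equation for the error covariance P
\dot P(t) = A P(t) + P(t) A^{\top} + Q - P(t)\, C^{\top} R^{-1} C\, P(t).
```

The Riccati equation above is exactly the one arising from the dual LQ optimal control problem, which is how duality yields a constructive derivation of the gain $K(t)$; the paper's contribution is to extend this correspondence to nonlinear Markov dynamics via a BSDE-valued dual process.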
  5. The spike variation technique plays a crucial role in deriving Pontryagin-type maximum principles of optimal controls for ordinary differential equations (ODEs), partial differential equations (PDEs), stochastic differential equations (SDEs), and (deterministic forward) Volterra integral equations (FVIEs), when the control domains are not assumed to be convex. It is natural to expect that such a technique could be extended to the case of (forward) stochastic Volterra integral equations (FSVIEs). However, by mimicking the case of SDEs, one encounters an essential difficulty in handling an involved quadratic term. To overcome this difficulty, we introduce an auxiliary process to which Itô's formula applies, and develop new techniques inspired by stochastic linear-quadratic optimal control problems. A suitable representation of the above-mentioned quadratic form is then obtained, and the second-order adjoint equations are derived. Consequently, the maximum principle of Pontryagin type is established. Some relevant extensions are investigated as well.
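For readers unfamiliar with the SDE baseline being extended here, the stochastic maximum principle in the classical case involves a backward adjoint equation of roughly the following shape (schematic; sign and Hamiltonian conventions vary across references, and the FSVIE versions in the paper are more involved):

```latex
% First-order adjoint BSDE in the classical SDE setting (schematic):
% state dX = b(t,X,u)\,dt + \sigma(t,X,u)\,dW, running cost f, terminal cost h
dp(t) = -\big\{ b_x^{\top}\, p(t) + \sigma_x^{\top}\, q(t) - f_x \big\}\,dt
        + q(t)\,dW(t), \qquad p(T) = -h_x(X(T)).
% The spike-variation necessary condition carries an extra quadratic term in
% \sigma, weighted by the second-order adjoint process P(\cdot); it is this
% quadratic term whose FSVIE analogue causes the essential difficulty above.
```

In the FSVIE setting the coefficients depend on the history of the state, so the Itô calculus that produces the quadratic term for SDEs is unavailable directly; the auxiliary process introduced in the paper restores enough structure for an Itô-type argument.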