Title: Linear quadratic stochastic optimal control problems with operator coefficients: open-loop solutions
An optimal control problem is considered for linear stochastic differential equations with quadratic cost functional. The coefficients of the state equation and the weights in the cost functional are bounded operators on the spaces of square integrable random variables. The main motivation of our study is linear quadratic (LQ, for short) optimal control problems for mean-field stochastic differential equations. Open-loop solvability of the problem is characterized as the solvability of a system of linear coupled forward-backward stochastic differential equations (FBSDE, for short) with operator coefficients, together with a convexity condition for the cost functional. Under proper conditions, the well-posedness of such an FBSDE, which leads to the existence of an open-loop optimal control, is established. Finally, as applications of our main results, a general mean-field LQ control problem and a concrete mean-variance portfolio selection problem in the open-loop case are solved.
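For orientation, here is a minimal sketch of the mean-field LQ problem that motivates the operator-coefficient setting, written in standard notation; the symbols (A, B, C, D, Q, R, G, their barred counterparts, and the Brownian motion W) are generic placeholders and not necessarily the paper's exact formulation.

```latex
% Hedged sketch: a standard mean-field LQ problem (generic notation, not taken from the paper).
% State equation driven by a Brownian motion W, with mean-field (expectation) terms,
% followed by a quadratic cost weighting both the state/control and their expectations.
\[
\begin{aligned}
dX(t) &= \big[A X(t) + \bar A\,\mathbb{E}[X(t)] + B u(t) + \bar B\,\mathbb{E}[u(t)]\big]\,dt \\
      &\quad + \big[C X(t) + \bar C\,\mathbb{E}[X(t)] + D u(t) + \bar D\,\mathbb{E}[u(t)]\big]\,dW(t),
      \qquad X(0) = x, \\[4pt]
J(x;u) &= \mathbb{E}\int_0^T \Big(\langle Q X, X\rangle + \langle \bar Q\,\mathbb{E}[X], \mathbb{E}[X]\rangle
          + \langle R u, u\rangle + \langle \bar R\,\mathbb{E}[u], \mathbb{E}[u]\rangle\Big)\,dt \\
      &\quad + \mathbb{E}\langle G X(T), X(T)\rangle + \langle \bar G\,\mathbb{E}[X(T)], \mathbb{E}[X(T)]\rangle.
\end{aligned}
\]
```

Viewing maps such as $X \mapsto AX + \bar A\,\mathbb{E}[X]$ as bounded operators on the space of square integrable random variables is one natural route to the operator-coefficient formulation described in the abstract.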
Award ID(s): 1812921
NSF-PAR ID: 10341966
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: ESAIM: Control, Optimisation and Calculus of Variations
Volume: 25
ISSN: 1292-8119
Page Range / eLocation ID: 17
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This paper is concerned with two-person mean-field linear-quadratic non-zero-sum stochastic differential games in an infinite horizon. Both open-loop and closed-loop Nash equilibria are introduced. The existence of an open-loop Nash equilibrium is characterized by the solvability of a system of mean-field forward-backward stochastic differential equations in an infinite horizon together with the convexity of the cost functionals, and the closed-loop representation of an open-loop Nash equilibrium is given through the solution to a system of two coupled non-symmetric algebraic Riccati equations. The existence of a closed-loop Nash equilibrium is characterized by the solvability of a system of two coupled symmetric algebraic Riccati equations. Two-person mean-field linear-quadratic zero-sum stochastic differential games in an infinite horizon are also considered. The existence of both open-loop and closed-loop saddle points is characterized by the solvability of a system of two coupled generalized algebraic Riccati equations with static stabilizing solutions. Mean-field linear-quadratic stochastic optimal control problems in an infinite horizon are discussed as well, for which it is proved that open-loop solvability and closed-loop solvability are equivalent.
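For background, a hedged sketch of the kind of generalized algebraic Riccati equation that underlies such infinite-horizon results, written for the generic single-controller stochastic LQ problem $dX = (AX + Bu)\,dt + (CX + Du)\,dW$ with running cost $\langle QX, X\rangle + \langle Ru, u\rangle$; all symbols are generic assumptions, and the game setting of the paper couples two such equations through the mean-field terms.

```latex
% Generic infinite-horizon stochastic algebraic Riccati equation (illustrative only).
\[
P A + A^{\top} P + C^{\top} P C + Q
  - \big(P B + C^{\top} P D\big)\big(R + D^{\top} P D\big)^{-1}\big(B^{\top} P + D^{\top} P C\big) = 0,
\]
% with the associated closed-loop feedback gain
\[
\Theta = -\big(R + D^{\top} P D\big)^{-1}\big(B^{\top} P + D^{\top} P C\big).
\]
```

Roughly speaking, a static stabilizing solution $P$ is one for which the closed-loop system with gain $\Theta$ is mean-square stable, which is the solution concept referenced in the abstract.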
  2. This paper is concerned with an optimal control problem for a mean-field linear stochastic differential equation with a quadratic functional in an infinite time horizon. Under suitable conditions, including stabilizability, the (strong) exponential, integral, and mean-square turnpike properties for the optimal pair are established. The keys are to correctly formulate the corresponding static optimization problem and to find the equations determining the correction processes. These reveal the main features of the stochastic problem, which differ significantly from the deterministic version of the theory.
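To fix ideas, here is a hedged sketch of what an exponential turnpike estimate typically asserts in the deterministic setting, with $(x^*, u^*)$ solving the static optimization problem and $K, \lambda > 0$ generic constants (all notation assumed, not taken from the paper).

```latex
% Typical exponential turnpike estimate (deterministic flavor, illustrative only):
\[
|x^*_T(t) - x^*| + |u^*_T(t) - u^*| \;\le\; K\big(e^{-\lambda t} + e^{-\lambda (T-t)}\big),
\qquad t \in [0, T].
\]
```

That is, away from the two ends of the horizon the optimal pair stays exponentially close to the static optimum. In the stochastic, mean-square version referred to in the abstract, the comparison is taken in expectation and involves the correction processes mentioned there.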
  3. Buttazzo, G.; Casas, E.; de Teresa, L.; Glowinski, R.; Leugering, G.; Trélat, E.; Zhang, X. (Ed.)

    In the present article, we follow the approach to mean field type control theory developed in our earlier work [Bensoussan et al., Mean Field Games and Mean Field Type Control Theory. Springer, New York (2013)]: we first introduce the Bellman and then the master equations, together with the system of Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck (FP) equations, and then tackle them by looking for a semi-explicit solution in the linear quadratic case, in particular with an arbitrary initial distribution. This problem, long left open, has not been specifically dealt with in the earlier literature, such as Bensoussan [Stochastic Control of Partially Observable Systems. Cambridge University Press (1992)] and Nisio [Stochastic Control Theory: Dynamic Programming Principle. Springer (2014)], which only treated the linear quadratic setting with Gaussian initial distributions. Thanks to the effective mean-field theory, we propose a solution to this long-standing problem in the general non-Gaussian case. The problem considered here can also be reduced to the model in Bandini et al. [Stochastic Process. Appl. 129 (2019) 674–711], which is fundamentally different from our proposed framework.
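For readers unfamiliar with the HJB/FP machinery, here is a schematic version of such a coupled backward-forward system in generic notation (drift $g$, running cost $f$, noise level $\sigma$, flow of distributions $m_t$, all assumed); the mean-field-type control version discussed above carries additional terms coming from differentiating the cost with respect to the distribution.

```latex
% Schematic HJB (backward) / Fokker--Planck (forward) system, generic form (illustrative only).
\[
\begin{aligned}
-\partial_t V(x,t) - \tfrac{\sigma^2}{2}\Delta_x V(x,t)
  &= \inf_{v}\Big\{ g(x, m_t, v)\cdot\nabla_x V(x,t) + f(x, m_t, v) \Big\}, \\
\partial_t m_t(x) - \tfrac{\sigma^2}{2}\Delta_x m_t(x)
  &+ \nabla_x\!\cdot\!\big(m_t(x)\, g(x, m_t, \hat v(x,t))\big) = 0,
\end{aligned}
\]
```

where $\hat v(x,t)$ attains the infimum, $V$ propagates backward from a terminal cost, and $m_t$ propagates forward from the initial distribution, which is precisely the data the abstract allows to be arbitrary (and non-Gaussian).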

     
  4. Editor-in-Chief: George Yin (Ed.)
    This paper presents approaches to mean-field control, motivated by distributed control of multi-agent systems. Control solutions are based on a convex optimization problem, whose domain is a convex set of probability mass functions (pmfs). The main contributions are as follows: 1. Kullback-Leibler-Quadratic (KLQ) optimal control is a special case, in which the objective function is composed of a control cost in the form of the Kullback-Leibler divergence between a candidate pmf and the nominal, plus a quadratic cost on the sequence of marginals. The theory in this paper extends prior work on deterministic control systems, establishing that the optimal solution is an exponential tilting of the nominal pmf. Transform techniques are introduced to reduce the complexity of the KLQ solution, motivated by the need to consider time horizons that are much longer than the inter-sampling times required for reliable control. 2. Infinite-horizon KLQ leads to a state feedback control solution with attractive properties. It can be expressed either as state feedback, in which the state is the sequence of marginal pmfs, or as an open-loop solution that is more easily computed. 3. Numerical experiments are surveyed in an application of distributed control of residential loads to provide grid services, similar to utility-scale battery storage. The results show that KLQ optimal control enables the aggregate power consumption of a collection of flexible loads to track a time-varying reference signal, while simultaneously ensuring that each individual load satisfies its own quality-of-service constraints.
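A hedged sketch of a KLQ-type objective consistent with the description above: $p$ is a candidate trajectory pmf, $p^0$ the nominal, $\mu^p_t$ the induced marginal at time $t$, $r_t$ a reference, and $\kappa > 0$ a weight; all notation is assumed, not taken from the paper.

```latex
% KLQ-style objective: KL control cost plus quadratic tracking of the marginals (illustrative).
\[
\min_{p}\; D_{\mathrm{KL}}\big(p \,\|\, p^{0}\big)
  \;+\; \frac{\kappa}{2}\sum_{t=1}^{T} \big\|\mu^{p}_t - r_t\big\|^{2},
\]
% First-order conditions give an exponentially tilted optimizer:
\[
p^{*}(\omega) \;\propto\; p^{0}(\omega)\, e^{-W(\omega)},
\]
```

for some function $W$ of the trajectory determined by the quadratic tracking terms; this is the "exponential tilting of the nominal pmf" stated in the abstract.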
  5. Duality between estimation and optimal control is a problem of rich historical significance. The first duality principle appears in the seminal paper of Kalman-Bucy, where the problem of minimum variance estimation is shown to be dual to a linear quadratic (LQ) optimal control problem. Duality offers a constructive proof technique to derive the Kalman filter equation from the optimal control solution. This paper generalizes the classical duality result of Kalman-Bucy to the nonlinear filter: the state evolves as a continuous-time Markov process and the observation is a nonlinear function of the state corrupted by additive Gaussian noise. A dual process is introduced as a backward stochastic differential equation (BSDE). The process is used to transform the problem of minimum variance estimation into an optimal control problem. Its solution is obtained from an application of the maximum principle, and subsequently used to derive the equation of the nonlinear filter. The classical duality result of Kalman-Bucy is shown to be a special case.
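For concreteness, here is a hedged sketch of the classical Kalman-Bucy duality in one common formulation: a linear-Gaussian model with state matrix $A$, observation matrix $H$, process-noise covariance $\sigma\sigma^\top$, observation-noise covariance $R$, and initial covariance $\Sigma_0$ (all notation assumed). Estimating the linear functional $\langle f, X_T\rangle$ with minimum variance corresponds to a backward-in-time LQ problem.

```latex
% Classical Kalman--Bucy duality (schematic, one common formulation; illustrative only).
% Filtering model:  dX_t = A X_t dt + \sigma dB_t,   dZ_t = H X_t dt + dW_t  (cov(dW_t) = R dt).
% Dual LQ problem: time runs backward, matrices are transposed, noise covariances become weights.
\[
\min_{u}\;\; y_0^{\top}\Sigma_0\, y_0
  + \int_0^T \Big( y_t^{\top}\sigma\sigma^{\top} y_t + u_t^{\top} R\, u_t \Big)\,dt,
\qquad -\dot y_t = A^{\top} y_t + H^{\top} u_t, \quad y_T = f.
\]
```

The Riccati equation of this dual problem reproduces the Kalman filter covariance equation; per the abstract, the paper extends this correspondence to the nonlinear filter by replacing the backward ODE with a backward stochastic differential equation.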