

Title: Finite-horizon covariance control of linear time-varying systems
We consider the problem of finite-horizon optimal control of a discrete-time linear time-varying system subject to a stochastic disturbance, with fully observable state. The initial state of the system is drawn from a known Gaussian distribution, and the final state distribution is required to reach a given target Gaussian distribution while minimizing the expected control effort. We derive the linear optimal control policy by first presenting an efficient solution for the diffusion-less case, and we then solve the case with diffusion by reformulating the system as a superposition of diffusion-less systems. We show that the resulting solution coincides with that of an LQG problem with a particular terminal cost weight matrix.
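The covariance dynamics underlying this formulation can be illustrated with a short sketch. This is not the paper's optimal policy: the system matrices and the feedback gain below are illustrative assumptions, and the sketch only shows how a linear state feedback u_k = K x_k shapes the terminal state covariance through the closed-loop recursion Sigma_{k+1} = (A + B K) Sigma_k (A + B K)^T + W.

```python
import numpy as np

# Hypothetical 2-state discrete-time system x_{k+1} = A x_k + B u_k + w_k,
# w_k ~ N(0, W). Under a linear policy u_k = K x_k the state covariance
# propagates as Sigma_{k+1} = (A + B K) Sigma_k (A + B K)^T + W.
N = 20
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed time-invariant for brevity
B = np.array([[0.0], [0.1]])
W = 0.01 * np.eye(2)                      # disturbance covariance
Sigma0 = np.eye(2)                        # initial state covariance

def propagate(K):
    """Propagate the state covariance over the horizon under u_k = K x_k."""
    Sigma = Sigma0.copy()
    for _ in range(N):
        Acl = A + B @ K
        Sigma = Acl @ Sigma @ Acl.T + W
    return Sigma

Sigma_open = propagate(np.zeros((1, 2)))        # no control: covariance grows
Sigma_fb = propagate(np.array([[-2.0, -3.0]]))  # illustrative stabilizing gain
```

The covariance-control problem in the abstract asks for the gains (and the associated LQG terminal weight) that steer Sigma exactly to a prescribed target at the final time; the sketch only demonstrates the propagation mechanism those gains act through.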
Award ID(s):
1662523
NSF-PAR ID:
10077891
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Conference on Decision and Control
Page Range / eLocation ID:
3606 to 3611
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the decentralized control of radial distribution systems with controllable photovoltaic inverters and energy storage resources. For such systems, we investigate the problem of designing fully decentralized controllers that minimize the expected cost of balancing demand, while guaranteeing the satisfaction of individual resource and distribution system voltage constraints. Employing a linear approximation of the branch flow model, we formulate this problem as the design of a decentralized disturbance-feedback controller that minimizes the expected value of a convex quadratic cost function, subject to robust convex quadratic constraints on the system state and input. As such problems are, in general, computationally intractable, we derive a tractable inner approximation to this decentralized control problem, which enables the efficient computation of an affine control policy via the solution of a finite-dimensional conic program. As affine policies are, in general, suboptimal for the family of systems considered, we provide an efficient method to bound their suboptimality via the optimal solution of another finite-dimensional conic program. A case study of a 12 kV radial distribution system demonstrates that decentralized affine controllers can perform near-optimally.
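The affine disturbance-feedback structure described above can be sketched for a toy scalar chain. Everything here (dynamics, gains, horizon) is an illustrative assumption, not the paper's distribution-system model; the point is only that, because the state is affine in the disturbances under such a policy, the expected quadratic cost has a closed form, which the sketch cross-checks by Monte Carlo.

```python
import numpy as np

# Toy scalar chain x_{k+1} = a x_k + b u_k + w_k with an affine
# disturbance-feedback policy u = u_bar + M w (strictly lower-triangular M,
# so each u_k uses only past disturbances). All numbers are illustrative.
rng = np.random.default_rng(0)
N, a, b, x0 = 5, 0.9, 1.0, 1.0
M = 0.1 * np.tril(rng.normal(size=(N, N)), -1)  # strictly causal gains
u_bar = 0.1 * rng.normal(size=N)
Sw = 0.04 * np.eye(N)                           # disturbance covariance

# Stacked dynamics: the vector of states (x_1, ..., x_N) equals
# Gx * x0 + Gu @ u + Gw @ w, with lower-triangular convolution maps.
Gx = np.array([a ** (k + 1) for k in range(N)])
Gu = np.zeros((N, N))
Gw = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        Gu[k, j] = a ** (k - j) * b
        Gw[k, j] = a ** (k - j)

# With u = u_bar + M w the state is affine in w: x = c + D w.
c = Gx * x0 + Gu @ u_bar
D = Gu @ M + Gw

# E[(c + D w)'(c + D w)] = c'c + tr(D' D Sw), and similarly for the input.
J_exact = c @ c + np.trace(D.T @ D @ Sw) + u_bar @ u_bar + np.trace(M.T @ M @ Sw)

# Monte Carlo cross-check of the closed form.
w = rng.multivariate_normal(np.zeros(N), Sw, size=100_000)
X = w @ D.T + c
U = w @ M.T + u_bar
J_mc = np.mean(np.sum(X ** 2, axis=1) + np.sum(U ** 2, axis=1))
```

The papers above optimize over (u_bar, M) subject to robust state and input constraints via a conic program; the sketch only exhibits the affine-in-disturbance structure that makes the expected cost a deterministic quadratic in the policy parameters.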
  2. We consider the decentralized control of radial distribution systems with controllable photovoltaic inverters and storage devices. For such systems, we consider the problem of designing controllers that minimize the expected cost of meeting demand, while respecting distribution system and resource constraints. Employing a linear approximation of the branch flow model, we formulate this problem as the design of a decentralized disturbance-feedback controller that minimizes the expected value of a convex quadratic cost function, subject to convex quadratic constraints on the state and input. As such problems are, in general, computationally intractable, we derive an inner approximation to this decentralized control problem, which enables the efficient computation of an affine control policy via the solution of a conic program. As affine policies are, in general, suboptimal for the systems considered, we provide an efficient method to bound their suboptimality via the solution of another conic program. A case study of a 12 kV radial distribution feeder demonstrates that decentralized affine controllers can perform near-optimally.
  3. Buttazzo, G. ; Casas, E. ; de Teresa, L. ; Glowinski, R. ; Leugering, G. ; Trélat, E. ; Zhang, X. (Ed.)

    In this article, we continue the development of mean field type control theory begun in our earlier works [Bensoussan et al., Mean Field Games and Mean Field Type Control Theory. Springer, New York (2013)]: we first introduce the Bellman and master equations together with the system of Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck (FP) equations, and we then tackle them by seeking a semi-explicit solution in the linear quadratic case, notably with an arbitrary initial distribution. This problem, open for a long time, has not been specifically addressed in the earlier literature, such as Bensoussan [Stochastic Control of Partially Observable Systems. Cambridge University Press (1992)] and Nisio [Stochastic Control Theory: Dynamic Programming Principle. Springer (2014)], which treated the linear quadratic setting only with Gaussian initial distributions. Thanks to the effective mean-field theory, we propose a solution to this long-standing problem in the general non-Gaussian case. Moreover, the problem considered here can be reduced to the model in Bandini et al. [Stochastic Process. Appl. 129 (2019) 674-711], which is fundamentally different from the framework proposed here.

     
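As a point of reference, the coupled HJB-Fokker-Planck system mentioned above takes, in a generic mean field control setting, the following schematic form (this is the standard textbook shape with drift g, running cost L, and diffusion coefficient sigma, not the article's exact equations):

```latex
% Backward HJB equation for the value function V(x,t), coupled with the
% forward Fokker-Planck equation for the state distribution m(\cdot,t):
-\partial_t V
  = \inf_{v}\Big\{ L(x,v,m(t)) + \nabla V \cdot g(x,v,m(t)) \Big\}
    + \tfrac{\sigma^2}{2}\,\Delta V,
  \qquad V(x,T) = h(x,m(T)),
\qquad
\partial_t m
  = \tfrac{\sigma^2}{2}\,\Delta m
    - \operatorname{div}\!\big(m\,\hat v(x,m)\big),
  \qquad m(\cdot,0) = m_0 ,
```

where \hat v is the minimizer in the HJB equation and m_0 is the initial state distribution; allowing m_0 to be an arbitrary (possibly non-Gaussian) distribution in the linear quadratic case is precisely the article's contribution.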
  4. Duality between estimation and optimal control is a problem of rich historical significance. The first duality principle appears in the seminal paper of Kalman-Bucy, where the problem of minimum variance estimation is shown to be dual to a linear quadratic (LQ) optimal control problem. Duality offers a constructive proof technique to derive the Kalman filter equation from the optimal control solution. This paper generalizes the classical duality result of Kalman-Bucy to the nonlinear filter: The state evolves as a continuous-time Markov process and the observation is a nonlinear function of state corrupted by an additive Gaussian noise. A dual process is introduced as a backward stochastic differential equation (BSDE). The process is used to transform the problem of minimum variance estimation into an optimal control problem. Its solution is obtained from an application of the maximum principle, and subsequently used to derive the equation of the nonlinear filter. The classical duality result of Kalman-Bucy is shown to be a special case. 
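The duality in item 4 has a simple discrete-time analogue that is easy to verify numerically. The paper treats the continuous-time nonlinear case; the sketch below only checks the classical linear-Gaussian special case with illustrative matrices: the forward filtering Riccati recursion for the pair (A, C) coincides, step for step, with the backward LQR Riccati recursion for the dual pair (A', C') with the same weights.

```python
import numpy as np

# Discrete-time toy check of estimation/control duality. All matrices are
# illustrative assumptions; Q and R play dual roles as noise covariances
# (filtering) and cost weights (control), and P0 is both the prior
# covariance and the dual terminal weight.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[0.5]])
P0 = np.eye(2)
N = 15

def kf_step(P):
    """One step of the prediction-form filtering Riccati recursion."""
    K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    return A @ P @ A.T - K @ C @ P @ A.T + Q

def lqr_step(S, F, G):
    """One backward step of the LQR Riccati recursion for the pair (F, G)."""
    K = np.linalg.inv(G.T @ S @ G + R) @ (G.T @ S @ F)
    return F.T @ S @ F - F.T @ S @ G @ K + Q

Ps = [P0]                     # forward filter covariances
for _ in range(N):
    Ps.append(kf_step(Ps[-1]))

Ss = [P0]                     # backward LQR matrices for the dual pair
for _ in range(N):
    Ss.append(lqr_step(Ss[-1], A.T, C.T))

# Duality: the two sequences coincide element by element.
```

Substituting F = A' and G = C' into the LQR step reproduces the filter step algebraically, which is the constructive content of the Kalman-Bucy duality in this special case.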
  5. A linear-quadratic optimal control problem for a forward stochastic Volterra integral equation (FSVIE) is considered. Under the usual convexity conditions, open-loop optimal control exists, which can be characterized by the optimality system, a coupled system of an FSVIE and a type-II backward SVIE (BSVIE). To obtain a causal state feedback representation for the open-loop optimal control, a path-dependent Riccati equation for an operator-valued function is introduced, via which the optimality system can be decoupled. In the process of decoupling, a type-III BSVIE is introduced whose adapted solution can be used to represent the adapted M-solution of the corresponding type-II BSVIE. Under certain conditions, it is proved that the path-dependent Riccati equation admits a unique solution, which means that the decoupling field for the optimality system is found. Therefore, a causal state feedback representation of the open-loop optimal control is constructed. An additional interesting finding is that when the control only appears in the diffusion term, not in the drift term of the state system, the causal state feedback reduces to a Markovian state feedback. 
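For orientation, a controlled linear FSVIE of the kind discussed in item 5 has the generic form below (schematic; the coefficient names follow common usage in the SVIE literature and are not necessarily the paper's notation):

```latex
% Controlled forward stochastic Volterra integral equation (schematic):
X(t) = \varphi(t)
  + \int_0^t \big[A(t,s)X(s) + B(t,s)u(s)\big]\,ds
  + \int_0^t \big[C(t,s)X(s) + D(t,s)u(s)\big]\,dW(s),
% with a quadratic cost minimized over adapted controls u:
J(u) = \mathbb{E}\Big[\int_0^T \big(\langle Q(t)X(t),X(t)\rangle
  + \langle R(t)u(t),u(t)\rangle\big)\,dt\Big].
```

In this notation, the "control only in the diffusion term" case highlighted at the end of the abstract corresponds to B \equiv 0 with D \neq 0, which is where the causal state feedback reduces to a Markovian one.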