An integral quadratic constraint framework for real-time steady-state optimization of linear time-invariant systems
Achieving optimal steady-state performance in real time is an increasingly important requirement for many critical infrastructure systems. In pursuit of this goal, this paper develops a systematic framework for designing feedback controllers for Linear Time-Invariant (LTI) systems that continuously track the optimal solution of a predefined optimization problem. We logically divide the proposed solution into three components. The first component estimates the system state from the output measurements. The second component uses the estimated state to compute a drift direction based on an optimization algorithm. The third component computes an input to the LTI system that drives it toward the optimal steady state. We analyze the equilibrium characteristics of the closed-loop system and provide conditions for optimality and stability. Our analysis shows that the proposed solution guarantees optimal steady-state performance even in the presence of constant disturbances. Furthermore, by leveraging recent results on the analysis of optimization algorithms using Integral Quadratic Constraints (IQCs), the proposed framework translates input-output properties of the optimization component into sufficient conditions, based on linear matrix inequalities (LMIs), for global exponential stability of the closed-loop system. We illustrate several resulting controller designs using a numerical example.
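As a concrete illustration, here is a minimal simulation sketch of the three-component loop, not the paper's exact construction: a disturbance-augmented Luenberger observer serves as the estimation component, the gradient of an assumed objective f(y) = (y - 1)^2 supplies the drift direction, and an integrator turns the drift into the plant input. All matrices, gains, and pole locations below are illustrative assumptions.

```python
# Minimal sketch of the three-component controller; all numbers are assumptions.
import numpy as np
from scipy.signal import place_poles

# Plant: dx/dt = A x + B u + E d,  y = C x, with unknown constant disturbance d.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [0.5]])
E = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
d = 0.3

grad = lambda y: 2.0 * (y - 1.0)  # objective f(y) = (y - 1)^2, minimizer y* = 1

# Component 1: Luenberger observer on the augmented state z = (x, d),
# so the constant disturbance is estimated along with the state.
Aa = np.block([[A, E], [np.zeros((1, 3))]])
Ba = np.vstack([B, [[0.0]]])
Ca = np.hstack([C, [[0.0]]])
L = place_poles(Aa.T, Ca.T, [-3.0, -4.0, -5.0]).gain_matrix.T

k_int, dt, T = 0.5, 1e-3, 30.0
x, z, u = np.zeros(2), np.zeros(3), np.zeros(1)
for _ in range(int(T / dt)):
    y = C @ x                                           # output measurement
    z = z + dt * (Aa @ z + Ba @ u + L @ (y - Ca @ z))   # component 1: estimation
    drift = -grad(C @ z[:2])                            # component 2: gradient drift
    u = u + dt * k_int * drift                          # component 3: input integration
    x = x + dt * (A @ x + B @ u + (E * d).ravel())      # plant with disturbance
print("steady-state output:", C @ x)                    # approaches the minimizer y* = 1
```

Augmenting the observer with a constant-disturbance state is one simple way to make the output estimate unbiased in steady state, which the equilibrium argument requires; the IQC/LMI machinery in the paper replaces the ad-hoc gain choices here with certified stability conditions.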
- PAR ID: 10106017
- Date Published:
- Journal Name: 2018 Annual American Control Conference (ACC)
- Page Range / eLocation ID: 597 to 603
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like This
- Closed-loop stability of uncertain linear systems is studied under state feedback realized by a linear quadratic regulator (LQR). Sufficient conditions are presented that ensure closed-loop stability in the presence of uncertainty, first for a non-robust LQR designed for a nominal model that does not reflect the system uncertainty. Since these conditions are usually violated for large uncertainty, a procedure is offered to redesign such a non-robust LQR into a robust one that ensures closed-loop stability under a predefined level of uncertainty. The analysis relies largely on the concept of inverse optimal control to construct suitable performance measures for uncertain linear systems, which are non-quadratic in structure but yield optimal controls in the form of LQR. The relationship between robust LQR and zero-sum linear quadratic dynamic games is established.
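  As a hedged sketch of this setting, with all system data assumed: design an LQR from the continuous-time algebraic Riccati equation for a nominal model, then check whether the gain still stabilizes a perturbed model. The paper's inverse-optimal redesign procedure is not reproduced here.

  ```python
  # Nominal LQR design and a robustness check against a perturbed model.
  import numpy as np
  from scipy.linalg import solve_continuous_are

  A0 = np.array([[0.0, 1.0], [-1.0, -0.5]])   # nominal model (assumed)
  B = np.array([[0.0], [1.0]])
  Q, R = np.eye(2), np.eye(1)

  P = solve_continuous_are(A0, B, Q, R)       # Riccati solution for the nominal model
  K = np.linalg.solve(R, B.T @ P)             # LQR gain, u = -K x

  dA = np.array([[0.0, 0.0], [0.8, 0.0]])     # unmodeled uncertainty (assumed)
  for A in (A0, A0 + dA):
      poles = np.linalg.eigvals(A - B @ K)
      print(np.max(poles.real))               # negative means the loop remains stable
  ```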
- Deep reinforcement learning approaches are increasingly appealing for designing nonlinear controllers for voltage control problems, but the lack of stability guarantees hinders their real-world deployment. This letter constructs a decentralized RL-based controller for inverter-based real-time voltage control in distribution systems. It features two components: a transient control policy and a steady-state performance optimizer. The transient policy is parameterized as a neural network, and the steady-state optimizer represents the gradient of the long-term operating cost function. The two parts are synthesized through a safe gradient flow framework, which prevents violation of the reactive power capacity constraints. We prove that if the output of the transient controller is bounded and monotonically decreasing with respect to its input, then the closed-loop system is asymptotically stable and converges to the optimal steady-state solution. We demonstrate the effectiveness of our method through experiments on the IEEE 13-bus and 123-bus distribution test feeders.
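  The two-component structure lends itself to a compact sketch. Below, a tanh stands in for the trained transient network (bounded and monotonically decreasing in its input), the steady-state optimizer is the gradient of an assumed quadratic voltage-deviation cost, and clipping enforces the reactive power limits in the spirit of a safe gradient flow. The linearized grid model and all constants are assumptions, not data from the letter.

  ```python
  # Sketch: monotone transient policy + steady-state gradient, with capacity limits.
  import numpy as np

  X = np.array([[0.6, 0.2, 0.1],
                [0.2, 0.5, 0.2],
                [0.1, 0.2, 0.4]])          # assumed reactive-power-to-voltage sensitivity
  v_ext = np.array([1.04, 0.97, 0.95])     # bus voltages with zero reactive injection
  q = np.zeros(3)                          # per-inverter reactive power setpoints
  q_max = 0.5                              # capacity limit (assumed)

  def cost_grad(v):
      return 2.0 * (v - 1.0)               # gradient of assumed cost: deviation from 1 p.u.

  dt = 0.1
  for _ in range(500):
      v = v_ext + X @ q                    # linearized voltage model
      # Transient policy: bounded and monotonically decreasing in its input;
      # applied elementwise, so each inverter uses only its local voltage.
      dq = -np.tanh(cost_grad(v))
      # Safe gradient flow: keep q inside the capacity constraints.
      q = np.clip(q + dt * dq, -q_max, q_max)

  v = v_ext + X @ q
  print(np.round(v, 4))                    # voltages driven toward 1.0 p.u. within limits
  ```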
- We consider the problem of designing a feedback controller that guides the input and output of a linear time-invariant system to a minimizer of a convex optimization problem. The system is subject to an unknown disturbance, piecewise constant in time, which shifts the feasible set defined by the system equilibrium constraints. Our proposed design combines proportional-integral control with gradient feedback, and enforces the Karush-Kuhn-Tucker optimality conditions in steady-state without incorporating dual variables into the controller. We prove that the input and output variables achieve optimality in steady-state, and provide a stability criterion based on absolute stability theory. The effectiveness of our approach is illustrated on a simple example system.
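  A minimal sketch of this design, with assumed plant data and an illustrative objective f(y) = (y - 0.5)^2: the gradient evaluated at the measured output passes through a PI element, so any equilibrium must satisfy the stationarity condition grad f(y) = 0, with no dual variables, even when the disturbance steps.

  ```python
  # PI control acting on the gradient of the objective; all numbers are assumptions.
  import numpy as np

  A = np.array([[-1.0, 0.0], [1.0, -3.0]])
  B = np.array([[1.0], [0.0]])
  C = np.array([[0.0, 1.0]])
  w = np.array([0.2, -0.1])                 # unknown disturbance, piecewise constant

  grad = lambda y: 2.0 * (y - 0.5)          # f(y) = (y - 0.5)^2, minimizer y* = 0.5
  kp, ki, dt = 1.0, 2.0, 1e-3

  x, xi = np.zeros(2), np.zeros(1)
  for k in range(40000):
      if k == 20000:
          w = np.array([-0.3, 0.4])         # disturbance steps; the feasible set shifts
      g = grad(C @ x)                       # gradient at the measured output
      xi = xi + dt * g                      # integral of the gradient signal
      u = -kp * g - ki * xi                 # proportional-integral action
      x = x + dt * (A @ x + B @ u + w)
  print("output:", C @ x)                   # re-converges to the minimizer y* = 0.5
  ```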
- Matni, N.; Morari, M.; Pappas, G. J. (Eds.) In this paper, we propose a robust reinforcement learning method for a class of linear discrete-time systems to handle model mismatches that may be induced by the sim-to-real gap. Under the formulation of risk-sensitive linear quadratic Gaussian control, a dual-loop policy optimization algorithm is proposed to iteratively approximate the robust optimal controller. The convergence and robustness of the dual-loop algorithm are rigorously analyzed: it is shown to converge uniformly to the optimal solution, and, by invoking the concept of small-disturbance input-to-state stability, it is guaranteed to still converge to a neighborhood of the optimal solution when subjected to a sufficiently small disturbance at each step. When the system matrices are unknown, a learning-based off-policy policy optimization algorithm is proposed for the same class of linear systems with additive Gaussian noise. Numerical simulations demonstrate the efficacy of the proposed algorithm.
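  The model-based outer loop of such a scheme can be illustrated with classical policy iteration for a discrete-time LQR, alternating policy evaluation (a Lyapunov solve) with policy improvement. This is a sketch under assumed system data; the risk-sensitive inner loop and the off-policy learning variant are not reproduced.

  ```python
  # Policy iteration for a discrete-time LQR; system data are assumptions.
  import numpy as np
  from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

  A = np.array([[1.0, 0.1], [0.0, 1.0]])
  B = np.array([[0.0], [0.1]])
  Q, R = np.eye(2), np.eye(1)

  K = np.array([[5.0, 5.0]])                 # initial stabilizing gain (assumed)
  for _ in range(20):
      Acl = A - B @ K
      # Policy evaluation: P solves P = Acl' P Acl + Q + K' R K.
      P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
      # Policy improvement: greedy gain for the evaluated cost.
      K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

  P_star = solve_discrete_are(A, B, Q, R)    # exact Riccati solution for comparison
  print(np.max(np.abs(P - P_star)))          # near zero after convergence
  ```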