

Title: An integral quadratic constraint framework for real-time steady-state optimization of linear time-invariant systems
Achieving optimal steady-state performance in real time is an increasingly important requirement for many critical infrastructure systems. In pursuit of this goal, this paper develops a systematic framework for designing feedback controllers for Linear Time-Invariant (LTI) systems that continuously track the optimal solution of a predefined optimization problem. We logically divide the proposed solution into three components. The first component estimates the system state from the output measurements. The second component uses the estimated state to compute a drift direction based on an optimization algorithm. The third component calculates an input to the LTI system that aims to drive the system toward the optimal steady state. We analyze the equilibrium characteristics of the closed-loop system and provide conditions for optimality and stability. Our analysis shows that the proposed solution guarantees optimal steady-state performance even in the presence of constant disturbances. Furthermore, by leveraging recent results on the analysis of optimization algorithms using Integral Quadratic Constraints (IQCs), the proposed framework can translate input-output properties of the optimization component into sufficient conditions, based on linear matrix inequalities (LMIs), for global exponential stability of the closed-loop system. We illustrate several resulting controller designs using a numerical example.
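A minimal numerical sketch of the idea follows (not the paper's full three-component design: the state estimator is omitted because the illustrative cost depends only on the measured output, and the plant, cost, and step size are all made-up values). A gradient-based integral controller drives a stable LTI plant to the minimizer of a convex output cost, and the optimal steady state is reached even though the constant disturbance w never enters the controller.

```python
import numpy as np

# Hypothetical stable 2-state LTI plant with a constant, unknown disturbance w.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
w = np.array([0.2, -0.1])            # constant disturbance, never measured

y_ref = 1.5                          # phi(y) = 0.5 * (y - y_ref)**2
grad_phi = lambda y: y - y_ref

eta = 0.01                           # small step size for timescale separation
x = np.zeros(2)
u = np.zeros(1)
for _ in range(5000):
    y = C @ x                        # output measurement
    u = u - eta * grad_phi(y)        # optimization component: integrated gradient drift
    x = A @ x + B @ u + w            # plant update

print(f"steady-state output {(C @ x).item():.4f} vs. optimizer {y_ref}")
```

The integral action is what cancels the constant disturbance: at any equilibrium the gradient must vanish, so the output sits at the minimizer regardless of w.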
Award ID(s): 1711188, 1544771
NSF-PAR ID: 10106017
Author(s) / Creator(s):
Date Published:
Journal Name: 2018 Annual American Control Conference (ACC)
Page Range / eLocation ID: 597 to 603
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Deep reinforcement learning approaches are becoming appealing for the design of nonlinear controllers for voltage control problems, but the lack of stability guarantees hinders their real-world deployment. This letter constructs a decentralized RL-based controller for inverter-based real-time voltage control in distribution systems. It features two components: a transient control policy and a steady-state performance optimizer. The transient policy is parameterized as a neural network, and the steady-state optimizer represents the gradient of the long-term operating cost function. The two parts are synthesized through a safe gradient flow framework, which prevents the violation of reactive power capacity constraints. We prove that if the output of the transient controller is bounded and monotonically decreasing with respect to its input, then the closed-loop system is asymptotically stable and converges to the optimal steady-state solution. We demonstrate the effectiveness of our method by conducting experiments with IEEE 13-bus and 123-bus distribution system test feeders. 
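As a rough sketch of the stability condition (not the letter's trained neural policy or its IEEE test feeders), the toy loop below runs a bounded, monotonically decreasing tanh controller at each bus of a hypothetical three-bus linearized voltage model, with a clip standing in for the reactive-power capacity handling of the safe gradient flow; all matrices and limits are invented.

```python
import numpy as np

# Hypothetical 3-bus linearized voltage model: v = X @ q + v_ext, X pos. def.
X = np.array([[0.30, 0.10, 0.05],
              [0.10, 0.25, 0.08],
              [0.05, 0.08, 0.20]])   # assumed voltage-reactive-power sensitivity
v_ext = np.array([1.05, 1.08, 1.07]) # voltages with zero reactive injection
v_ref = 1.0
q_max = 0.5                          # symmetric per-inverter capacity limit

# Bounded transient policy, monotonically decreasing in the local voltage error.
# (A trained network would replace tanh; monotonicity is the key property.)
policy = lambda err: -0.8 * np.tanh(5.0 * err)

q = np.zeros(3)
for _ in range(200):
    v = X @ q + v_ext
    q = q + 0.1 * policy(v - v_ref)  # incremental, gradient-flow-like update
    q = np.clip(q, -q_max, q_max)    # enforce reactive power capacity

print("voltages:", np.round(X @ q + v_ext, 4))   # should settle near v_ref
```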
  2. Recently, there has been renewed interest in data-driven control, that is, the design of controllers directly from observed data. In the case of linear time-invariant (LTI) systems, several approaches have been proposed that lead to tractable optimization problems. On the other hand, the case of nonlinear dynamics is considerably less developed, with existing approaches limited to at most rational dynamics and requiring the solution to a computationally expensive Sum of Squares (SoS) optimization. Since SoS problems typically scale combinatorially with the size of the problem, these approaches are limited to relatively low order systems. In this paper, we propose an alternative, based on the use of state-dependent representations. This idea allows for synthesizing data-driven controllers by solving at each time step an on-line optimization problem whose complexity is comparable to the LTI case. Further, the proposed approach is not limited to rational dynamics. The main result of the paper shows that the feasibility of this on-line optimization problem guarantees that the proposed controller renders the origin a globally asymptotically stable equilibrium point of the closed-loop system. These results are illustrated with some simple examples. The paper concludes by briefly discussing the prospects for adding performance criteria. 
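The flavor of the approach can be sketched in a few lines (a toy stand-in, not the paper's actual synthesis procedure): factor the nonlinear dynamics as x+ = A(x)x + Bu and, at each time step, solve an optimization in u whose size matches the LTI case. The example system and the one-step least-squares objective below are hypothetical.

```python
import numpy as np

# Toy state-dependent representation: x1+ = x2, x2+ = 0.5*x1*x2 + u,
# written as x+ = A(x) @ x + B @ u so the per-step problem is LTI-sized.
B = np.array([[0.0], [1.0]])

def A_of_x(x):
    return np.array([[0.0,        1.0],
                     [0.5 * x[1], 0.0]])

x = np.array([1.0, -2.0])
for k in range(30):
    Ax = A_of_x(x) @ x
    # On-line optimization at each step: u = argmin ||Ax + B u||^2,
    # solved in closed form via least squares.
    u, *_ = np.linalg.lstsq(B, -Ax, rcond=None)
    x = Ax + B @ u

print("final state:", x)   # driven to the origin
```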
  3. We consider the problem of designing a feedback controller that guides the input and output of a linear time-invariant system to a minimizer of a convex optimization problem. The system is subject to an unknown disturbance that determines the feasible set defined by the system equilibrium constraints. Our proposed design enforces the Karush-Kuhn-Tucker optimality conditions in steady-state without incorporating dual variables into the controller. We prove that the input and output variables achieve optimality in equilibrium and outline two procedures for designing controllers that stabilize the closed-loop system. We explore key ideas through simple examples and simulations. 
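A stripped-down sketch of the mechanism (hypothetical steady-state map G and disturbance d, and an unconstrained cost so the KKT conditions reduce to a vanishing gradient): the controller descends G^T grad f evaluated at the measured output, so the unknown disturbance never needs to be estimated and no dual variables appear.

```python
import numpy as np

# Hypothetical steady-state input-output map y = G u + d of an LTI plant.
G = np.array([[2.0, 0.5],
              [0.3, 1.5]])
d = np.array([0.4, -0.2])            # unknown constant disturbance offset
y_star = np.array([1.0, 2.0])
grad_f = lambda y: y - y_star        # f(y) = 0.5 * ||y - y_star||^2

u = np.zeros(2)
for _ in range(500):
    y = G @ u + d                    # measured output at (quasi-)steady state
    u = u - 0.1 * G.T @ grad_f(y)    # gradient feedback, no dual variables

# At equilibrium the KKT condition G^T grad f(y) = 0 holds by construction.
print("KKT residual:", np.linalg.norm(G.T @ grad_f(G @ u + d)))
```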
  4. N. Matni, M. Morari (Eds.)
    In this paper, we propose a robust reinforcement learning method for a class of linear discrete-time systems to handle model mismatches that may be induced by the sim-to-real gap. Under the formulation of risk-sensitive linear quadratic Gaussian control, a dual-loop policy optimization algorithm is proposed to iteratively approximate the robust and optimal controller. The convergence and robustness of the dual-loop policy optimization algorithm are rigorously analyzed. It is shown that the dual-loop policy optimization algorithm uniformly converges to the optimal solution. In addition, by invoking the concept of small-disturbance input-to-state stability, it is guaranteed that the dual-loop policy optimization algorithm still converges to a neighborhood of the optimal solution when the algorithm is subject to a sufficiently small disturbance at each step. When the system matrices are unknown, a learning-based off-policy policy optimization algorithm is proposed for the same class of linear systems with additive Gaussian noise. Numerical simulations demonstrate the efficacy of the proposed algorithm.
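For intuition about the inner loop alone (the outer robustness iteration, the risk-sensitivity parameter, and the model-free off-policy variant are all omitted), here is exact policy iteration for a standard discrete-time LQR with hypothetical matrices; it converges to the Riccati solution that such dual-loop schemes build on.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Hypothetical double-integrator-like plant and quadratic costs.
A = np.array([[1.0, 0.2],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.2]])
Q = np.eye(2)
R = np.eye(1)

K = np.array([[0.5, 0.5]])           # assumed initial stabilizing gain
for _ in range(20):
    Acl = A - B @ K
    # Policy evaluation: P solves the closed-loop Lyapunov equation
    # Acl^T P Acl - P + (Q + K^T R K) = 0.
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: greedy gain for the current value function.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

P_opt = solve_discrete_are(A, B, Q, R)
print("gap to Riccati solution:", np.linalg.norm(P - P_opt))
```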
  5. The minimum-gain eigenvalue assignment/pole placement problem (MGEAP) is a classical problem in LTI systems with static state feedback. In this paper, we study the MGEAP when the state feedback has arbitrary sparsity constraints. We formulate the sparse MGEAP problem as an equality-constrained optimization problem and present an analytical characterization of its locally optimal solution in terms of eigenvector matrices of the closed-loop system. This result is used to provide a geometric interpretation of the solution of the non-sparse MGEAP, thereby providing additional insights for this classical problem. Further, we develop an iterative projected gradient descent algorithm to obtain local solutions for the sparse MGEAP using a parametrization based on the Sylvester equation. We present a heuristic algorithm to compute the projections, which also provides a novel method to solve the sparse EAP. Also, a relaxed version of the sparse MGEAP is presented and an algorithm is developed to obtain approximately sparse local solutions to the MGEAP. Finally, numerical studies are presented to compare the properties of the algorithms, which suggest that the proposed projected gradient descent algorithm is effective.
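To ground the Sylvester-equation parametrization (without the sparsity projections or the gain minimization), the snippet below checks on hypothetical data that solving A X - X Lam = B G for a free parameter G and setting F = G X^{-1} places the closed-loop eigenvalues at the spectrum of Lam.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Hypothetical data: double integrator, one input, desired spectrum {-1, -2}.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Lam = np.diag([-1.0, -2.0])          # desired closed-loop eigenvalues
G = np.array([[1.0, 1.0]])           # free parameter (one row per input)

X = solve_sylvester(A, -Lam, B @ G)  # solves A X + X (-Lam) = B G
F = G @ np.linalg.inv(X)             # feedback gain recovered from (G, X)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ F))
```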