
Title: On Design of Robust Linear Quadratic Regulators
Closed-loop stability of uncertain linear systems is studied under the state feedback realized by a linear quadratic regulator (LQR). Sufficient conditions are presented that ensure the closed-loop stability in the presence of uncertainty, initially for the case of a non-robust LQR designed for a nominal model not reflecting the system uncertainty. Since these conditions are usually violated for a large uncertainty, a procedure is offered to redesign such a non-robust LQR into a robust one that ensures closed-loop stability under a predefined level of uncertainty. The analysis of this paper largely relies on the concept of inverse optimal control to construct suitable performance measures for uncertain linear systems, which are non-quadratic in structure but yield optimal controls in the form of LQR. The relationship between robust LQR and zero-sum linear quadratic dynamic games is established.
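To illustrate the setting (this is a generic sketch, not the paper's construction), the snippet below designs a nominal LQR for a hypothetical double-integrator model and checks whether the closed loop stays stable under a structured perturbation to the dynamics. All matrices, including the perturbation dA, are made-up stand-ins.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical nominal double-integrator model (not from the paper).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Nominal LQR: solve the continuous-time algebraic Riccati equation
# A'P + PA - P B R^{-1} B' P + Q = 0, then K = R^{-1} B' P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

def is_stable(A_cl):
    # Hurwitz test: all closed-loop eigenvalues in the open left half-plane.
    return np.max(np.linalg.eigvals(A_cl).real) < 0

# The nominal design stabilizes the nominal model ...
nominal_ok = is_stable(A - B @ K)

# ... but a large enough parametric uncertainty (here a hypothetical
# perturbation to the lower-left entry) can destroy closed-loop stability,
# which is the situation the paper's redesign procedure addresses.
dA = np.array([[0.0, 0.0],
               [2.0, 0.0]])
perturbed_ok = is_stable(A + dA - B @ K)
```

With this particular perturbation the closed-loop characteristic polynomial acquires a sign change in its constant term, so the nominal LQR fails for the perturbed plant.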
Page Range / eLocation ID: 3833 - 3838
Medium: X
Location: San Diego, CA, USA
Sponsoring Org: National Science Foundation
More Like this
  1. This paper addresses the problem of hybrid control for a class of switched uncertain systems. The switched system under consideration is subject to structured uncertain dynamics in a linear fractional transformation (LFT) form and time-varying input delays. A novel hybrid controller is proposed, which consists of three major components: the integral quadratic constraint (IQC) dynamics, the continuous dynamics, and the jump dynamics. The IQC dynamics are developed by leveraging methodologies from robust control theory and are utilized to address the effects of time-varying input delays. The continuous dynamics are structured by feeding back not only measurement outputs but also some of the system's internal signals. The jump dynamics enforce a jump (update/reset) at every switching time instant for the states of both the IQC dynamics and the continuous dynamics. Based on this, robust stability of the overall hybrid closed-loop system is established under the average dwell time framework with multiple Lyapunov functions. Moreover, the associated control synthesis conditions are fully characterized as linear matrix inequalities, which can be solved efficiently. An application example on the regulation of a nonlinear switched electronic circuit system demonstrates the effectiveness and usefulness of the proposed approach.
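The average dwell time framework with multiple Lyapunov functions mentioned above can be sketched numerically. The snippet below computes the classical dwell-time bound tau_a > ln(mu)/lambda for two hypothetical Hurwitz subsystem matrices; it is a textbook-style illustration, not the paper's LMI synthesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two hypothetical Hurwitz subsystem matrices, stand-ins for the
# closed-loop modes of a switched system.
A1 = np.array([[-1.0, 2.0],
               [0.0, -3.0]])
A2 = np.array([[-2.0, 0.0],
               [1.0, -1.0]])

# Multiple Lyapunov functions V_i(x) = x' P_i x with A_i' P_i + P_i A_i = -I.
P1 = solve_continuous_lyapunov(A1.T, -np.eye(2))
P2 = solve_continuous_lyapunov(A2.T, -np.eye(2))

def max_ratio(Pa, Pb):
    # Largest generalized eigenvalue of (Pa, Pb): smallest mu with Pa <= mu*Pb.
    return np.max(np.real(np.linalg.eigvals(np.linalg.solve(Pb, Pa))))

# mu bounds the jump in Lyapunov value at each switching instant.
mu = max(max_ratio(P1, P2), max_ratio(P2, P1))

# Common decay rate: dV_i/dt <= -x'x <= -(1/lambda_max(P_i)) V_i.
lam = min(1.0 / np.max(np.linalg.eigvalsh(P1)),
          1.0 / np.max(np.linalg.eigvalsh(P2)))

# Classical average dwell time bound: switching slower than tau_a
# preserves stability of the switched system.
tau_a = np.log(mu) / lam
```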
  2. This paper addresses the end-to-end sample complexity bound for learning the H2 optimal controller (the Linear Quadratic Gaussian (LQG) problem) with unknown dynamics, for potentially unstable Linear Time Invariant (LTI) systems. The robust LQG synthesis procedure is performed by considering bounded additive model uncertainty on the coprime factors of the plant. The closed-loop identification of the nominal model of the true plant is performed by constructing a Hankel-like matrix from a single time series of noisy, finite-length input-output data, using the ordinary least squares algorithm from Sarkar and Rakhlin (2019). Next, an H∞ bound on the estimated model error is provided and the robust controller is designed via convex optimization, much in the spirit of Mania et al. (2019) and Zheng et al. (2020b), while allowing for bounded additive uncertainty on the coprime factors of the model. Our conclusions are consistent with previous results on learning the LQG and LQR controllers.
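The identification step can be illustrated with a much-simplified sketch: ordinary least squares on a Hankel-like regressor of past inputs recovers the Markov parameters h_k = C A^{k-1} B of a hypothetical noiseless SISO plant from a single input-output trajectory. This is a toy stand-in, not the estimator of Sarkar and Rakhlin (2019), which handles noise and provides finite-sample guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stable SISO "true plant" (unknown to the learner).
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])

# Single trajectory of input-output data, zero initial state.
T, p = 400, 20
u = rng.standard_normal(T)
x = np.zeros(2)
y = np.zeros(T)
for t in range(T):
    y[t] = (C @ x).item()
    x = A @ x + B.flatten() * u[t]

# Ordinary least squares on a Hankel-like regressor of past inputs:
# y_t ≈ [u_{t-1} ... u_{t-p}] @ [h_1 ... h_p]'.
Phi = np.column_stack([u[p - k:T - k] for k in range(1, p + 1)])
h_hat, *_ = np.linalg.lstsq(Phi, y[p:], rcond=None)

# True Markov parameters h_k = C A^{k-1} B for comparison.
h_true = np.array([(C @ np.linalg.matrix_power(A, k - 1) @ B).item()
                   for k in range(1, p + 1)])
err = np.max(np.abs(h_hat - h_true))
```

In the noiseless case the fit is essentially exact up to the truncation of the impulse response at lag p; a Hankel matrix built from the estimated Markov parameters would then yield a nominal state-space model.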
  3. Achieving optimal steady-state performance in real time is an increasingly necessary requirement of many critical infrastructure systems. In pursuit of this goal, this paper builds a systematic design framework for feedback controllers for Linear Time-Invariant (LTI) systems that continuously track the optimal solution of some predefined optimization problem. We logically divide the proposed solution into three components. The first component estimates the system state from the output measurements. The second component uses the estimated state and computes a drift direction based on an optimization algorithm. The third component calculates an input to the LTI system that aims to drive the system toward the optimal steady state. We analyze the equilibrium characteristics of the closed-loop system and provide conditions for optimality and stability. Our analysis shows that the proposed solution guarantees optimal steady-state performance, even in the presence of constant disturbances. Furthermore, by leveraging recent results on the analysis of optimization algorithms using Integral Quadratic Constraints (IQCs), the proposed framework can translate input-output properties of our optimization component into sufficient conditions, based on linear matrix inequalities (LMIs), for global exponential asymptotic stability of the closed-loop system. We illustrate several resulting controller designs using a numerical example.
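The three-component loop can be collapsed into a minimal sketch when the full state is measured (so the estimator is trivial): a slow gradient controller is wrapped around a fast, stable LTI plant and steers its output toward the minimizer of a steady-state objective despite a constant disturbance. All matrices, the objective, and the step sizes are hypothetical choices, not the paper's designs.

```python
import numpy as np

# Hypothetical stable plant dx/dt = A x + B u + w with full-state output y = x.
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.eye(2)
C = np.eye(2)
w = np.array([0.3, -0.1])          # constant, unknown disturbance

# Steady-state objective f(y) = 0.5 * ||y - r||^2 and steady-state map y = G u + const.
r = np.array([1.0, -1.0])
G = -C @ np.linalg.solve(A, B)

# Closed loop: forward-Euler plant (fast) plus gradient controller (slow),
# u <- u - eta * G' * grad f(y), i.e. gradient descent "through" the plant.
dt, eta = 0.01, 0.05
x = np.zeros(2)
u = np.zeros(2)
for _ in range(80000):
    y = C @ x
    u = u - eta * dt * (G.T @ (y - r))
    x = x + dt * (A @ x + B @ u + w)

# At equilibrium G'(y - r) = 0; with G invertible this forces y = r,
# so the output tracks the optimizer even though w is never measured.
y_final = C @ x
```

The timescale separation (small eta relative to the plant's decay rates) is what makes this simple interconnection stable; the paper's IQC/LMI machinery gives certified conditions instead of this rule of thumb.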
  4. N. Matni, M. Morari (Eds.)
    In this paper, we propose a robust reinforcement learning method for a class of linear discrete-time systems to handle model mismatches that may be induced by the sim-to-real gap. Under the formulation of risk-sensitive linear quadratic Gaussian control, a dual-loop policy optimization algorithm is proposed to iteratively approximate the robust and optimal controller. The convergence and robustness of the dual-loop policy optimization algorithm are rigorously analyzed. It is shown that the dual-loop policy optimization algorithm uniformly converges to the optimal solution. In addition, by invoking the concept of small-disturbance input-to-state stability, it is guaranteed that the dual-loop policy optimization algorithm still converges to a neighborhood of the optimal solution when the algorithm is subject to a sufficiently small disturbance at each step. When the system matrices are unknown, a learning-based off-policy policy optimization algorithm is proposed for the same class of linear systems with additive Gaussian noise. Numerical simulations demonstrate the efficacy of the proposed algorithm.
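A model-based analogue of one policy-optimization loop is classical Kleinman-style policy iteration for the discrete-time LQR: evaluate the current policy via a Lyapunov equation, then improve it. The sketch below (hypothetical matrices; not the paper's dual-loop or off-policy algorithm) converges to the same solution as the algebraic Riccati equation.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Hypothetical open-loop-stable discrete-time system, so K = 0 is an
# admissible initial stabilizing policy.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.2]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))
for _ in range(20):
    # Policy evaluation: P = (A-BK)' P (A-BK) + Q + K'RK.
    Acl = A - B @ K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: K = (R + B'PB)^{-1} B'PA.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Compare against the direct Riccati solution.
P_opt = solve_discrete_are(A, B, Q, R)
gap = np.max(np.abs(P - P_opt))
```

Each iteration is a Newton step on the Riccati equation, so convergence is quadratic once a stabilizing policy is found; learning-based variants replace the Lyapunov solve with estimates from data.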
  5. This paper presents a unified approach to the problem of learning-based optimal control of connected human-driven and autonomous vehicles in mixed-traffic environments including both the freeway and ring road settings. The stabilizability of a string of connected vehicles including multiple autonomous vehicles (AVs) and heterogeneous human-driven vehicles (HDVs) is studied by a model reduction technique and the Popov-Belevitch-Hautus (PBH) test. For this problem setup, a linear quadratic regulator (LQR) problem is formulated and a solution based on adaptive dynamic programming (ADP) techniques is proposed without a priori knowledge of model parameters. To start the learning process, an initial stabilizing control law is obtained using the small-gain theorem for the ring road case. It is shown that the obtained stabilizing control law can achieve general Lp string stability under appropriate conditions. In addition, to minimize the impact of external disturbance, a linear quadratic zero-sum game is introduced and solved by an iterative learning-based algorithm. Finally, the simulation results verify the theoretical analysis and the proposed methods achieve desirable performance for control of a mixed-vehicular network.
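The PBH stabilizability test used above is easy to sketch: the pair (A, B) is stabilizable iff the rank of [lambda*I - A, B] equals the state dimension for every unstable eigenvalue lambda of A. The example below uses a hypothetical integrator-chain model (a crude stand-in for a reduced vehicle-string model), not the paper's platoon dynamics.

```python
import numpy as np

def pbh_stabilizable(A, B, discrete=True):
    # PBH rank test over the unstable eigenvalues of A:
    # |lambda| >= 1 (discrete time) or Re(lambda) >= 0 (continuous time).
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        unstable = abs(lam) >= 1 if discrete else lam.real >= 0
        if unstable:
            M = np.hstack([lam * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M) < n:
                return False
    return True

# Hypothetical discrete-time integrator chain with actuation on the last
# state: every unstable mode is reachable through the chain.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
B = np.array([[0.0], [0.0], [1.0]])
ok = pbh_stabilizable(A, B)

# Moving the input channel so it is decoupled from an unstable mode
# makes the rank test fail.
B_bad = np.array([[0.0], [1.0], [0.0]])
bad = pbh_stabilizable(A, B_bad)
```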