Title: Guaranteed Stability Margins for Decentralized Linear Quadratic Regulators
It is well-known that linear quadratic regulators (LQR) enjoy guaranteed stability margins, whereas linear quadratic Gaussian regulators (LQG) do not. In this letter, we consider systems and compensators defined over directed acyclic graphs. In particular, there are multiple decision-makers, each with access to a different part of the global state. In this setting, the optimal LQR compensator is dynamic, similar to classical LQG. We show that when sub-controller input costs are decoupled (but there is possible coupling between sub-controller state costs), the decentralized LQR compensator enjoys similar guaranteed stability margins to classical LQR. However, these guarantees disappear when cost coupling is introduced.
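For reference, the classical centralized guarantee alluded to above can be checked numerically. The sketch below is a minimal illustration with a hypothetical two-state plant, assuming SciPy's Riccati solver; it demonstrates the well-known [1/2, ∞) gain margin of LQR with a diagonal input cost and does not reproduce the paper's decentralized, graph-structured setting.

```python
# Minimal sketch: classical (centralized) LQR and its guaranteed gain margin.
# The plant below is hypothetical; the paper's decentralized setting with
# multiple decision-makers is NOT reproduced here.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])      # open-loop unstable example
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                    # state cost
R = np.array([[1.0]])            # diagonal input cost (needed for the margin result)

P = solve_continuous_are(A, B, Q, R)      # stabilizing Riccati solution
K = np.linalg.solve(R, B.T @ P)           # optimal gain, u = -K x

# Classical guarantee: any loop-gain scaling in [1/2, inf) preserves stability.
for k in (0.5, 1.0, 2.0, 10.0):
    stable = bool(np.all(np.linalg.eigvals(A - k * B @ K).real < 0))
    print(f"loop gain x {k:>4}: closed loop stable = {stable}")
```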
Award ID(s):
2136317
PAR ID:
10418540
Author(s) / Creator(s):
;
Date Published:
Journal Name:
IEEE Control Systems Letters
ISSN:
2475-1456
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abate, A; Cannon, M; Margellos, K; Papachristodoulou, A (Ed.)
    We investigate the problem of learning linear quadratic regulators (LQR) in a multi-task, heterogeneous, and model-free setting. We characterize the stability and personalization guarantees of a policy gradient-based (PG) model-agnostic meta-learning (MAML) (Finn et al., 2017) approach for the LQR problem under different task-heterogeneity settings. We show that our MAML-LQR algorithm produces a stabilizing controller close to each task-specific optimal controller up to a task-heterogeneity bias in both model-based and model-free learning scenarios. Moreover, in the model-based setting, we show that such a controller is achieved with a linear convergence rate, which improves upon sub-linear rates from existing work. Our theoretical guarantees demonstrate that the learned controller can efficiently adapt to unseen LQR tasks. 
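    For orientation, MAML's meta-objective specialized to per-task LQR costs $C_i$ can be written (in our notation, which need not match the authors') as
        \min_{K} \; \sum_{i=1}^{M} C_i\!\big(K - \eta\,\nabla C_i(K)\big),
    where $\eta$ is the inner adaptation step size; a model-free variant typically replaces $\nabla C_i$ with zeroth-order policy-gradient estimates.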
  2. This paper addresses the end-to-end sample complexity bound for learning the H2 optimal controller (the Linear Quadratic Gaussian (LQG) problem) with unknown dynamics, for potentially unstable Linear Time Invariant (LTI) systems. The robust LQG synthesis procedure is performed by considering bounded additive model uncertainty on the coprime factors of the plant. The closed-loop identification of the nominal model of the true plant is performed by constructing a Hankel-like matrix from a single time series of noisy, finite-length input-output data, using the ordinary least squares algorithm from Sarkar and Rakhlin (2019). Next, an H∞ bound on the estimated model error is provided, and the robust controller is designed via convex optimization, much in the spirit of Mania et al. (2019) and Zheng et al. (2020b), while allowing for bounded additive uncertainty on the coprime factors of the model. Our conclusions are consistent with previous results on learning the LQG and LQR controllers.
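    The identification step described above follows a generic pattern: regress outputs on a window of past inputs to estimate Markov parameters by ordinary least squares, then stack them into a block-Hankel matrix from which a realization can be extracted. The sketch below illustrates that pattern only; shapes, names, and the window length are assumptions, not the exact procedure of Sarkar and Rakhlin (2019).

```python
# Generic "OLS Markov parameters + block-Hankel" sketch (assumed notation).
import numpy as np

def markov_ols(u, y, p):
    """Estimate the first p Markov parameters [D, CB, CAB, ...] by least squares.
    u: (T, m) input sequence, y: (T, q) output sequence."""
    T, m = u.shape
    q = y.shape[1]
    # Regressor at time t: [u_t, u_{t-1}, ..., u_{t-p+1}] (most recent first).
    Z = np.stack([u[t - p + 1:t + 1][::-1].ravel() for t in range(p - 1, T)])
    G, *_ = np.linalg.lstsq(Z, y[p - 1:], rcond=None)   # shape (p*m, q)
    return G.T.reshape(q, p, m)                         # lag-indexed blocks

def block_hankel(G, k1, k2):
    """Block-Hankel matrix of the strictly causal parameters G[:, 1:];
    requires k1 + k2 <= G.shape[1]."""
    return np.block([[G[:, i + j + 1] for j in range(k2)] for i in range(k1)])
```

    An SVD of this block-Hankel matrix (a Ho-Kalman-style step) would then typically yield a nominal state-space model for the robust synthesis stage.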
  3. Closed-loop stability of uncertain linear systems is studied under the state feedback realized by a linear quadratic regulator (LQR). Sufficient conditions are presented that ensure closed-loop stability in the presence of uncertainty, initially for the case of a non-robust LQR designed for a nominal model that does not reflect the system uncertainty. Since these conditions are usually violated for large uncertainty, a procedure is offered to redesign such a non-robust LQR into a robust one that ensures closed-loop stability under a predefined level of uncertainty. The analysis largely relies on the concept of inverse optimal control to construct suitable performance measures for uncertain linear systems, which are non-quadratic in structure but yield optimal controls in the form of LQR. The relationship between robust LQR and zero-sum linear quadratic dynamic games is established.
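    To make the game-theoretic connection concrete: for dynamics $\dot{x} = Ax + Bu + Dw$ with adversarial disturbance $w$ and attenuation level $\gamma$ (our notation, not necessarily the paper's), the soft-constrained zero-sum LQ game is solved by the stabilizing solution $P \succeq 0$ of the game algebraic Riccati equation
        A^\top P + P A + Q - P\big(B R^{-1} B^\top - \gamma^{-2} D D^\top\big) P = 0,
    with saddle-point policies $u^\star = -R^{-1} B^\top P x$ and $w^\star = \gamma^{-2} D^\top P x$; as $\gamma \to \infty$ this recovers the nominal LQR.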
  4. Common reinforcement learning methods seek optimal controllers for unknown dynamical systems by searching in the "policy" space directly. A recent line of research, starting with [1], aims to provide theoretical guarantees for such direct policy-update methods by exploring their performance in classical control settings, such as the infinite-horizon linear quadratic regulator (LQR) problem. A key property these analyses rely on is that the LQR cost function satisfies the "gradient dominance" property with respect to the policy parameters. Gradient dominance helps guarantee that the optimal controller can be found by running gradient-based algorithms on the LQR cost. The gradient dominance property has so far been verified on a case-by-case basis for several control problems, including continuous/discrete-time LQR, LQR with a decentralized controller, and H2/H∞ robust control. In this paper, we make a connection between this line of work and classical convex parameterizations based on linear matrix inequalities (LMIs). Using this, we propose a unified framework for showing that gradient dominance indeed holds for a broad class of control problems, such as continuous- and discrete-time LQR, minimizing the L2 gain, and problems using system-level parameterization. Our unified framework provides insights into the landscape of the cost function as a function of the policy, and enables extending convergence results for policy gradient descent to a much larger class of problems.
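    In one common form (stated here as background, not verbatim from the paper), gradient dominance for the LQR cost $C(K)$ over the set of stabilizing gains reads
        C(K) - C(K^\star) \;\le\; \lambda \,\lVert \nabla C(K) \rVert_F^2
    for a constant $\lambda > 0$ depending on the problem data, so every stationary point is globally optimal and gradient descent on $K$ converges globally (at a linear rate under suitable step sizes).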
  5. Microelectromechanical (MEMS) gyroscopes are small devices used in industries such as automotive and robotics due to their small size and low cost. MEMS gyroscopes constantly encounter external disturbances, which introduce mechanical and electromechanical nonlinearity into these systems. In this paper, Koopman theory is applied to map the nonlinear dynamic model of the MEMS gyroscope to a linear dynamics model. Dynamic mode decomposition (DMD) is used to obtain Koopman eigenfunctions that linearize the system. Then, a linear quadratic regulator (LQR) controller is used to control the MEMS gyroscope. The simulation results demonstrate the high tracking performance of the proposed controller.
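    A generic version of this pipeline (fit a linear model from snapshot data via DMD with control, then design a discrete-time LQR on the identified model) is sketched below under assumed data matrices X, Xp, U; it is an illustration of the approach, not the paper's gyroscope model or tuning.

```python
# Generic DMDc + discrete-time LQR sketch; snapshot data and weights are
# placeholders, not taken from the paper.
import numpy as np
from scipy.linalg import solve_discrete_are

def dmdc(X, Xp, U):
    """Fit x_{k+1} ~= A x_k + B u_k from snapshots.
    X, Xp: (n, N) states at steps k and k+1; U: (m, N) inputs."""
    Omega = np.vstack([X, U])
    G = Xp @ np.linalg.pinv(Omega)          # least-squares [A  B]
    n = X.shape[0]
    return G[:, :n], G[:, n:]

def dlqr_gain(A, B, Q, R):
    """Discrete-time LQR gain for u_k = -K x_k."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Example use (with identified model A_hat, B_hat and identity weights):
#   A_hat, B_hat = dmdc(X, Xp, U)
#   K = dlqr_gain(A_hat, B_hat, np.eye(A_hat.shape[0]), np.eye(B_hat.shape[1]))
```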