Title: Discrete-Time Distributed Optimization for Linear Uncertain Multi-Agent Systems
The distributed optimization algorithm proposed by J. Wang and N. Elia in 2010 has been shown to achieve linear convergence for multi-agent systems with single-integrator dynamics. This paper extends their result, including the linear convergence rate, to a more complex scenario in which the agents have heterogeneous multi-input multi-output linear dynamics and are subject to external disturbances and parametric uncertainties. Disturbances are handled via an internal-model-based control design, and the interaction among the tracking error dynamics, average dynamics, and dispersion dynamics is analyzed through a composite Lyapunov function and the cyclic small-gain theorem. The key requirement is a sufficiently small step size for convergence of the proposed algorithm, a condition analogous to the time-scale separation condition in singular perturbation theory.
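As a point of reference for the algorithm being extended, the following is a minimal sketch of a forward-Euler, discrete-time version of a Wang-Elia-type distributed optimization update for scalar single-integrator agents. The ring-graph Laplacian, quadratic local costs, and step size below are illustrative assumptions, not the heterogeneous MIMO setting with disturbances analyzed in the paper.

```python
import numpy as np

# Minimal sketch: forward-Euler discretization of a Wang-Elia-type
# distributed optimization update for n scalar single-integrator agents.
# The ring Laplacian, quadratic local costs, and step size eps are
# illustrative assumptions only.

n = 4
L = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)  # ring-graph Laplacian
a = np.array([1.0, 2.0, 3.0, 4.0])   # local costs f_i(x) = 0.5 * (x - a_i)^2
grad = lambda x: x - a               # stacked local gradients
eps = 0.05                           # small step size, as required for convergence

x = np.zeros(n)   # decision estimates
z = np.zeros(n)   # integral (dual-like) states that enforce consensus

for k in range(2000):
    x_next = x + eps * (-grad(x) - L @ x - L @ z)
    z_next = z + eps * (L @ x)
    x, z = x_next, z_next

print(x, a.mean())  # all estimates approach the minimizer of sum_i f_i, here mean(a)
```

Consistent with the abstract, convergence of this sketch also relies on the step size eps being small enough relative to the cost curvature and the spectrum of the graph Laplacian.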
Award ID(s):
2210320
PAR ID:
10511912
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
2023 62nd IEEE Conference on Decision and Control
ISBN:
979-8-3503-0124-3
Page Range / eLocation ID:
7439 to 7444
Subject(s) / Keyword(s):
Distributed Optimization Multi-agent systems
Format(s):
Medium: X
Location:
Singapore, Singapore
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper proposes a novel robust reinforcement learning framework for discrete-time linear systems with model mismatch that may arise from the sim-to-real gap. A key strategy is to invoke advanced techniques from control theory. Using the formulation of the classical risk-sensitive linear quadratic Gaussian control, a dual-loop policy optimization algorithm is proposed to generate a robust optimal controller. The dual-loop policy optimization algorithm is shown to be globally and uniformly convergent, and robust against disturbances during the learning process. This robustness property is called small-disturbance input-to-state stability and guarantees that the proposed policy optimization algorithm converges to a small neighborhood of the optimal controller as long as the disturbance at each learning step is relatively small. In addition, when the system dynamics is unknown, a novel model-free off-policy policy optimization algorithm is proposed. Finally, numerical examples are provided to illustrate the proposed algorithm. 
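To make the dual-loop structure concrete, here is a minimal sketch of the kind of inner-loop computation such a scheme relies on: exact policy iteration for a discrete-time LQR subproblem. The system matrices, weights, and initial stabilizing gain are illustrative assumptions, and the risk-sensitive outer loop and the model-free variant described in the abstract are omitted.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Minimal sketch of one building block of a dual-loop scheme: exact policy
# iteration for a discrete-time LQR subproblem (an inner loop).  The system
# matrices, weights, and initial stabilizing gain are illustrative
# assumptions; the robust/risk-sensitive outer loop is not reproduced here.

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[1.0, 2.0]])   # assumed initial stabilizing gain

for _ in range(30):
    Ak = A - B @ K
    # Policy evaluation: P solves P = Ak' P Ak + Q + K' R K
    P = solve_discrete_lyapunov(Ak.T, Q + K.T @ R @ K)
    # Policy improvement
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

print(K)  # converges to the optimal LQR gain for (A, B, Q, R)
```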
  2. We consider a class of nonsmooth convex composite optimization problems, where the objective function is given by the sum of a continuously differentiable convex term and a potentially non-differentiable convex regularizer. In [1], the authors introduced the proximal augmented Lagrangian method and derived the resulting continuous-time primal-dual dynamics that converge to the optimal solution. In this paper, we extend these dynamics from continuous to discrete time via the forward Euler discretization. We prove explicit bounds on the exponential convergence rates of our proposed algorithm with a sufficiently small step size. Since a larger step size can improve the convergence speed, we further develop a linear matrix inequality (LMI) condition which can be numerically solved to provide rate certificates with general step size choices. In addition, we prove that a large range of step size values can guarantee exponential convergence. We close the paper by demonstrating the performance of the proposed algorithm via computational experiments. 
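As a rough illustration of the forward-Euler discretization described above, the following sketch applies it to augmented-Lagrangian primal-dual gradient dynamics for a smooth, equality-constrained quadratic program. The proximal (Moreau-envelope) handling of a nonsmooth regularizer used in the paper is not reproduced here, and the problem data, augmentation weight, and step size are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: forward-Euler discretization of augmented-Lagrangian
# primal-dual gradient dynamics for the smooth special case
#     minimize 0.5 x' H x + q' x   subject to  A x = b.
# H, q, A, b, rho, and the step size eta are illustrative assumptions.

H = np.array([[3.0, 0.5], [0.5, 2.0]])
q = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
rho, eta = 1.0, 0.1          # augmentation weight and (small) step size

x = np.zeros(2)
y = np.zeros(1)
for k in range(2000):
    grad_x = H @ x + q + A.T @ y + rho * A.T @ (A @ x - b)
    x = x - eta * grad_x          # primal descent step
    y = y + eta * (A @ x - b)     # dual ascent step

print(x, A @ x - b)  # x approaches the constrained minimizer; the residual goes to ~0
```

As in the abstract, exponential convergence of such an iteration hinges on the step size being sufficiently small; LMI-based certificates can extend the admissible range.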
  3. This article presents an extended state observer for a vehicle modeled as a rigid body in three-dimensional translational and rotational motion. The extended state observer is applicable to a multi-rotor aerial vehicle with a fixed plane of rotors, modeled as an under-actuated system on the state space TSE(3), the tangent bundle of the six-dimensional Lie group SE(3). This state-space representation globally represents rigid body motions without singularities. The extended state observer is designed to estimate the resultant external disturbance force and disturbance torque acting on the vehicle. It guarantees stable convergence of the disturbance estimation errors in finite time when the disturbances are constant, and finite-time convergence to a bounded neighborhood of zero error for time-varying disturbances. The design is based on a Hölder-continuous, fast finite-time-stable differentiator, similar to the super-twisting algorithm, to obtain fast convergence. Numerical simulations are conducted to validate the proposed extended state observer, and it is compared with existing designs to show its advantages. A set of experimental results implementing disturbance rejection control using feedback of disturbance estimates from this extended state observer is also presented. 
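The observer in this entry is geometric and uses a finite-time-stable differentiator, which is beyond a short snippet. As a rough illustration of the underlying extended-state-observer idea only, here is a minimal linear ESO for a scalar double integrator, where the lumped disturbance is augmented as an extra state. The gains, sample time, and constant disturbance are illustrative assumptions and do not correspond to the TSE(3) design described above.

```python
import numpy as np

# Minimal sketch of a linear extended state observer (ESO) for a scalar
# double integrator with a lumped disturbance d, treated as an extra state
# with d_dot ≈ 0.  This only illustrates the basic ESO idea; the paper's
# geometric, finite-time observer on TSE(3) is not reproduced here.

dt, w0 = 1e-3, 10.0                    # sample time and observer bandwidth (assumed)
l1, l2, l3 = 3 * w0, 3 * w0**2, w0**3  # gains placing observer poles at -w0

x1, x2 = 0.0, 0.0                      # true position and velocity
z1, z2, z3 = 0.0, 0.0, 0.0             # estimates of x1, x2, and disturbance d
d = 0.5                                # unknown constant disturbance (assumed)

for k in range(int(5.0 / dt)):
    u = -2.0 * z1 - 3.0 * z2 - z3      # feedback using estimates; z3 cancels the disturbance
    # true plant (forward Euler)
    x1, x2 = x1 + dt * x2, x2 + dt * (u + d)
    # observer, driven by the measured position x1
    e = x1 - z1
    z1, z2, z3 = (z1 + dt * (z2 + l1 * e),
                  z2 + dt * (u + z3 + l2 * e),
                  z3 + dt * (l3 * e))

print(z3, d)  # the extended state z3 converges to the true disturbance d
```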
  4. N. Matni, M. Morari (Ed.)
    In this paper, we propose a robust reinforcement learning method for a class of linear discrete-time systems to handle model mismatches that may be induced by the sim-to-real gap. Under the formulation of risk-sensitive linear quadratic Gaussian control, a dual-loop policy optimization algorithm is proposed to iteratively approximate the robust and optimal controller. The convergence and robustness of the dual-loop policy optimization algorithm are rigorously analyzed. It is shown that the dual-loop policy optimization algorithm uniformly converges to the optimal solution. In addition, by invoking the concept of small-disturbance input-to-state stability, it is guaranteed that the dual-loop policy optimization algorithm still converges to a neighborhood of the optimal solution when the algorithm is subject to a sufficiently small disturbance at each step. When the system matrices are unknown, a learning-based off-policy policy optimization algorithm is proposed for the same class of linear systems with additive Gaussian noise. Numerical simulations are provided to demonstrate the efficacy of the proposed algorithm. 