
Title: Scalable Robust Adaptive Control from the System Level Perspective
We will present a new general framework for robust and adaptive control that allows for distributed and scalable learning and control of large systems of interconnected linear subsystems. The control method is demonstrated for a linear time-invariant system with bounded parameter uncertainties, disturbances, and noise. The presented scheme continuously collects measurements to reduce the uncertainty about the system parameters and adapts dynamic robust controllers online in a stable and performance-improving way. A key enabler for our approach is choosing a time-varying dynamic controller implementation, inspired by recent work on System Level Synthesis [1]. We leverage a new robustness result for this implementation to propose a general robust adaptive control algorithm. In particular, the algorithm allows us to impose communication and delay constraints on the controller implementation and is formulated as a sequence of robust optimization problems that can be solved in a distributed manner. The proposed control methodology performs particularly well when the interconnection between subsystems is sparse and the dynamics of local regions of subsystems depend only on a small number of parameters. As we will show on an exemplary five-dimensional chain system, the algorithm can utilize system structure to efficiently learn and control the entire system while respecting communication and implementation constraints. Moreover, although current theoretical results require the assumption of small initial uncertainties to guarantee robustness, we will present simulations that show good closed-loop performance even in the case of large uncertainties, which suggests that this assumption is not critical for the presented technique; future work will focus on providing less conservative guarantees.
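Below is a minimal numerical sketch of the adapt-while-controlling loop described in the abstract: collect closed-loop data, refine the parameter estimates by least squares, and re-synthesize the controller. It is not the authors' method; in particular, a certainty-equivalence LQR gain stands in for the robust System Level Synthesis program, the five-state chain system is a made-up example, and all function names and numerical values are illustrative assumptions.

```python
import numpy as np

def estimate_parameters(X, U, Xnext):
    """Least-squares estimate of (A, B) from data satisfying Xnext ~ A X + B U."""
    Z = np.vstack([X, U])                       # stacked regressors, shape (n+m, T)
    Theta = Xnext @ np.linalg.pinv(Z)           # [A_hat, B_hat]
    n = X.shape[0]
    return Theta[:, :n], Theta[:, n:]

def synthesize_controller(A, B, Q, R, iters=200):
    """Placeholder for the robust synthesis step: plain LQR via Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# True (unknown to the controller) sparse chain system with five subsystems.
rng = np.random.default_rng(0)
n, m = 5, 5
A_true = np.eye(n) + 0.3 * np.diag(np.ones(n - 1), 1)   # nearest-neighbour coupling
B_true = np.eye(m)
Q, R = np.eye(n), np.eye(m)

x = rng.normal(size=n)
K = np.zeros((m, n))                            # no prior knowledge: start uncontrolled
X, U, Xnext = [], [], []
for episode in range(5):
    for _ in range(40):                         # collect closed-loop data
        u = -K @ x + 0.1 * rng.normal(size=m)   # small excitation noise
        x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=n)
        X.append(x)
        U.append(u)
        Xnext.append(x_next)
        x = x_next
    # Re-estimate the model from all data so far and update the controller.
    A_hat, B_hat = estimate_parameters(np.array(X).T, np.array(U).T, np.array(Xnext).T)
    K = synthesize_controller(A_hat, B_hat, Q, R)
    print(f"episode {episode}: ||x|| = {np.linalg.norm(x):.3f}")
```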
Authors:
Award ID(s):
1735003
Publication Date:
NSF-PAR ID:
10155684
Journal Name:
2019 American Control Conference (ACC)
Page Range or eLocation-ID:
3683 to 3688
Sponsoring Org:
National Science Foundation
More Like this
  1. Liu, Tengfei; Ou, Yan (Eds.)
    We design a regulation-triggered adaptive controller for robot manipulators to efficiently estimate unknown parameters and to achieve asymptotic stability in the presence of coupled uncertainties. Robot manipulators are widely used in telemanipulation systems, where they are subject to model and environmental uncertainties. Using conventional control algorithms on such systems can lead not only to poor control performance but also to high computational cost and catastrophic instability. Therefore, system uncertainties need to be estimated through a computationally efficient adaptive control law. We focus on robot manipulators as an example of a highly nonlinear system. As a case study, a 2-DOF manipulator subject to four parametric uncertainties is investigated. First, the dynamic equations of the manipulator are derived, and the corresponding regressor matrix is constructed for the unknown parameters. For a general nonlinear system, a theorem is presented to guarantee the asymptotic stability of the system and the convergence of the parameter estimates. Finally, simulation results are discussed for a two-link manipulator, and the performance of the proposed scheme is thoroughly evaluated. (A minimal sketch of a generic regressor-based adaptive law appears after this list.)
  2. We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. We optimize over a class of nonlinear feedback policies inspired by certainty-equivalent "estimate-and-cancel" control laws pioneered in classical adaptive control to achieve significant performance improvements in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive MPC, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints, even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety through persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods. (A minimal estimate-and-cancel sketch appears after this list.)
  3. This paper presents a novel decentralized control strategy for a class of uncertain nonlinear large-scale systems with mismatched interconnections. First, it is shown that the decentralized controller for the overall system can be represented by an array of optimal control policies of auxiliary subsystems. Then, within the framework of adaptive dynamic programming, a simultaneous policy iteration (SPI) algorithm is developed to solve the Hamilton–Jacobi–Bellman equations associated with the optimal control policies of the auxiliary subsystems. The convergence of the SPI algorithm is guaranteed by an equivalence relationship. To implement the SPI algorithm, actor and critic neural networks are applied to approximate the optimal control policies and the optimal value functions, respectively. Meanwhile, both the least-squares method and the Monte Carlo integration technique are employed to compute the unknown weight parameters. Furthermore, by using Lyapunov’s direct method, the overall system with the obtained decentralized controller is proved to be asymptotically stable. Finally, the effectiveness of the proposed decentralized control scheme is illustrated via simulations of nonlinear plants and unstable power systems. (A minimal policy-iteration sketch with a least-squares critic appears after this list.)
  4. We consider a large-scale service system where incoming tasks have to be instantaneously dispatched to one out of many parallel server pools. The user-perceived performance degrades with the number of concurrent tasks, and the dispatcher aims at maximizing the overall quality of service by balancing the load through a simple threshold policy. We demonstrate that such a policy is optimal on the fluid and diffusion scales, while only involving a small communication overhead, which is crucial for large-scale deployments. In order to set the threshold optimally, it is important, however, to learn the load of the system, which may be unknown. For that purpose, we design a control rule for tuning the threshold in an online manner. We derive conditions that guarantee that this adaptive threshold settles at the optimal value, along with estimates for the time until this happens. In addition, we provide numerical experiments that support the theoretical results and further indicate that our policy copes effectively with time-varying demand patterns. Summary of Contribution: Data centers and cloud computing platforms are the digital factories of the world, and managing resources and workloads in these systems involves operations research challenges of an unprecedented scale. Due to the massive size, complex dynamics, and wide range of time scales, the design and implementation of optimal resource-allocation strategies is prohibitively demanding from a computation and communication perspective. These resource-allocation strategies are essential for certain interactive applications, for which the available computing resources need to be distributed optimally among users in order to provide the best overall experienced performance. This is the subject of the present article, which considers the problem of distributing tasks among the various server pools of a large-scale service system, with the objective of optimizing the overall quality of service provided to users. A solution to this load-balancing problem cannot rely on maintaining complete state information at the gateway of the system, since this is computationally infeasible due to the magnitude and complexity of modern data centers and cloud computing platforms. Therefore, we examine a computationally light load-balancing algorithm that is nevertheless asymptotically optimal in a regime where the size of the system approaches infinity. The analysis is based on a Markovian stochastic model, which is studied through fluid and diffusion limits in the aforementioned large-scale regime. The article analyzes the load-balancing algorithm theoretically and provides numerical experiments that support and extend the theoretical results. (A minimal threshold-dispatching sketch appears after this list.)
  5. This paper presents a hierarchical nonlinear control algorithm for the real-time planning and control of cooperative locomotion of legged robots that collaboratively carry objects. An innovative network of reduced-order models subject to holonomic constraints, referred to as interconnected linear inverted pendulum (LIP) dynamics, is presented to study cooperative locomotion. The higher level of the proposed algorithm employs a supervisory controller, based on event-based model predictive control (MPC), to effectively compute the optimal reduced-order trajectories for the interconnected LIP dynamics. The lower level of the proposed algorithm employs distributed nonlinear controllers to reduce the gap between reduced- and full-order complex models of cooperative locomotion. In particular, the distributed controllers are developed based on quadratic programming (QP) and virtual constraints to force the full-order dynamics of each agent to asymptotically track the reduced-order trajectories while maintaining feasible contact forces at the leg ends. The paper numerically investigates the effectiveness of the proposed control algorithm via full-order simulations of a team of collaborative quadrupedal robots, each with a total of 22 degrees of freedom. The paper finally investigates the robustness of the proposed control algorithm against uncertainties in the payload mass and changes in the ground height profile. Numerical studies show that the cooperative agents can transport unknown payloads whose masses are up to 57%, 97%, and 137% of a single agent's mass with a team of two, three, and four legged robots, respectively. (A minimal single-LIP receding-horizon sketch appears after this list.)
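A sketch for item 1: the snippet below illustrates the generic "derive a regressor matrix, then update the parameter estimates online" structure on a one-link arm, using a standard gradient, certainty-equivalence adaptive law rather than the paper's regulation-triggered scheme; the plant, gains, and initial guesses are illustrative assumptions.

```python
import numpy as np

# One-link arm: I*qdd + d*qd + m*g*l*cos(q) = tau, unknown theta = [I, d, m*g*l].
theta_true = np.array([0.5, 0.2, 1.0])
q_ref = 1.0                                     # constant regulation target (rad)

dt, steps = 1e-3, 10_000
q, qd = 0.0, 0.0
theta_hat = np.array([0.2, 0.05, 0.5])          # poor initial parameter guess
Gamma = np.diag([2.0, 2.0, 2.0])                # adaptation gains (assumed)
lam, kd = 5.0, 10.0                             # composite-error and feedback gains

for _ in range(steps):
    e, ed = q - q_ref, qd                       # position / velocity error
    s = ed + lam * e                            # composite tracking error
    qd_r, qdd_r = -lam * e, -lam * ed           # reference velocity / acceleration
    Y = np.array([qdd_r, qd_r, np.cos(q)])      # regressor: Y @ theta = required torque
    tau = Y @ theta_hat - kd * s                # certainty-equivalence + feedback term
    theta_hat = theta_hat - dt * (Gamma @ Y) * s  # gradient parameter update law
    # Integrate the true plant with explicit Euler.
    qdd = (tau - theta_true[1] * qd - theta_true[2] * np.cos(q)) / theta_true[0]
    q, qd = q + dt * qd, qd + dt * qdd

# The regulation error should converge; parameters need not converge without persistent excitation.
print(f"regulation error: {q - q_ref:.4f}, theta_hat = {np.round(theta_hat, 3)}")
```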
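A sketch for item 2: a bare-bones "estimate-and-cancel" loop for a nominally linear system with an additive nonlinearity g(x). The unknown term is fitted by least squares on hand-picked features and then cancelled through the input, subject to a simple input limit; the paper's robust MPC, constraint tightening, and high-probability safety certificates are all omitted, and the dynamics, features, and gains are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
g_true = lambda x: np.array([0.0, 0.05 * np.sin(3 * x[0])])   # unknown nonlinearity
K = np.array([[4.0, 4.0]])                      # stabilizing nominal gain for (A, B)

def features(x):
    """Simple basis used to approximate g(x); chosen by hand for this toy example."""
    return np.array([np.sin(3 * x[0]), np.cos(3 * x[0]), x[0], 1.0])

# Phase 1: run the nominal controller and fit g_hat to the one-step residuals.
Feat, Res = [], []
x = np.array([1.0, 0.0])
for _ in range(200):
    u = -K @ x + 0.05 * rng.normal(size=1)      # mild excitation
    x_next = A @ x + B @ u + g_true(x)
    Res.append(x_next - A @ x - B @ u)          # residual equals g(x) here (no noise)
    Feat.append(features(x))
    x = x_next
W, *_ = np.linalg.lstsq(np.array(Feat), np.array(Res), rcond=None)

# Phase 2: certainty-equivalent "estimate-and-cancel" control with an input limit.
x = np.array([1.0, 0.0])
for _ in range(100):
    g_hat = features(x) @ W                     # predicted nonlinearity at the current state
    u = -K @ x - np.linalg.pinv(B) @ g_hat      # cancel the actuated part of g_hat
    u = np.clip(u, -1.0, 1.0)                   # respect a simple input constraint
    x = A @ x + B @ u + g_true(x)
print("final state:", np.round(x, 4))
```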
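A sketch for item 3: policy iteration with a least-squares "critic", reduced to a linear-quadratic problem so that a quadratic value function stands in for the critic network and a greedy gain update stands in for the actor network. The improvement step uses the model (A, B); this is an illustrative reduction, not the paper's simultaneous policy iteration algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K = np.zeros((1, 2))                            # initial (stabilizing) policy

for it in range(8):
    # Critic: fit V(x) = x' P x by least squares on the Bellman equation
    #   x' P x - x_next' P x_next = x' (Q + K' R K) x,  where x_next = (A - B K) x.
    Acl = A - B @ K
    F, c = [], []
    for _ in range(50):
        x = rng.normal(size=2)                  # sampled evaluation states
        xn = Acl @ x
        F.append(np.kron(x, x) - np.kron(xn, xn))   # features multiplying vec(P)
        c.append(x @ (Q + K.T @ R @ K) @ x)         # stage cost under the current policy
    p, *_ = np.linalg.lstsq(np.array(F), np.array(c), rcond=None)
    P = 0.5 * (p.reshape(2, 2) + p.reshape(2, 2).T)  # symmetrize the critic weights
    # Actor: greedy policy improvement with respect to the fitted value function.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    print(f"iteration {it}: K = {np.round(K, 4)}")
```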
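A sketch for item 4: threshold-based dispatching to parallel (infinite-server) pools with an online threshold adjustment. The specific adaptation rule below, which nudges the threshold according to how often arrivals find every pool at or above it, is an illustrative assumption rather than the control rule analysed in the paper, and all rates are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50                          # number of server pools
lam = 0.7 * N                   # total arrival rate (tasks per unit time)
mu = 1.0                        # per-task service rate (infinite-server pools)
tau = 1                         # dispatch threshold, adapted online
occupancy = np.zeros(N, dtype=int)

dt, horizon, window = 0.01, 200.0, 5.0
arrivals_window = blocked_window = 0
next_adapt = window
for step in range(int(horizon / dt)):
    # Departures: each task in service finishes with probability mu*dt in this slot.
    occupancy -= rng.binomial(occupancy, mu * dt)
    # Arrivals in this slot are dispatched by the threshold rule.
    for _ in range(rng.poisson(lam * dt)):
        arrivals_window += 1
        below = np.flatnonzero(occupancy < tau)
        if below.size > 0:
            pool = rng.choice(below)            # any pool below the threshold will do
        else:
            blocked_window += 1
            pool = rng.integers(N)              # fall back to a uniformly random pool
        occupancy[pool] += 1
    # Adapt the threshold on a slower timescale (illustrative rule, not the paper's).
    if (step + 1) * dt >= next_adapt:
        frac_blocked = blocked_window / max(arrivals_window, 1)
        if frac_blocked > 0.05:
            tau += 1                            # threshold too low for the offered load
        elif frac_blocked == 0.0 and tau > 1:
            tau -= 1                            # threshold can safely be tightened
        arrivals_window = blocked_window = 0
        next_adapt += window
print(f"settled threshold: {tau}, mean pool occupancy: {occupancy.mean():.2f}")
```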
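A sketch for item 5: a receding-horizon controller for a single linear inverted pendulum (LIP), the reduced-order model referenced above, choosing the centre of pressure so that the centre of mass tracks a constant-velocity reference. The finite-horizon problem is solved in condensed, unconstrained form as a least-squares problem; the event-based, interconnected, and constrained parts of the paper's hierarchy are omitted, and all numbers are illustrative assumptions.

```python
import numpy as np

g, h, T = 9.81, 0.5, 0.05                 # gravity, CoM height, sampling period
w = np.sqrt(g / h)
# Exact discretization of xdd = w^2 (x - p) with piecewise-constant CoP input p.
Ad = np.array([[np.cosh(w * T), np.sinh(w * T) / w],
               [w * np.sinh(w * T), np.cosh(w * T)]])
Bd = np.array([[1 - np.cosh(w * T)], [-w * np.sinh(w * T)]])

N = 20                                    # prediction horizon
# Condensed prediction matrices: stacked states X = Phi x0 + Gamma U.
Phi = np.vstack([np.linalg.matrix_power(Ad, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        Gamma[2 * k:2 * k + 2, j:j + 1] = np.linalg.matrix_power(Ad, k - j) @ Bd

Wx = np.kron(np.eye(N), np.diag([10.0, 1.0]))   # state tracking weights
Wu = 0.1 * np.eye(N)                            # CoP effort weight

x = np.array([0.0, 0.0])                        # CoM position and velocity
v_ref = 0.3                                     # desired forward velocity (m/s)
for _ in range(200):
    # Reference: CoM advancing from its current position at constant velocity v_ref.
    Xref = np.concatenate([[x[0] + v_ref * (k + 1) * T, v_ref] for k in range(N)])
    # Unconstrained condensed MPC: min ||Wx (Phi x + Gamma U - Xref)||^2 + ||Wu U||^2.
    Als = np.vstack([Wx @ Gamma, Wu])
    bls = np.concatenate([Wx @ (Xref - Phi @ x), np.zeros(N)])
    U, *_ = np.linalg.lstsq(Als, bls, rcond=None)
    x = Ad @ x + Bd.flatten() * U[0]            # apply only the first CoP command
print(f"final CoM velocity: {x[1]:.3f} m/s (reference {v_ref} m/s)")
```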