
This content will become publicly available on June 1, 2023

Title: Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty
We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. We optimize over a class of nonlinear feedback policies inspired by certainty equivalent "estimate-and-cancel" control laws pioneered in classical adaptive control to achieve significant performance improvements in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive MPC, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety through persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods.
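As a rough illustration of the certainty-equivalent "estimate-and-cancel" idea the policy class builds on, here is a minimal scalar sketch. The dynamics, features, and gains below are hypothetical; the paper's method additionally handles state and input constraints, unmatched uncertainty, and high-probability safety certification.

```python
import numpy as np

# Scalar stand-in for the setting: nominally linear dynamics with an
# additive unknown nonlinearity, x_{t+1} = a*x_t + b*u_t + g(x_t).
# Here g is "matched" (it enters through the input channel), so a
# certainty-equivalent policy can subtract its estimate directly.
a, b = 1.0, 1.0
g = lambda x: 0.5 * np.sin(x)            # true unknown term

# Toy function approximation of g from samples (least squares on a
# sin feature; the paper uses online statistical estimation instead).
xs = np.linspace(-2.0, 2.0, 50)
theta = np.linalg.lstsq(np.sin(xs)[:, None], g(xs), rcond=None)[0][0]
g_hat = lambda x: theta * np.sin(x)

k = 0.8                                  # stabilizing nominal gain, |a - b*k| < 1
x = 2.0
for _ in range(30):
    u = -k * x - g_hat(x) / b            # "estimate-and-cancel" feedback
    x = a * x + b * u + g(x)
# x is driven near the origin despite the unknown additive term
```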
Authors:
Award ID(s):
1931815
Publication Date:
NSF-PAR ID:
10377027
Journal Name:
American Control Conference
Page Range or eLocation-ID:
906 - 913
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a novel technique for solving the problem of safe control for a general class of nonlinear, control-affine systems subject to parametric model uncertainty. Invoking Lyapunov analysis and the notion of fixed-time stability (FxTS), we introduce a parameter adaptation law which guarantees convergence of the estimates of unknown parameters in the system dynamics to their true values within a fixed-time independent of the initial parameter estimation error. We then synthesize the adaptation law with a robust, adaptive control barrier function (RaCBF) based quadratic program to compute safe control inputs despite the considered model uncertainty. To corroborate our results, we undertake a comparative case study on the efficacy of this result versus other recent approaches in the literature to safe control under uncertainty, and close by highlighting the value of our method in the context of an automobile overtake scenario.
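For intuition, the safety-filtering component (a CBF-based quadratic program) reduces to a closed-form clamp in a scalar, single-constraint case. The sketch below assumes known dynamics and omits the fixed-time parameter adaptation that is the paper's contribution.

```python
# Scalar CBF-QP illustration: keep x <= 1 (safe set h(x) = 1 - x >= 0)
# for integrator dynamics x_dot = u, deviating minimally from a desired input.

def cbf_filter(x, u_des, gamma=2.0):
    # QP: min (u - u_des)^2  s.t.  h_dot >= -gamma*h.  With h = 1 - x and
    # h_dot = -u, the constraint reads u <= gamma*(1 - x); the
    # minimum-deviation solution is a simple clamp.
    return min(u_des, gamma * (1.0 - x))

x, dt = 0.0, 0.01
for _ in range(1000):
    u = cbf_filter(x, u_des=5.0)   # nominal input pushes at the boundary
    x += dt * u                    # forward-Euler rollout
# x approaches the boundary x = 1 but never crosses it
```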
  2. A hybrid filtered basis function (FBF) approach is proposed in this paper for feedforward tracking control of linear systems with unmodeled nonlinear dynamics. Unlike most available tracking control techniques, the FBF approach is very versatile; it is applicable to any type of linear system, regardless of its underlying dynamics. The FBF approach expresses the control input to a system as a linear combination of basis functions with unknown coefficients. The basis functions are forward filtered through a linear model of the system's dynamics and the unknown coefficients are selected such that tracking error is minimized. The linear models used in existing implementations of the FBF approach are typically physics-based representations of the linear dynamics of a system. The proposed hybrid FBF approach expands the application of the FBF approach to systems with unmodeled nonlinearities by learning from data. A hybrid model is formulated by combining a physics-based model of the system's linear dynamics with a data-driven linear model that approximates the unmodeled nonlinear dynamics. The hybrid model is used online in a receding-horizon fashion to compute optimal control commands that minimize tracking errors. The proposed hybrid FBF approach is shown in simulations on a model of a vibration-prone 3D printer to improve tracking accuracy by up to 65.4%, compared to an existing FBF approach that does not incorporate data.
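The core FBF computation — filter a set of input basis functions through the linear model, then choose their coefficients by least squares — can be sketched as follows. A first-order plant and a sinusoidal basis are assumed here purely for illustration; the paper uses richer models and a hybrid physics/data model.

```python
import numpy as np

def simulate(u, a=0.9, b=0.1):
    """First-order plant y_{k+1} = a*y_k + b*u_k, standing in for the model."""
    y = np.zeros(len(u))
    for k in range(len(u) - 1):
        y[k + 1] = a * y[k] + b * u[k]
    return y

T = 200
t = np.arange(T)
target = np.sin(2 * np.pi * t / T)           # desired output trajectory

# Input basis functions, each forward-filtered through the plant model
basis = np.stack([np.sin(2 * np.pi * (j + 1) * t / T) for j in range(8)] +
                 [np.cos(2 * np.pi * (j + 1) * t / T) for j in range(8)], axis=1)
filtered = np.stack([simulate(basis[:, j]) for j in range(16)], axis=1)

# Coefficients chosen so the filtered response minimizes tracking error
coef, *_ = np.linalg.lstsq(filtered, target, rcond=None)
u = basis @ coef                             # feedforward control input
err = np.linalg.norm(simulate(u) - target) / np.linalg.norm(target)
# err is small: this target is exactly representable in the filtered basis
```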
  3. Motivated by connected and automated vehicle (CAV) technologies, this paper proposes a data-driven optimization-based Model Predictive Control (MPC) modeling framework for the Cooperative Adaptive Cruise Control (CACC) of a string of CAVs under uncertain traffic conditions. The proposed data-driven optimization-based MPC modeling framework aims to improve the stability, robustness, and safety of longitudinal cooperative automated driving involving a string of CAVs under uncertain traffic conditions using Vehicle-to-Vehicle (V2V) data. Based on an online learning-based driving dynamics prediction model, we predict the uncertain driving states of the vehicles preceding the controlled CAVs. With the predicted driving states of the preceding vehicles, we solve a constrained Finite-Horizon Optimal Control problem to predict the uncertain driving states of the controlled CAVs. To obtain the optimal acceleration or deceleration commands for the CAVs under uncertainties, we formulate a Distributionally Robust Stochastic Optimization (DRSO) model (i.e., a special case of data-driven optimization models under moment bounds) with a Distributionally Robust Chance Constraint (DRCC). The predicted uncertain driving states of the immediately preceding vehicles and the controlled CAVs are utilized in the safety constraint and the reference driving states of the DRSO-DRCC model. To solve the minimax program of the DRSO-DRCC model, we reformulate the relaxed dual problem of the original DRSO-DRCC model as a Semidefinite Program (SDP) based on strong duality theory and the Semidefinite Relaxation technique. In addition, we propose two methods for solving the relaxed SDP problem. We use Next Generation Simulation (NGSIM) data to demonstrate the proposed model in numerical experiments.
The experimental results and analyses demonstrate that the proposed model can obtain string-stable, robust, and safe longitudinal cooperative automated driving control of CAVs by proper settings, including the driving-dynamics prediction model, prediction horizon lengths, and time headways. Computational analyses are conducted to validate the efficiency of the proposed methods for solving the DRSO-DRCC model for real-time automated driving applications within proper settings.
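Under a moment-bound ambiguity set, a distributionally robust chance constraint on a scalar quantity has a well-known closed form via Cantelli's inequality, which conveys the flavor of the DRCC. The car-following numbers below are made up for illustration, not taken from the paper.

```python
import math

def drcc_satisfied(mu_gap, sigma_gap, d_safe, eps):
    """Worst-case guarantee of P(gap <= d_safe) <= eps over all
    distributions with mean mu_gap and std sigma_gap (Cantelli bound)."""
    return mu_gap - math.sqrt((1 - eps) / eps) * sigma_gap >= d_safe

# Illustrative numbers: a predicted mean gap of 12 m with std 1.5 m must
# exceed a 5 m safety gap except with probability at most 5%.
ok = drcc_satisfied(mu_gap=12.0, sigma_gap=1.5, d_safe=5.0, eps=0.05)
```

In the full DRSO-DRCC model this constraint appears jointly with the control variables, which is why the paper needs the SDP reformulation rather than a scalar check.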
  4. Learning how to effectively control unknown dynamical systems from data is crucial for intelligent autonomous systems. This task becomes a significant challenge when the underlying dynamics are changing with time. Motivated by this challenge, this paper considers the problem of controlling an unknown Markov jump linear system (MJS) to optimize a quadratic objective in a data-driven way. By taking a model-based perspective, we consider identification-based adaptive control for MJS. We first provide a system identification algorithm for MJS to learn the dynamics in each mode as well as the Markov transition matrix, underlying the evolution of the mode switches, from a single trajectory of the system states, inputs, and modes. Through mixing-time arguments, sample complexity of this algorithm is shown to be O(1/√T). We then propose an adaptive control scheme that performs system identification together with certainty equivalent control to adapt the controllers in an episodic fashion. Combining our sample complexity results with recent perturbation results for certainty equivalent control, we prove that when the episode lengths are appropriately chosen, the proposed adaptive control scheme achieves O(√T) regret. Our proof strategy introduces innovations to handle Markovian jumps and a weaker notion of stability common in MJSs. Our analysis provides insights into system theoretic quantities that affect learning accuracy and control performance. Numerical simulations are presented to further reinforce these insights.
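The identification-then-certainty-equivalence loop can be previewed in a single-mode scalar toy. The MJS algorithm additionally estimates per-mode dynamics and the Markov transition matrix and runs in episodes; everything below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, noise = 0.8, 0.5, 0.1              # true scalar dynamics (unknown to the controller)

# Phase 1: system identification from a single excited trajectory
x, X, U, Y = 0.0, [], [], []
for _ in range(200):
    u = rng.normal()                     # exploratory input
    x_next = a * x + b * u + noise * rng.normal()
    X.append(x); U.append(u); Y.append(x_next)
    x = x_next
a_hat, b_hat = np.linalg.lstsq(np.column_stack([X, U]), Y, rcond=None)[0]

# Phase 2: certainty-equivalent (here: deadbeat) control with the estimates
k = a_hat / b_hat                        # places the estimated closed-loop pole at zero
x = 5.0
for _ in range(50):
    x = a * x + b * (-k * x) + noise * rng.normal()
# both the estimation error and the regulated state end up small
```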
  5. Modern nonlinear control theory seeks to endow systems with properties such as stability and safety, and has been deployed successfully across various domains. Despite this success, model uncertainty remains a significant challenge in ensuring that model-based controllers transfer to real world systems. This paper develops a data-driven approach to robust control synthesis in the presence of model uncertainty using Control Certificate Functions (CCFs), resulting in a convex optimization based controller for achieving properties like stability and safety. An important benefit of our framework is nuanced data-dependent guarantees, which in principle can yield sample-efficient data collection approaches that need not fully determine the input-to-state relationship. This work serves as a starting point for addressing important questions at the intersection of nonlinear control theory and non-parametric learning, both theoretical and in application. We demonstrate the efficiency of the proposed method with respect to input data in simulation with an inverted pendulum in multiple experimental settings.
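As a sanity check of the certificate idea (with a known model, rather than the paper's data-driven certificates), a scalar CLF min-norm controller admits a closed form:

```python
# Scalar min-norm controller enforcing a stability certificate:
# x_dot = x + u with Lyapunov certificate V(x) = x**2 / 2.

def min_norm_clf(x, alpha=1.0):
    # QP: min u^2  s.t.  V_dot = x*(x + u) <= -alpha*V.
    # For x != 0 the constraint is active, giving u = -(1 + alpha/2)*x.
    return -(1.0 + alpha / 2.0) * x

x, dt = 2.0, 0.01
for _ in range(2000):
    x += dt * (x + min_norm_clf(x))      # forward-Euler closed loop
# V decays at rate alpha, so the state converges to the origin
```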