

Title: Adaptive model predictive control of nonlinear systems with state‐dependent uncertainties
Summary

This paper studies adaptive model predictive control (AMPC) of systems with time‐varying and potentially state‐dependent uncertainties. We propose an estimation and prediction architecture within the min‐max MPC framework. An adaptive estimator is presented to estimate the set‐valued measures of the uncertainty using a piecewise constant adaptive law, which can be made arbitrarily accurate if the sampling period of the adaptation is small enough. Based on these measures, a prediction scheme is provided that predicts the time‐varying feasible set of the uncertainty over the prediction horizon. We show that if the uncertainty and its first derivatives are locally Lipschitz, stability of the system under AMPC can always be guaranteed under the standard assumptions for traditional min‐max MPC approaches, while the AMPC algorithm enhances control performance by efficiently reducing the size of the feasible set of the uncertainty in the min‐max MPC setting. Copyright © 2017 John Wiley & Sons, Ltd.
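The sketch below illustrates, in deliberately simplified form, the two ingredients the summary describes: a piecewise constant adaptive law that re-estimates a bound on the uncertainty once per sampling period from the observed prediction error, and a min-max MPC step that optimizes against the worst case within that estimated set. It is not the paper's algorithm: the scalar dynamics, the grid-based search, the forgetting factor in the bound update, and all constants are hypothetical choices made only to keep the example self-contained.

```python
# Minimal, illustrative sketch for a scalar system x+ = x + dt*(u + w(x, t)):
#   (1) a piecewise-constant estimate of a bound on the state-dependent uncertainty w,
#       updated once per sampling period from the observed one-step prediction error;
#   (2) a crude min-max MPC step that searches a small grid of inputs and evaluates
#       each against the worst-case disturbance in the estimated set.
import numpy as np

dt, N = 0.1, 5                       # sampling period and prediction horizon
u_grid = np.linspace(-2.0, 2.0, 21)  # candidate (constant) inputs for the crude search

def true_uncertainty(x, t):
    return 0.3 * np.sin(x) + 0.1 * np.cos(t)    # unknown to the controller

def worst_case_cost(x0, u, w_bar):
    """Worst-case quadratic cost over the horizon for a constant input u,
    with the disturbance taking either extreme of [-w_bar, w_bar] at each step."""
    worst = -np.inf
    for signs in np.ndindex(*(2,) * N):          # enumerate extreme disturbance sequences
        x, cost = x0, 0.0
        for s in signs:
            w = w_bar if s else -w_bar
            cost += x**2 + 0.1 * u**2
            x = x + dt * (u + w)
        worst = max(worst, cost + 10.0 * x**2)   # crude terminal penalty
    return worst

x, w_bar = 2.0, 0.0                  # initial state and initial uncertainty bound
for k in range(50):
    t = k * dt
    # (2) min-max step: pick the input whose worst-case cost is smallest
    u = min(u_grid, key=lambda u_c: worst_case_cost(x, u_c, w_bar))
    x_pred = x + dt * u                          # nominal one-step prediction
    x = x + dt * (u + true_uncertainty(x, t))    # "plant" update
    # (1) piecewise-constant adaptive law: infer |w| over the last period from the
    # prediction error and keep a slowly forgetting bound on it
    w_est = abs(x - x_pred) / dt
    w_bar = max(0.95 * w_bar, w_est)
print(f"final state {x:+.3f}, estimated uncertainty bound {w_bar:.3f}")
```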

 
NSF-PAR ID:
10034678
Author(s) / Creator(s):
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
International Journal of Robust and Nonlinear Control
Volume:
27
Issue:
17
ISSN:
1049-8923
Page Range / eLocation ID:
p. 4138-4153
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Summary

    Scenario‐based model predictive control (MPC) methods can mitigate the conservativeness inherent to open‐loop robust MPC. Yet, the scenarios are often generated offline based on worst‐case uncertainty descriptions obtained a priori, which can in turn limit the improvements in the robust control performance. To this end, this paper presents a learning‐based, adaptive‐scenario‐tree model predictive control approach for uncertain nonlinear systems with time‐varying and/or hard‐to‐model dynamics. Bayesian neural networks (BNNs) are used to learn a state‐ and input‐dependent description of model uncertainty, namely the mismatch between a nominal (physics‐based or data‐driven) model of a system and its actual dynamics. We first present a new approach for training robust BNNs (RBNNs) using probabilistic Lipschitz bounds to provide a less conservative uncertainty quantification. Then, we present an approach to evaluate the credible intervals of RBNN predictions and determine the number of samples required for estimating the credible intervals given a credible level. The performance of RBNNs is evaluated against that of standard BNNs and Gaussian process (GP) regression as a basis for comparison. The RBNN description of plant‐model mismatch with verified accurate credible intervals is employed to generate adaptive scenarios online for scenario‐based MPC (sMPC). The proposed sMPC approach with adaptive scenario tree can improve the robust control performance with respect to sMPC with a fixed, worst‐case scenario tree and with respect to an adaptive‐scenario‐based MPC (asMPC) using GP regression, on a cold atmospheric plasma system. Furthermore, closed‐loop simulation results illustrate that robust model uncertainty learning via RBNNs can enhance the probability of constraint satisfaction of asMPC.
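A minimal sketch of the adaptive-scenario idea follows. A stand-in function plays the role of the learned (RBNN) credible interval on the plant-model mismatch; its lower, nominal, and upper values branch the prediction into scenarios that share the first control move. Everything here, including the dynamics, the placeholder credible interval, the three-branch tree, and the crude tail policy, is a hypothetical simplification rather than the authors' formulation.

```python
# Adaptive-scenario MPC sketch: branch the prediction on the lower/nominal/upper
# values of a (stand-in) learned credible interval on the plant-model mismatch.
import numpy as np

dt, N = 0.1, 4
u_grid = np.linspace(-1.5, 1.5, 31)

def nominal_model(x, u):
    return x + dt * (-x + u)                   # nominal (physics- or data-based) model

def credible_interval(x, u):
    """Stand-in for a learned uncertainty model: returns (lo, hi) bounds on the
    additive mismatch between the nominal model and the plant at (x, u)."""
    half_width = 0.05 + 0.1 * abs(x)           # hypothetical state-dependent width
    return -half_width, half_width

def scenario_cost(x0, u0):
    """Worst cost over three scenarios (lower/zero/upper mismatch), with the first
    input shared across scenarios and a simple feedback law for the remaining steps."""
    worst = -np.inf
    for branch in (0, 1, 2):
        x, cost, u = x0, 0.0, u0
        for _ in range(N):
            lo, hi = credible_interval(x, u)
            d = (lo, 0.0, hi)[branch]
            cost += x**2 + 0.1 * u**2
            x = nominal_model(x, u) + d
            u = -0.8 * x                        # crude tail policy after the first move
        worst = max(worst, cost + 5.0 * x**2)
    return worst

x = 1.5
for k in range(40):
    u = min(u_grid, key=lambda u0: scenario_cost(x, u0))
    x = nominal_model(x, u) + 0.08 * np.sin(3 * x)   # "true" plant with mismatch
print(f"state after 40 steps: {x:+.4f}")
```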

     
  2. This paper introduces a supervisory unit, called the stability governor (SG), that provides improved guarantees of stability for constrained linear systems under Model Predictive Control (MPC) without terminal constraints. At each time step, the SG alters the setpoint command supplied to the MPC problem so that the current state is guaranteed to be inside the region of attraction for an auxiliary equilibrium point. The proposed strategy is shown to be recursively feasible and asymptotically stabilizing for all initial states sufficiently close to any equilibrium of the system. Thus, asymptotic stability of the target equilibrium can be guaranteed for a large set of initial states even when a short prediction horizon is used. A numerical example demonstrates that the stability-governed MPC strategy can recover closed-loop stability in a scenario where a standard MPC implementation without terminal constraints leads to divergent trajectories.
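A toy illustration of the supervisory idea is sketched below: before each short-horizon MPC solve, the setpoint handed to the MPC is pulled toward the current state until the state lies inside a crude proxy for the region of attraction of that auxiliary setpoint. The scalar plant, the ball-of-radius-rho proxy, and the grid-search MPC are all assumptions made for illustration, not the paper's construction.

```python
# Stability-governor sketch: the MPC tracks an auxiliary setpoint chosen so that the
# current state lies inside a (crudely approximated) region of attraction around it.
import numpy as np

dt, N, rho = 0.1, 3, 0.4
u_grid = np.linspace(-1.0, 1.0, 41)     # input constraint |u| <= 1, gridded

def step(x, u):
    return x + dt * (x + u)             # open-loop unstable scalar plant

def mpc_input(x, r):
    """Short-horizon MPC without terminal ingredients: constant-input grid search."""
    def cost(u):
        xx, c = x, 0.0
        for _ in range(N):
            c += (xx - r)**2 + 0.01 * (u + r)**2   # u = -r holds the equilibrium x = r
            xx = step(xx, u)
        return c
    return min(u_grid, key=cost)

def stability_governor(x, r_target):
    """Replace the target with the closest auxiliary setpoint whose (proxy) region
    of attraction, here the ball of radius rho, contains the current state."""
    if abs(x - r_target) <= rho:
        return r_target
    return x + rho * np.sign(r_target - x)

x, r_target = -0.9, 0.8
for k in range(80):
    r_aux = stability_governor(x, r_target)
    u = mpc_input(x, r_aux)
    x = step(x, u)
print(f"state {x:+.3f} (target {r_target})")
```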
  3. A finite horizon nonlinear optimal control problem is considered for which the associated Hamiltonian satisfies a uniform semiconcavity property with respect to its state and costate variables. It is shown that the value function for this optimal control problem is equivalent to the value of a min-max game, provided the time horizon considered is sufficiently short. This further reduces to maximization of a linear functional over a convex set. It is further proposed that the min-max game can be relaxed to a type of stat (stationary) game, in which no time horizon constraint is involved. 
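Schematically, and only as a hedged paraphrase of the structure described above (the symbols below are generic placeholders rather than the paper's notation), the chain of reductions can be written as:

```latex
% Generic placeholders: \ell is a running cost, \psi a terminal cost, G a game payoff,
% C a convex set, and L a linear functional; this records only the shape of the three
% statements summarized in the abstract, not the paper's exact objects.
\begin{align*}
  V(t_0,x_0) &= \inf_{u(\cdot)} \Big\{ \int_{t_0}^{T} \ell\big(x(t),u(t)\big)\,dt
                 + \psi\big(x(T)\big) \Big\}
                 && \text{(optimal control value)} \\
             &= \min_{z}\,\max_{\alpha}\; G(t_0,x_0;z,\alpha)
                 && \text{(min-max game, } T-t_0 \text{ sufficiently small)} \\
             &= \max_{\mu \in C}\; \langle L(t_0,x_0),\, \mu \rangle
                 && \text{(linear functional maximized over a convex set)}
\end{align*}
```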
  4. Abstract
    An autonomous adaptive model predictive control (MPC) architecture is presented for control of heating, ventilation, and air conditioning (HVAC) systems to maintain indoor temperature while reducing energy use. Although equipment use and occupancy change with time, existing MPC methods are not capable of automatically relearning models and computing control decisions reliably for extended periods without intervention from a human expert. We seek to address this weakness. Two major features are embedded in the proposed architecture to enable autonomy: (i) a system identification algorithm from our prior work that periodically re-learns building dynamics and unmeasured internal heat loads from data without requiring re-tuning by experts; the estimated model is guaranteed to be stable and to have desirable physical properties irrespective of the data; (ii) an MPC planner with a convex approximation of the original nonconvex problem. The planner uses a descent and convergent method, with the underlying optimization problem being feasible and convex. A yearlong simulation with a realistic plant shows that both features of the proposed architecture, the periodic model and disturbance update and the convexification of the planning problem, are essential to obtain a performance improvement over a commonly used baseline controller. Without these features, long-term energy savings from MPC can be small, while with them the savings become substantial.
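The workflow described above, periodic re-identification plus a simple planning step, is sketched below for a single-zone toy model. The thermal model, the lumped term standing in for unmeasured internal heat loads, the least-squares re-fit, and the gridded box-constrained planner are all hypothetical simplifications; the abstract's actual identification algorithm and convexified planner are more sophisticated.

```python
# Autonomous adaptive MPC sketch: periodically re-fit a simple building model from
# logged data, then plan a cooling input against the current model.
import numpy as np

rng = np.random.default_rng(0)
T_set, u_max, horizon, relearn_every = 22.0, 3.0, 6, 48

def plant(T, u, k):
    load = 1.0 + 0.5 * np.sin(2 * np.pi * k / 96)   # unmeasured internal heat load
    # leak toward 30 C outdoors, cooling input u, internal load, small noise
    return 0.97 * T + 0.03 * 30.0 - 0.8 * u + 0.05 * load + 0.02 * rng.standard_normal()

def identify(history):
    """Least-squares fit of T+ ~ a*T + b*u + c from logged (T, u, T_next) triples."""
    H = np.array(history)
    A = np.column_stack([H[:, 0], H[:, 1], np.ones(len(H))])
    coef, *_ = np.linalg.lstsq(A, H[:, 2], rcond=None)
    return coef                                     # (a, b, c)

def plan(T, coef):
    """Pick the constant cooling input minimizing tracking + energy cost over the horizon."""
    a, b, c = coef
    def cost(u):
        TT, J = T, 0.0
        for _ in range(horizon):
            TT = a * TT + b * u + c
            J += (TT - T_set)**2 + 0.05 * u**2
        return J
    return min(np.linspace(0.0, u_max, 31), key=cost)

T, coef, history = 26.0, np.array([0.95, -0.5, 1.0]), []
for k in range(480):
    u = plan(T, coef)
    T_next = plant(T, u, k)
    history.append((T, u, T_next))
    if (k + 1) % relearn_every == 0:                # periodic, autonomous re-learning
        coef = identify(history[-relearn_every * 4:])
    T = T_next
print(f"room temperature after 480 steps: {T:.2f} C (setpoint {T_set} C)")
```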
  5. Model predictive control (MPC) provides a useful means for controlling systems with constraints, but suffers from the computational burden of repeatedly solving an optimization problem in real time. Offline (explicit) solutions for MPC attempt to alleviate real-time computational challenges using either multiparametric programming or machine learning. The multiparametric approaches are typically applied to linear or quadratic MPC problems, while learning-based approaches can be more flexible and are less memory-intensive. Existing learning-based approaches offer significant speedups, but the challenge becomes ensuring constraint satisfaction while maintaining good performance. In this paper, we provide a neural network parameterization of MPC policies that explicitly encodes the constraints of the problem. By exploring the interior of the MPC feasible set in an unsupervised learning paradigm, the neural network finds better policies faster than projection-based methods and exhibits substantially shorter solve times. We use the proposed policy to solve a robust MPC problem, and demonstrate the performance and computational gains on a standard test system.
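As a toy counterpart to the idea of encoding constraints in the policy parameterization, the sketch below squashes a small network's output into the admissible input box by construction and tunes it in an unsupervised way by minimizing a closed-loop rollout cost, with no pre-solved MPC solutions used as labels. The architecture, the random-search trainer, and all dimensions are hypothetical stand-ins, not the paper's method.

```python
# Constraint-encoding policy sketch: |u| <= u_max holds by construction via tanh
# squashing; the parameters are tuned unsupervised on a closed-loop rollout cost.
import numpy as np

rng = np.random.default_rng(1)
A, B = np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.0], [0.1]])
u_max, hidden = 0.5, 8

def unpack(theta):
    W1 = theta[:2 * hidden].reshape(hidden, 2)
    b1 = theta[2 * hidden:3 * hidden]
    W2 = theta[3 * hidden:4 * hidden].reshape(1, hidden)
    return W1, b1, W2

def policy(theta, x):
    """One-hidden-layer policy; the input constraint is enforced by the tanh scaling."""
    W1, b1, W2 = unpack(theta)
    return u_max * np.tanh(W2 @ np.tanh(W1 @ x + b1))

def rollout_cost(theta):
    """Closed-loop quadratic cost from a few fixed initial states (no MPC labels)."""
    J = 0.0
    for x0 in ([1.0, 0.0], [-0.5, 0.4], [0.2, -0.6]):
        x = np.array(x0)
        for _ in range(40):
            u = policy(theta, x)
            J += x @ x + 0.1 * float(u @ u)
            x = A @ x + (B @ u).ravel()
    return J

theta = 0.1 * rng.standard_normal(4 * hidden)
best = rollout_cost(theta)
for _ in range(300):                      # crude random-search "training" loop
    cand = theta + 0.05 * rng.standard_normal(theta.shape)
    if (c := rollout_cost(cand)) < best:
        theta, best = cand, c
print(f"closed-loop cost after training: {best:.2f}")
```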