Title: Learning‐based adaptive‐scenario‐tree model predictive control with improved probabilistic safety using robust Bayesian neural networks
Summary: Scenario-based model predictive control (MPC) methods can mitigate the conservativeness inherent to open-loop robust MPC. Yet, the scenarios are often generated offline based on worst-case uncertainty descriptions obtained a priori, which can in turn limit the improvements in the robust control performance. To this end, this paper presents a learning-based, adaptive-scenario-tree model predictive control approach for uncertain nonlinear systems with time-varying and/or hard-to-model dynamics. Bayesian neural networks (BNNs) are used to learn a state- and input-dependent description of model uncertainty, namely the mismatch between a nominal (physics-based or data-driven) model of a system and its actual dynamics. We first present a new approach for training robust BNNs (RBNNs) using probabilistic Lipschitz bounds to provide a less conservative uncertainty quantification. Then, we present an approach to evaluate the credible intervals of RBNN predictions and determine the number of samples required for estimating the credible intervals at a given credible level. The performance of RBNNs is evaluated against that of standard BNNs and Gaussian process (GP) regression as a basis of comparison. The RBNN description of plant-model mismatch, with verified accurate credible intervals, is employed to generate adaptive scenarios online for scenario-based MPC (sMPC). On a cold atmospheric plasma system, the proposed sMPC approach with an adaptive scenario tree improves the robust control performance with respect to sMPC with a fixed, worst-case scenario tree and with respect to an adaptive-scenario-based MPC (asMPC) using GP regression. Furthermore, closed-loop simulation results illustrate that robust model uncertainty learning via RBNNs can enhance the probability of constraint satisfaction of asMPC.
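As a minimal, illustrative sketch of the credible-interval step described above (assumptions only, not the authors' implementation): Monte Carlo samples of the BNN's mismatch prediction at the current state and input are reduced to an empirical credible interval, and the interval bounds, together with the sample mean, are used as branches of the adaptive scenario tree. The sampler sample_mismatch, the synthetic toy_bnn, and all dimensions are hypothetical placeholders; in the paper, the required number of samples is determined from the desired credible level, whereas a fixed count is used here purely for illustration.

    import numpy as np

    def credible_interval(samples, level=0.95):
        # Empirical two-sided credible interval from Monte Carlo samples.
        alpha = 1.0 - level
        lower = np.quantile(samples, alpha / 2.0, axis=0)
        upper = np.quantile(samples, 1.0 - alpha / 2.0, axis=0)
        return lower, upper

    def adaptive_scenarios(sample_mismatch, x, u, n_samples=500, level=0.95):
        # sample_mismatch(x, u, n) is a hypothetical stand-in for drawing n
        # Monte Carlo predictions of the plant-model mismatch from the (R)BNN.
        d = sample_mismatch(x, u, n_samples)          # shape (n_samples, n_x)
        lower, upper = credible_interval(d, level)
        return [lower, d.mean(axis=0), upper]         # branches of the scenario tree

    # Toy usage with a synthetic sampler standing in for the trained (R)BNN.
    rng = np.random.default_rng(0)
    toy_bnn = lambda x, u, n: 0.1 * np.sin(x) + 0.05 * rng.standard_normal((n, x.size))
    print(adaptive_scenarios(toy_bnn, x=np.array([1.0, 0.5]), u=np.array([0.2])))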
Award ID(s):
2302219
PAR ID:
10442867
Author(s) / Creator(s):
 ;  ;  ;  
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
International Journal of Robust and Nonlinear Control
Volume:
33
Issue:
5
ISSN:
1049-8923
Page Range / eLocation ID:
p. 3312-3333
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper introduces a computationally efficient approach for solving Model Predictive Control (MPC) reference-tracking problems with state and control constraints. The approach consists of three key components: first, a log-domain interior-point quadratic programming method that forms the basis of the overall approach; second, a method of warm-starting this optimizer using the MPC solution from the previous timestep; and third, a computational governor that bounds the suboptimality of the warm start by altering the reference command provided to the MPC problem. As a result, the closed-loop system is altered so that MPC solutions can be computed using fewer optimizer iterations per timestep. In a numerical experiment, the computational governor reduces the worst-case computation time of a standard MPC implementation by 90%, while maintaining good closed-loop performance.
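    A minimal sketch of the warm-starting component only, under generic receding-horizon assumptions (not the paper's log-domain interior-point solver): the previous optimal input sequence is shifted one step forward and reused as the initial guess for the next solve. The solve_qp interface and the dummy solver are hypothetical placeholders.

      import numpy as np

      def shift_warm_start(u_prev):
          # Shift the previous optimal sequence [u_0, ..., u_{N-1}] one step
          # forward and repeat the last input: [u_1, ..., u_{N-1}, u_{N-1}].
          return np.vstack([u_prev[1:], u_prev[-1:]])

      def mpc_step(solve_qp, x, u_prev):
          # solve_qp(x, u_init) is a placeholder for the underlying QP solver;
          # it only needs to accept the shifted sequence as an initial guess.
          u_init = shift_warm_start(u_prev)
          u_seq = solve_qp(x, u_init)
          return u_seq[0], u_seq        # apply the first input, keep the sequence for next time

      # Toy usage with a dummy "solver" that simply returns its initial guess.
      dummy_solver = lambda x, u_init: u_init
      u0, u_seq = mpc_step(dummy_solver, x=np.zeros(2), u_prev=np.zeros((10, 1)))
      print(u0)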
  2. This paper presents a framework to refine identified artificial neural network (ANN)-based state-space linear parameter-varying (LPV-SS) models with closed-loop data using online transfer learning. An LPV-SS model is assumed to be first identified offline using input/output data, and a model predictive controller (MPC) is designed based on this model. Using collected closed-loop batch data, the model is further refined via online transfer learning, and the control performance is thereby improved. Specifically, fine-tuning, a transfer learning technique, is employed to improve the model. Furthermore, the scenario where the offline-identified model and the online controlled system are “similar but not identical” is discussed. The proposed method is verified by testing on an experimentally validated high-fidelity reactivity controlled compression ignition (RCCI) engine model. The verification results show that the new online transfer learning technique, combined with an adaptive MPC law, improves the engine control performance in tracking requested engine loads and desired combustion phasing with minimal error.
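    A hedged sketch of the fine-tuning idea, with a generic neural state-space model standing in for the identified ANN-based LPV-SS model; the architecture, dimensions, learning rate, and synthetic data below are illustrative assumptions rather than the paper's setup.

      import torch
      import torch.nn as nn

      class NeuralSS(nn.Module):
          # Toy neural state-space model x_{k+1} = f(x_k, u_k); a generic stand-in
          # for the more specific LPV-SS/ANN structure used in the paper.
          def __init__(self, n_x=2, n_u=1, hidden=32):
              super().__init__()
              self.f = nn.Sequential(nn.Linear(n_x + n_u, hidden), nn.Tanh(),
                                     nn.Linear(hidden, n_x))

          def forward(self, x, u):
              return self.f(torch.cat([x, u], dim=-1))

      def fine_tune(model, x_k, u_k, x_next, epochs=20, lr=1e-4):
          # Refine the offline-identified model on closed-loop batches; the small
          # learning rate keeps the fine-tuned model close to the offline one.
          opt = torch.optim.Adam(model.parameters(), lr=lr)
          loss_fn = nn.MSELoss()
          for _ in range(epochs):
              opt.zero_grad()
              loss = loss_fn(model(x_k, u_k), x_next)
              loss.backward()
              opt.step()
          return model

      # Synthetic closed-loop batch standing in for logged engine data.
      x_k, u_k = torch.randn(64, 2), torch.randn(64, 1)
      x_next = 0.9 * x_k + 0.1 * u_k
      model = fine_tune(NeuralSS(), x_k, u_k, x_next)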
  3. We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. We optimize over a class of nonlinear feedback policies inspired by certainty equivalent "estimate-and-cancel" control laws pioneered in classical adaptive control to achieve significant performance improvements in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive MPC, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety through persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods. 
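    A minimal sketch of the certainty-equivalent "estimate-and-cancel" idea for a nominally linear system with an additive unknown term; the matrices, feedback gain, and uncertainty estimate are illustrative assumptions, and the paper's constraint handling and statistical safety certification are omitted.

      import numpy as np

      # Nominal discretized double-integrator model and a stabilizing feedback gain (illustrative values).
      A = np.array([[1.0, 0.1],
                    [0.0, 1.0]])
      B = np.array([[0.0],
                    [0.1]])
      K = np.array([[-1.2, -2.5]])

      def certainty_equivalent_input(x, g_hat):
          # Nominal linear feedback plus (approximate) cancellation of the
          # estimated additive uncertainty through the input channel.
          return K @ x - np.linalg.pinv(B) @ g_hat(x)

      # Toy rollout: a synthetic "true" unknown term and an imperfect learned estimate of it.
      g_true = lambda x: np.array([0.0, 0.05 * np.sin(x[0])])
      g_hat = lambda x: np.array([0.0, 0.04 * np.sin(x[0])])
      x = np.array([1.0, 0.0])
      for _ in range(50):
          u = certainty_equivalent_input(x, g_hat)
          x = A @ x + B @ u + g_true(x)
      print(x)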
  4. This paper introduces an approach for reducing the computational cost of implementing Linear Quadratic Model Predictive Control (MPC) for set-point tracking subject to pointwise-in-time state and control constraints. The approach consists of three key components: first, a log-domain interior-point method used to solve the receding horizon optimal control problems; second, a method of warm-starting this optimizer by using the MPC solution from the previous timestep; and third, a computational governor that maintains feasibility and bounds the suboptimality of the warm start by altering the reference command provided to the MPC problem. Theoretical guarantees regarding the recursive feasibility of the MPC problem, asymptotic stability of the target equilibrium, and finite-time convergence of the reference signal are provided for the resulting closed-loop system. In a numerical experiment on a lateral vehicle dynamics model, the worst-case execution time of a standard MPC implementation is reduced by over a factor of 10 when the computational governor is added to the closed-loop system.
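    An illustrative sketch of the computational-governor logic under hypothetical interfaces: the reference handed to the MPC problem is blended from the previously applied reference toward the requested set-point only as far as a bound on the warm-start suboptimality permits. The suboptimality callable and its synthetic stand-in below are placeholders, not the paper's certified bound.

      import numpy as np

      def govern_reference(r_request, r_prev, suboptimality, bound, levels=5):
          # Try progressively larger blends from the previously applied reference
          # toward the requested one; keep the largest blend whose warm-start
          # suboptimality bound is still acceptable.
          best = r_prev
          for alpha in np.linspace(0.0, 1.0, levels + 1)[1:]:
              candidate = (1.0 - alpha) * r_prev + alpha * r_request
              if suboptimality(candidate) > bound:
                  break
              best = candidate
          return best

      # Toy usage: a synthetic suboptimality bound that grows with the reference change.
      sub = lambda r: float(np.linalg.norm(r))
      print(govern_reference(np.array([1.0, 1.0]), np.array([0.0, 0.0]), sub, bound=0.8))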