

Title: Soft‐constrained model predictive control based on data‐driven distributionally robust optimization
Abstract

This article proposes a novel distributionally robust optimization (DRO)‐based soft‐constrained model predictive control (MPC) framework to explicitly hedge against unknown external input terms in a linear state‐space system. Without a priori knowledge of the exact uncertainty distribution, the framework works with a lifted ambiguity set constructed using machine learning to incorporate first‐order moment information. By adopting a linear performance measure and enforcing input and state constraints robustly with respect to a lifted support set, the DRO‐based MPC is reformulated as a robust optimization problem. The constraints are softened to ensure recursive feasibility. Theoretical results on optimality, feasibility, and stability are further discussed. Performance and computational efficiency of the proposed method are illustrated on motion control and building energy control systems, achieving 18.3% lower cost and 78.8% fewer constraint violations, respectively, while requiring one third of the CPU time of multi‐stage scenario‐based stochastic MPC.
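
A minimal sketch of the softening mechanism is given below, assuming a simple linear system with a box‐bounded disturbance: state constraints are tightened for the worst‐case disturbance accumulated over the support set and then relaxed with penalized slack variables, so the optimization remains feasible even when the tightened bounds cannot be met. This illustrates the general idea only, not the paper's lifted‐ambiguity‐set reformulation; the matrices, bounds, penalty weight, and use of cvxpy are placeholders chosen for the example.

```python
# Soft-constrained robust MPC sketch for x_{k+1} = A x_k + B u_k + w_k, where the
# disturbance w_k is only known to lie in the box |w| <= w_max.  State bounds are
# tightened by the worst-case disturbance spread and softened with slack variables.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 10                            # prediction horizon
w_max = np.array([0.01, 0.05])    # box support set of the disturbance
x_max = np.array([2.0, 1.0])      # nominal state bounds
u_max = 1.0
rho = 1e3                         # slack penalty weight (soft constraints)

def solve_mpc(x0):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    s = cp.Variable((2, N + 1), nonneg=True)   # slack variables
    cost, constr = 0, [x[:, 0] == x0]
    spread = np.zeros(2)                       # worst-case disturbance accumulation
    for k in range(N):
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= u_max]
        spread = np.abs(A) @ spread + w_max    # grow the componentwise error bound
        # tightened, softened state constraint at step k+1
        constr += [cp.abs(x[:, k + 1]) <= x_max - spread + s[:, k + 1]]
        cost += cp.norm(x[:, k], 1) + cp.norm(u[:, k], 1)   # linear performance measure
    cost += rho * cp.sum(s)
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u.value[:, 0]

print(solve_mpc(np.array([1.5, 0.0])))
```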

 
NSF-PAR ID:
10452087
Author(s) / Creator(s):
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
AIChE Journal
Volume:
66
Issue:
10
ISSN:
0001-1541
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Model predictive control (MPC) provides a useful means for controlling systems with constraints, but suffers from the computational burden of repeatedly solving an optimization problem in real time. Offline (explicit) solutions for MPC attempt to alleviate real-time computational challenges using either multiparametric programming or machine learning. The multiparametric approaches are typically applied to linear or quadratic MPC problems, while learning-based approaches can be more flexible and are less memory-intensive. Existing learning-based approaches offer significant speedups, but the challenge becomes ensuring constraint satisfaction while maintaining good performance. In this paper, we provide a neural network parameterization of MPC policies that explicitly encodes the constraints of the problem. By exploring the interior of the MPC feasible set in an unsupervised learning paradigm, the neural network finds better policies faster than projection-based methods and exhibits substantially shorter solve times. We use the proposed policy to solve a robust MPC problem, and demonstrate the performance and computational gains on a standard test system.
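
    One common way to encode constraints directly in a policy network (a sketch of the general idea, not necessarily the parameterization used in this work) is to let the network output convex-combination weights over the vertices of the polytopic input set, so every output is feasible by construction and no projection step is needed; the layer sizes and vertex set below are illustrative.

```python
# Policy network whose outputs lie in a polytopic input set by construction:
# a softmax produces convex weights over precomputed vertices of the set.
import torch
import torch.nn as nn

class SimplexPolicy(nn.Module):
    def __init__(self, nx, vertices, hidden=64):
        super().__init__()
        self.vertices = torch.as_tensor(vertices, dtype=torch.float32)  # (n_v, nu)
        self.net = nn.Sequential(
            nn.Linear(nx, hidden), nn.ReLU(),
            nn.Linear(hidden, self.vertices.shape[0]),
        )

    def forward(self, x):
        w = torch.softmax(self.net(x), dim=-1)   # strictly positive weights, sum to 1
        return w @ self.vertices                 # point inside the constraint polytope

# vertices of the 2-D box input set |u_i| <= 1 (illustrative)
verts = [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]
policy = SimplexPolicy(nx=4, vertices=verts)
u = policy(torch.zeros(1, 4))                    # feasible control for a zero state
```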
  2. For energy-efficient Connected and Automated Vehicle (CAV) Eco-driving control on signalized arterials under uncertain traffic conditions, this paper explicitly considers traffic control device (e.g., road markings, traffic signs, and traffic signals) and road geometry (e.g., road shapes, road boundaries, and road grades) constraints in a data-driven optimization-based Model Predictive Control (MPC) modeling framework. This modeling framework uses real-time vehicle driving and traffic signal data via Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications. In the MPC-based control model, this paper mathematically formulates location-based traffic control device and road geometry constraints using geographic information from High-Definition (HD) maps. These location-based constraints have the potential to improve the safety, energy efficiency, driving comfort, and robustness of connected and automated driving on real roads by considering interrupted-flow facility locations and road geometry in the formulation. We predict a set of uncertain driving states for the preceding vehicles through an online learning-based driving dynamics prediction model. We then solve a constrained finite-horizon optimal control problem with the predicted driving states to obtain a set of Eco-driving references for the controlled vehicle. To obtain the optimal acceleration or deceleration commands for the controlled vehicle with the set of Eco-driving references, we formulate a Distributionally Robust Stochastic Optimization (DRSO) model (i.e., a special case of data-driven optimization models under moment bounds) with Distributionally Robust Chance Constraints (DRCC) that includes the location-based traffic control device and road geometry constraints. We design experiments to demonstrate the proposed model under different traffic conditions using real-world connected vehicle trajectory data and Signal Phasing and Timing (SPaT) data on a coordinated arterial with six actuated intersections on Fuller Road in Ann Arbor, Michigan, from the Safety Pilot Model Deployment (SPMD) project.
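
    A minimal sketch of the moment-based distributionally robust chance constraint ingredient (the generic reformulation idea, not this paper's full DRSO/DRCC model): when only the mean and variance of an uncertain quantity are known, the one-sided Chebyshev (Cantelli) bound turns the chance constraint into a deterministic inequality. The car-following numbers below are illustrative placeholders.

```python
# Distributionally robust chance constraint under moment bounds: require
# P(leader position d >= x + d_safe) >= 1 - eps for every distribution of d with
# mean mu and variance sigma^2.  By the Cantelli inequality this is equivalent to
#     x + d_safe + sqrt((1 - eps) / eps) * sigma <= mu.
import numpy as np
import cvxpy as cp

eps = 0.05                        # allowed violation probability
mu, sigma = 20.0, 3.0             # moments of the predicted leader position (illustrative)
d_safe = 5.0                      # required safety gap
kappa = np.sqrt((1 - eps) / eps)

x = cp.Variable()                 # ego vehicle position to be optimized
constraints = [x + d_safe + kappa * sigma <= mu]   # DRCC reformulated as a linear constraint
cp.Problem(cp.Maximize(x), constraints).solve()    # e.g., make as much progress as possible
print(x.value)
```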
  3. Summary

    Scenario‐based model predictive control (MPC) methods can mitigate the conservativeness inherent to open‐loop robust MPC. Yet, the scenarios are often generated offline based on worst‐case uncertainty descriptions obtained a priori, which can in turn limit the improvements in the robust control performance. To this end, this paper presents a learning‐based, adaptive‐scenario‐tree model predictive control approach for uncertain nonlinear systems with time‐varying and/or hard‐to‐model dynamics. Bayesian neural networks (BNNs) are used to learn a state‐ and input‐dependent description of model uncertainty, namely the mismatch between a nominal (physics‐based or data‐driven) model of a system and its actual dynamics. We first present a new approach for training robust BNNs (RBNNs) using probabilistic Lipschitz bounds to provide a less conservative uncertainty quantification. Then, we present an approach to evaluate the credible intervals of RBNN predictions and determine the number of samples required for estimating the credible intervals given a credible level. The performance of RBNNs is evaluated with respect to that of standard BNNs and Gaussian process (GP) regression as a basis of comparison. The RBNN description of plant‐model mismatch with verified accurate credible intervals is employed to generate adaptive scenarios online for scenario‐based MPC (sMPC). The proposed sMPC approach with an adaptive scenario tree can improve the robust control performance with respect to sMPC with a fixed, worst‐case scenario tree and with respect to an adaptive‐scenario‐based MPC (asMPC) using GP regression on a cold atmospheric plasma system. Furthermore, closed‐loop simulation results illustrate that robust model uncertainty learning via RBNNs can enhance the probability of constraint satisfaction of asMPC.
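
    The adaptive-scenario generation step can be sketched as follows (a simplified illustration; details differ from the paper): sample the learned mismatch model at the current operating point, estimate a credible interval from the draws, and pass its endpoints plus the median to the scenario tree for the next sMPC solve. The sampling function is a hypothetical stand-in for a trained (R)BNN.

```python
# Build scenario branches from Monte Carlo draws of a learned plant-model
# mismatch predictor.  `sample_mismatch` is a placeholder for the (R)BNN.
import numpy as np

def sample_mismatch(x, u, n_samples=500, rng=np.random.default_rng(0)):
    # placeholder predictive distribution of the mismatch at (x, u)
    return 0.1 * np.sin(x) + 0.05 * u + 0.05 * rng.standard_normal(n_samples)

def adaptive_scenarios(x, u, credible_level=0.95):
    draws = sample_mismatch(x, u)
    lo, hi = np.quantile(draws, [(1 - credible_level) / 2,
                                 (1 + credible_level) / 2])
    return [lo, np.median(draws), hi]   # three uncertainty branches for the scenario tree

print(adaptive_scenarios(x=0.3, u=0.1))
```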

     
  4. Abstract

    This article focuses on the development of distributed robust model predictive control (MPC) methods for multiple connected and automated vehicles (CAVs) to ensure their safe operation in the presence of uncertainty. The proposed layered control framework includes reference trajectory generation, distributionally robust obstacle occupancy set computation, distributed state constraint set evaluation, data-driven linear model representation, and robust tube-based MPC design. To enable distributed operation among the CAVs, we present a method that exploits sampling-based reference trajectory generation and distributed constraint set evaluation to decouple the coupled collision avoidance constraints among the CAVs. This is followed by a data-driven linear model representation of the nonlinear system, which yields a convex equivalent of the nonlinear control problem. Finally, to ensure safe operation in the presence of uncertainty, this article employs a robust tube-based MPC method. For a multiple-CAV lane change problem, simulation results show the efficacy of the proposed controller in terms of computational efficiency and the ability to generate safe and smooth CAV trajectories in a distributed fashion.
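
    A minimal sketch of the robust tube-based MPC ingredient (illustrative only, not the distributed multi-vehicle design of this article): plan a nominal trajectory against constraints tightened by a precomputed tube margin and apply the ancillary feedback u = v + K(x - z). The model, gain, and tube bound below are assumed placeholders.

```python
# Tube-based MPC sketch: nominal plan (z, v) with tightened constraints, plus
# ancillary feedback that keeps the true state in a tube around the plan.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-5.0, -3.0]])          # ancillary feedback gain (assumed given)
tube = np.array([0.2, 0.1])           # bound on |x - z| inside the tube (assumed)
N, x_max, u_max = 10, np.array([5.0, 2.0]), 2.0

def tube_mpc(x0):
    z = cp.Variable((2, N + 1))       # nominal state trajectory
    v = cp.Variable((1, N))           # nominal input trajectory
    constr = [cp.abs(z[:, 0] - x0) <= tube]                # start inside the tube
    cost = 0
    for k in range(N):
        constr += [z[:, k + 1] == A @ z[:, k] + B @ v[:, k],
                   cp.abs(z[:, k + 1]) <= x_max - tube,            # tightened state set
                   cp.abs(v[:, k]) <= u_max - np.abs(K) @ tube]    # tightened input set
        cost += cp.sum_squares(z[:, k]) + cp.sum_squares(v[:, k])
    cp.Problem(cp.Minimize(cost), constr).solve()
    return v.value[:, 0] + K @ (x0 - z.value[:, 0])        # u = v + K(x - z)

print(tube_mpc(np.array([3.0, 0.5])))
```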

     
  5. We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. We optimize over a class of nonlinear feedback policies inspired by certainty-equivalent "estimate-and-cancel" control laws pioneered in classical adaptive control to achieve significant performance improvements in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive MPC, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system's safety through persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods.
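
    The certainty-equivalent "estimate-and-cancel" idea can be sketched as follows (a simplified stand-in, not this paper's policy class, estimator, or safety certificate): fit the unknown additive term online from one-step prediction residuals and cancel the estimate through the input channel. The feature map, gains, and recursive least-squares update are illustrative assumptions.

```python
# Estimate-and-cancel sketch for x_{k+1} = A x_k + B u_k + g(x_k): estimate g
# online with recursive least squares over fixed features, then cancel it.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-3.0, -2.0]])                 # stabilizing nominal feedback (assumed)
B_pinv = np.linalg.pinv(B)

def features(x):                             # hypothetical feature map for g(x)
    return np.array([np.sin(x[0]), x[1] ** 2, 1.0])

theta = np.zeros((2, 3))                     # weights of the estimate g_hat(x) = theta @ features(x)
P = 100.0 * np.eye(3)                        # RLS covariance

def control(x):
    g_hat = theta @ features(x)
    return K @ x - B_pinv @ g_hat            # nominal feedback plus cancellation term

def rls_update(x, u, x_next):
    global theta, P
    phi = features(x)
    resid = x_next - (A @ x + B @ u)         # observed contribution of g(x) (plus noise)
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta += np.outer(resid - theta @ phi, gain)
    P -= np.outer(gain, phi @ P)

# one illustrative closed-loop step with a hypothetical true g
g_true = lambda x: np.array([0.0, 0.2 * np.sin(x[0])])
x = np.array([0.5, 0.0])
u = control(x)
x_next = A @ x + B @ u + g_true(x)
rls_update(x, u, x_next)
```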