Title: Risk-Aware Model Predictive Control Enabled by Bayesian Learning
The performance of a model predictive controller depends on the accuracy of the objective and of the prediction model of the system. Although significant effort has been dedicated to improving the robustness of model predictive control (MPC), these efforts typically do not take a risk-averse perspective. In this paper, we propose a risk-aware MPC framework that estimates the underlying parameter distribution using online Bayesian learning and derives a risk-aware control policy by reformulating classical MPC problems as Bayesian Risk Optimization (BRO) problems. The consistency of the Bayesian estimator and the convergence of the control policy are rigorously proved. Furthermore, we investigate the consistency requirement and propose a risk monitoring mechanism to guarantee that it is satisfied. Simulation results demonstrate the effectiveness of the proposed approach.
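The following is a minimal, illustrative sketch (not the authors' implementation) of the ideas in the abstract: a conjugate Gaussian posterior over an unknown scalar gain is updated online from observed transitions, and each candidate input sequence is scored by the conditional value-at-risk (CVaR) of its predicted cost over posterior samples, one common way to instantiate a Bayesian Risk Optimization objective. The scalar linear system, the random-shooting search over inputs, the CVaR risk level, and all numerical values are assumptions made for this sketch only.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Online Bayesian learning: conjugate Gaussian posterior over the unknown gain ---
mu, var = 0.0, 1.0          # assumed prior N(mu, var) over theta
noise_var = 0.05            # assumed process-noise variance

def bayes_update(mu, var, x_prev, u_prev, x_next):
    """Posterior update for theta given one transition x_next = theta*x_prev + u_prev + w."""
    y = x_next - u_prev                      # noisy observation of theta * x_prev
    prec = 1.0 / var + x_prev ** 2 / noise_var
    var_new = 1.0 / prec
    mu_new = var_new * (mu / var + x_prev * y / noise_var)
    return mu_new, var_new

# --- Risk-aware objective: CVaR of the predicted cost over posterior samples ---
def cvar(costs, alpha=0.9):
    """Mean of the worst (1 - alpha) fraction of the sampled costs."""
    tail = np.sort(costs)[int(alpha * len(costs)):]
    return tail.mean()

def risk_aware_mpc(x0, mu, var, horizon=5, n_samples=100, n_candidates=100):
    """Pick the first input of the candidate sequence with the lowest Bayesian risk."""
    thetas = rng.normal(mu, np.sqrt(var), size=n_samples)   # posterior samples
    best_u, best_risk = 0.0, np.inf
    for _ in range(n_candidates):                            # random shooting over input sequences
        u_seq = rng.uniform(-1.0, 1.0, size=horizon)
        costs = np.empty(n_samples)
        for i, th in enumerate(thetas):                      # predicted cost under each sampled model
            x, c = x0, 0.0
            for u in u_seq:
                c += x ** 2 + 0.1 * u ** 2
                x = th * x + u
            costs[i] = c + x ** 2
        risk = cvar(costs)
        if risk < best_risk:
            best_risk, best_u = risk, u_seq[0]               # receding horizon: apply first input only
    return best_u

# --- Closed loop: act, observe the true (unknown) system, update the posterior ---
theta_true, x = 1.2, 1.0
for k in range(10):
    u = risk_aware_mpc(x, mu, var)
    x_next = theta_true * x + u + rng.normal(0.0, np.sqrt(noise_var))
    mu, var = bayes_update(mu, var, x, u, x_next)
    x = x_next
```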
Authors:
Award ID(s):
1828678, 1849228, 1934836
Publication Date:
NSF-PAR ID:
10359107
Journal Name:
Proceedings of 2022 American Control Conference
Page Range or eLocation-ID:
108 to 113
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper reports on developing an integrated framework for safety-aware informative motion planning suitable for legged robots. The information-gathering planner takes a dense stochastic map of the environment into account, while safety constraints are enforced via Control Barrier Functions (CBFs). The planner is based on the Incrementally-exploring Information Gathering (IIG) algorithm and allows closed-loop kinodynamic node expansion using a Model Predictive Control (MPC) formalism. Robotic exploration and information gathering problems are inherently path-dependent problems. That is, the information collected along a path depends on the state and observation history. As such, motion planning solely based on a modular cost does not lead to suitable plans for exploration. We propose SAFE-IIG, an integrated informative motion planning algorithm that takes into account: 1) a robot’s perceptual field of view via a submodular information function computed over a stochastic map of the environment, 2) a robot’s dynamics and safety constraints via discrete-time CBFs and MPC for closed-loop multi-horizon node expansions, and 3) an automatic stopping criterion via setting an information-theoretic planning horizon. The simulation results show that SAFE-IIG can plan a safe and dynamically feasible path while exploring a dense map. A minimal illustration of the discrete-time CBF screening step appears after this list.
  2. Training self-driving systems to be robust to the long-tail of driving scenarios is a critical problem. Model-based approaches leverage simulation to emulate a wide range of scenarios without putting users at risk in the real world. One promising path to faithful simulation is to train a forward model of the world to predict the future states of both the environment and the ego-vehicle given past states and a sequence of actions. In this paper, we argue that it is beneficial to model the state of the ego-vehicle, which often has simple, predictable and deterministic behavior, separately from the rest of the environment, which is much more complex and highly multimodal. We propose to model the ego-vehicle using a simple and differentiable kinematic model, while training a stochastic convolutional forward model on raster representations of the state to predict the behavior of the rest of the environment. We explore several configurations of such decoupled models, and evaluate their performance both with Model Predictive Control (MPC) and direct policy learning. We test our methods on the task of highway driving and demonstrate lower crash rates and better stability. The code is available at https://github.com/vladisai/pytorch-PPUU/tree/ICLR2022. A sketch of a kinematic ego-vehicle model in this spirit appears after this list.
  3. The active control of stormwater systems is a potential solution to increased street flooding in low-lying, low-relief coastal cities due to climate change and accompanying sea level rise. Model predictive control (MPC) has been shown to be a successful control strategy in general, as well as for managing urban drainage specifically. This research describes and demonstrates the implementation of MPC for urban drainage systems using open-source software (Python and the United States Environmental Protection Agency (EPA) Storm Water Management Model, SWMM5). The system was demonstrated using a simplified use case in which an actively controlled outlet of a detention pond is simulated. The control of the pond’s outlet influences the flood risk of a downstream node. For each step in the SWMM5 model, a series of policies for controlling the outlet are evaluated. The best policy is then selected using an evolutionary algorithm. The policies are evaluated against an objective function that penalizes primarily flooding and secondarily deviation of the detention pond level from a target level. Freely available Python libraries provide the key functionality for the MPC workflow: step-by-step running of the SWMM5 simulation, evolutionary algorithm implementation, and leveraging parallel computing. For perspective, the MPC results were compared to results from a rule-based approach and a scenario with no active control. The MPC approach produced a control policy that largely eliminated flooding (unlike the scenario with no active control) and maintained the detention pond’s water level closer to a target level (unlike the rule-based approach). A minimal sketch of this candidate-evaluation loop appears after this list.
  4. Model predictive control (MPC) provides a useful means for controlling systems with constraints, but suffers from the computational burden of repeatedly solving an optimization problem in real time. Offline (explicit) solutions for MPC attempt to alleviate real-time computational challenges using either multiparametric programming or machine learning. The multiparametric approaches are typically applied to linear or quadratic MPC problems, while learning-based approaches can be more flexible and are less memory-intensive. Existing learning-based approaches offer significant speedups, but the challenge becomes ensuring constraint satisfaction while maintaining good performance. In this paper, we provide a neural network parameterization of MPC policies that explicitly encodes the constraints of the problem. By exploring the interior of the MPC feasible set in an unsupervised learning paradigm, the neural network finds better policies faster than projection-based methods and exhibits substantially shorter solve times. We use the proposed policy to solve a robust MPC problem, and demonstrate the performance and computational gains on a standard test system.
  5. In this paper, we propose a leader-follower hierarchical strategy for two robots collaboratively transporting an object in a partially known environment with obstacles. Both robots sense the local surrounding environment and react to obstacles in their proximity. We consider no explicit communication, so the local environment information and the control actions are not shared between the robots. At any given time step, the leader solves a model predictive control (MPC) problem with its known set of obstacles and plans a feasible trajectory to complete the task. The follower estimates the inputs of the leader and uses a policy to assist the leader while reacting to obstacles in its proximity. The leader infers obstacles in the follower’s vicinity by using the difference between the predicted and the real-time estimated follower control action. A method to switch the leader-follower roles is used to improve the control performance in tight environments. The efficacy of our approach is demonstrated with detailed comparisons to two alternative strategies, where it achieves the highest success rate, while completing the task fastest.
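Relating to item 1 above, the following is a hypothetical illustration of how a discrete-time Control Barrier Function condition can screen candidate node expansions: a candidate next state is kept only if h(x_{k+1}) - h(x_k) >= -gamma * h(x_k), where h is the distance to the nearest obstacle minus a margin. The obstacle layout, margin, and gamma value are invented for illustration and are not taken from the cited work.

```python
import numpy as np

obstacles = np.array([[2.0, 1.0], [4.0, -0.5]])   # illustrative obstacle centers
safety_margin = 0.5
gamma = 0.4                                        # CBF decay rate in (0, 1]

def h(x):
    """Safety function: positive when the state is inside the safe set."""
    return np.min(np.linalg.norm(obstacles - x, axis=1)) - safety_margin

def cbf_safe(x_k, x_next):
    """Discrete-time CBF condition: h(x_{k+1}) - h(x_k) >= -gamma * h(x_k)."""
    return h(x_next) - h(x_k) >= -gamma * h(x_k)

# Screen candidate expansions from the current node
x_k = np.array([0.0, 0.0])
candidates = [np.array([0.5, 0.1]), np.array([1.8, 0.9])]
safe_candidates = [x for x in candidates if cbf_safe(x_k, x)]
```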
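In the spirit of item 2 above, this is an illustrative kinematic bicycle model for the ego-vehicle: the ego state evolves through simple deterministic kinematics while the rest of the environment would be predicted by a learned stochastic model. It is written with NumPy here; in practice it would be implemented in an autodiff framework (e.g., PyTorch) so the rollout stays differentiable. The wheelbase, time step, and initial state are illustrative values, not taken from the cited work.

```python
import numpy as np

def bicycle_step(state, action, wheelbase=2.7, dt=0.1):
    """One step of a kinematic bicycle model.

    state  = [x, y, heading, speed]
    action = [acceleration, steering angle]
    """
    x, y, heading, speed = state
    accel, steer = action
    x += speed * np.cos(heading) * dt
    y += speed * np.sin(heading) * dt
    heading += speed / wheelbase * np.tan(steer) * dt
    speed += accel * dt
    return np.array([x, y, heading, speed])

# Roll out a short action sequence from an initial ego state
state = np.array([0.0, 0.0, 0.0, 20.0])
for action in [(0.5, 0.01)] * 10:
    state = bicycle_step(state, np.array(action))
```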
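Relating to item 3 above, this is a hypothetical sketch of the receding-horizon loop described there: at each step, candidate outlet settings are scored against an objective that penalizes flooding first and deviation from a target pond level second, and only the first setting of the best candidate is applied. A toy mass-balance pond model stands in for the SWMM5 simulation and random search stands in for the evolutionary algorithm; all weights, dynamics, and numerical values are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

TARGET_LEVEL = 1.0            # desired pond depth (m)
MAX_LEVEL = 2.0               # depth above which downstream flooding is assumed
W_FLOOD, W_LEVEL = 1e3, 1.0   # primary vs. secondary penalty weights

def simulate(level, inflows, openings):
    """Toy pond mass balance: outflow grows with depth and valve opening."""
    cost = 0.0
    for q_in, a in zip(inflows, openings):
        q_out = a * 0.8 * np.sqrt(max(level, 0.0))
        level = max(level + q_in - q_out, 0.0)
        flood = max(level - MAX_LEVEL, 0.0)
        cost += W_FLOOD * flood + W_LEVEL * (level - TARGET_LEVEL) ** 2
    return cost

def mpc_step(level, forecast, horizon=6, n_candidates=500):
    """Score candidate opening sequences and return the first opening of the best one."""
    best_cost, best_opening = np.inf, 0.0
    for _ in range(n_candidates):
        openings = rng.uniform(0.0, 1.0, size=horizon)
        cost = simulate(level, forecast[:horizon], openings)
        if cost < best_cost:
            best_cost, best_opening = cost, openings[0]
    return best_opening

# Example: one control decision for a short, constant inflow forecast
opening = mpc_step(level=1.4, forecast=np.full(6, 0.3))
```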