This content will become publicly available on August 20, 2024
Wasserstein Distributionally Robust Linear-Quadratic Estimation under Martingale Constraints
We focus on robust estimation of the unobserved state of a discrete-time stochastic system with linear dynamics. A standard analysis of this estimation problem assumes a baseline innovation model; with Gaussian innovations we recover the Kalman filter. However, in many settings, there is insufficient or corrupted data to validate the baseline model. To cope with this problem, we minimize the worst-case mean-squared estimation error over adversarial models chosen within a Wasserstein neighborhood around the baseline. We also constrain the adversarial innovations to form a martingale difference sequence. The martingale constraint relaxes the i.i.d. assumptions that are often imposed on the baseline model. Moreover, we show that the martingale constraint guarantees that the adversarial dynamics remain adapted to the natural time-generated information. Adding the martingale constraint therefore makes it possible to improve upon over-conservative policies that also protect against unrealistic omniscient adversaries. We establish a strong duality result, which we use to develop an efficient subgradient method for computing the distributionally robust estimation policy. If the baseline innovations are Gaussian, we show that the worst-case adversary remains Gaussian. Our numerical experiments indicate that the martingale constraint may also add a layer of robustness to the choice of the adversarial power.
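As a minimal, hypothetical illustration (not the paper's method: a single scalar time step, no martingale constraint, and an adversary that only rescales the noise), one can see how a Wasserstein budget on the innovation distribution shifts the optimal linear gain below the nominal Kalman gain. For centered scalar Gaussians, the type-2 Wasserstein distance equals the difference of standard deviations, so a radius-rho ball lets the adversary raise the noise standard deviation from sigma_v to sigma_v + rho:

```python
# Hypothetical scalar setup: state x ~ N(0, sigma_x^2), observation y = x + v.
sigma_x, sigma_v, rho = 1.0, 0.5, 0.3  # rho: assumed Wasserstein-2 radius around the baseline noise

def worst_case_mse(k, sigma_x, sigma_v, rho):
    # For the linear estimate xhat = k * y, the worst noise in the W2 ball
    # (centered Gaussians: W2 = |difference of stds|) has std sigma_v + rho.
    return (1 - k) ** 2 * sigma_x ** 2 + k ** 2 * (sigma_v + rho) ** 2

k_nom = sigma_x**2 / (sigma_x**2 + sigma_v**2)           # nominal (Kalman-style) gain
k_rob = sigma_x**2 / (sigma_x**2 + (sigma_v + rho)**2)   # gain tuned to the inflated noise
```

The robust gain trusts the observation less than the nominal gain and achieves a lower worst-case error, mirroring the abstract's point that, over a Gaussian baseline, the worst-case adversary stays Gaussian.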
NSF-PAR ID: 10483198
Publisher / Repository: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
Journal Name: Proceedings of Machine Learning Research
Volume: 206
ISSN: 2640-3498
Page Range: 8629-8644
Sponsoring Org: National Science Foundation
More Like this

Scenario-based model predictive control (MPC) methods can mitigate the conservativeness inherent to open-loop robust MPC. Yet the scenarios are often generated offline from worst-case uncertainty descriptions obtained a priori, which can in turn limit the improvement in robust control performance. To this end, this paper presents a learning-based, adaptive-scenario-tree model predictive control approach for uncertain nonlinear systems with time-varying and/or hard-to-model dynamics. Bayesian neural networks (BNNs) are used to learn a state- and input-dependent description of model uncertainty, namely the mismatch between a nominal (physics-based or data-driven) model of a system and its actual dynamics. We first present a new approach for training robust BNNs (RBNNs) using probabilistic Lipschitz bounds to provide less conservative uncertainty quantification. We then present an approach to evaluate the credible intervals of RBNN predictions and to determine the number of samples required to estimate the credible intervals at a given credible level. The performance of RBNNs is evaluated against standard BNNs and Gaussian process (GP) regression as bases of comparison. The RBNN description of plant-model mismatch, with verified accurate credible intervals, is employed to generate adaptive scenarios online for scenario-based MPC (sMPC). On a cold atmospheric plasma system, the proposed sMPC approach with an adaptive scenario tree improves robust control performance relative to sMPC with a fixed, worst-case scenario tree and to adaptive-scenario-based MPC (asMPC) using GP regression. Furthermore, closed-loop simulation results illustrate that robust model uncertainty learning via RBNNs can increase the probability of constraint satisfaction of asMPC.
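As a small illustrative sketch (not the paper's RBNN training or verification procedure), the credible-interval step can be pictured with Monte Carlo draws from a stand-in predictive distribution: given posterior predictive samples at a query point, an equal-tailed empirical credible interval at a chosen level comes from percentiles, and its empirical coverage can be checked against that level:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for BNN posterior predictive samples of the plant-model mismatch
# at one (state, input) pair; a real RBNN would supply these samples.
samples = rng.normal(loc=0.2, scale=0.05, size=2000)

def credible_interval(samples, level=0.95):
    # Equal-tailed empirical credible interval from percentiles.
    lo, hi = np.percentile(samples, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

lo, hi = credible_interval(samples)
coverage = np.mean((samples >= lo) & (samples <= hi))  # fraction of samples inside
```

Increasing the sample count tightens the gap between the empirical coverage and the requested credible level, which is the kind of sample-size question the abstract's verification step addresses.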
Massoulie, Laurent (Ed.) Spreading processes on graphs arise in a host of application domains, from the study of online social networks to viral marketing to epidemiology. Various discrete-time probabilistic models for spreading processes have been proposed. These are used for downstream statistical estimation and prediction problems, often involving messages or other information that is transmitted along with infections caused by the process. These models generally capture cascade behavior at a small time scale but are insufficiently flexible to model cascades that exhibit intermittent behavior governed by multiple scales. We argue that the presence of such time scales that are unaccounted for by a cascade model can result in degraded performance on downstream statistical and time-sensitive optimization tasks. To address these issues, we formulate a model that incorporates multiple temporal scales of cascade behavior. This model is parameterized by a "clock", which encodes the times at which sessions of cascade activity start. These sessions are themselves governed by a small-scale cascade model, such as the discretized independent cascade (IC) model. Estimation of the multiscale cascade model parameters leads to the problem of "clock estimation" in terms of a natural distortion measure that we formulate. Our framework is inspired by the optimization problem posed by DiTursi et al., 2017, which can be seen as providing one possible estimator (a maximum-proxy-likelihood estimator) for the parameters of our generative model. We give a clock estimation algorithm, which we call FastClock, that runs in time linear in the size of its input and is provably statistically accurate for a broad range of model parameters when cascades are generated by any spreading process with well-concentrated session infection set sizes and when the underlying graph is at least in the semi-sparse regime. We exemplify our algorithm for the case where the small-scale model is the discretized independent cascade process, and we extend it substantially to processes whose infection set sizes satisfy a general martingale difference property. We further evaluate the performance of FastClock empirically against the state-of-the-art estimator from DiTursi et al., 2017. We find that, in a broad parameter range on synthetic networks and on a real network, our algorithm substantially outperforms that estimator in both running time and accuracy. In all cases, our algorithm's running time is asymptotically lower than that of the baseline.
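To make the notion of a clock concrete, here is a toy sketch (emphatically not the FastClock algorithm): if cascade events arrive time-sorted, a single linear pass that opens a new session whenever the inter-event gap exceeds a threshold recovers session start times, which is the kind of output a clock estimate encodes. The `gap` threshold is an assumption of this toy, not a parameter from the paper:

```python
def estimate_sessions(event_times, gap):
    """Toy clock estimate: scan time-sorted events once and start a new
    session whenever the gap since the previous event exceeds `gap`.
    Illustrative only; FastClock's actual estimator and guarantees differ."""
    starts, prev = [], None
    for t in event_times:
        if prev is None or t - prev > gap:
            starts.append(t)
        prev = t
    return starts

print(estimate_sessions([0, 1, 2, 10, 11, 30], gap=5))  # [0, 10, 30]
```

The single pass over sorted input is what makes linear running time plausible; the hard part, which the paper addresses, is doing this with statistical guarantees under a probabilistic cascade model.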

We consider optimal transport-based distributionally robust optimization (DRO) problems with locally strongly convex transport cost functions and affine decision rules. Under conventional convexity assumptions on the underlying loss function, we obtain structural results about the value function, the optimal policy, and the worst-case optimal transport adversarial model. These results expose a rich structure embedded in the DRO problem (e.g., strong convexity even if the non-DRO problem is not strongly convex, and a suitable scaling of the Lagrangian for the DRO constraint), which is crucial for the design of efficient algorithms. As a consequence of these results, one can develop efficient optimization procedures with the same sample and iteration complexity as a natural non-DRO benchmark algorithm, such as stochastic gradient descent.
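A concrete special case (chosen for illustration; the paper treats a much more general setting) is Wasserstein DRO of linear regression with squared loss and quadratic transport cost on the features. There the Lagrangian dual reduces to a one-dimensional minimization over the multiplier, and the robust objective is known to have the closed form (sqrt(MSE) + sqrt(delta) * ||theta||)^2, which the sketch below checks numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
theta = np.array([1.0, -0.5, 0.25])        # a fixed affine decision rule (illustrative)
y = X @ theta + 0.1 * rng.normal(size=n)
delta = 0.01                               # assumed squared Wasserstein-2 budget

def robust_objective_dual(theta, X, y, delta):
    # Dual of the sup over the W2 ball (features perturbed, labels fixed):
    #   min_{lam > ||theta||^2}  lam * delta + E[(theta^T x - y)^2] * lam / (lam - ||theta||^2)
    m = np.mean((X @ theta - y) ** 2)
    t2 = theta @ theta
    res = minimize_scalar(lambda lam: lam * delta + m * lam / (lam - t2),
                          bounds=(t2 + 1e-6, t2 + 1e6), method="bounded")
    return res.fun

dual = robust_objective_dual(theta, X, y, delta)
closed = (np.sqrt(np.mean((X @ theta - y) ** 2))
          + np.sqrt(delta) * np.linalg.norm(theta)) ** 2
```

The one-dimensional dual is the kind of "suitable Lagrangian scaling" structure that lets DRO variants inherit the complexity of the non-robust stochastic gradient benchmark.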

A classic reachability problem for safety of dynamic systems is to compute the set of initial states from which the state trajectory is guaranteed to stay inside a given constraint set over a given time horizon. In this paper, we leverage existing theory of reachability analysis and risk measures to devise a risk-sensitive reachability approach for safety of stochastic dynamic systems under non-adversarial disturbances over a finite time horizon. Specifically, we first introduce the notion of a risk-sensitive safe set as a set of initial states from which the risk of large constraint violations can be reduced to a required level via a control policy, where risk is quantified using the Conditional Value-at-Risk (CVaR) measure. Second, we show how the computation of a risk-sensitive safe set can be reduced to the solution of a Markov decision process (MDP) in which cost is assessed according to CVaR. Third, leveraging this reduction, we devise a tractable algorithm to approximate a risk-sensitive safe set and provide theoretical arguments about its correctness. Finally, we present a realistic example inspired by stormwater catchment design to demonstrate the utility of risk-sensitive reachability analysis. In particular, our approach allows a practitioner to tune the level of risk sensitivity from worst-case (typical of Hamilton-Jacobi reachability analysis) to risk-neutral (the case in stochastic reachability analysis).
