In this paper, we focus on the problem of shrinking-horizon Model Predictive Control (MPC) in uncertain dynamic environments. We consider controlling a deterministic autonomous system that interacts with uncontrollable stochastic agents during its mission. Employing tools from conformal prediction, existing works derive high-confidence prediction regions for the unknown agent trajectories and integrate these regions into the design of suitable safety constraints for MPC. Despite guaranteeing probabilistic safety of the closed-loop trajectories, these constraints do not ensure feasibility of the respective MPC schemes for the entire duration of the mission. We propose a shrinking-horizon MPC that guarantees recursive feasibility via a gradual relaxation of the safety constraints as new prediction regions become available online. This relaxation enforces the safety constraints to hold over the least restrictive prediction region from the set of all available prediction regions. In a comparative case study with the state of the art, we empirically show that our approach results in tighter prediction regions and verify the recursive feasibility of our MPC scheme.
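The abstract names two ingredients: conformal prediction regions for agent trajectories and a relaxation to the least restrictive available region. The sketch below is a rough, hedged illustration of both in plain NumPy (all function and variable names are hypothetical and not taken from the paper): it computes a split-conformal radius from calibration prediction errors and picks the smallest radius seen so far for use in a distance-based safety constraint.

```python
import numpy as np

def conformal_radius(calib_errors, alpha=0.05):
    """Split-conformal quantile of nonconformity scores.

    calib_errors: 1-D array of prediction errors ||y_i - yhat_i|| on a
    held-out calibration set; alpha: miscoverage level (1 - alpha coverage).
    """
    n = len(calib_errors)
    # Finite-sample-corrected empirical quantile used in split conformal prediction.
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(calib_errors, min(q, 1.0))

def least_restrictive_radius(radii_so_far):
    """Relaxed safety constraint: enforce it only over the smallest
    (least restrictive) prediction region available so far."""
    return min(radii_so_far)

# Hypothetical usage: a new region radius becomes available online at each
# step; an MPC safety constraint of the form ||x - yhat_agent|| >= d_safe + r
# would use the least restrictive r among all radii computed so far.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    radii = []
    for t in range(5):
        calib_errors = np.abs(rng.normal(0.0, 0.5, size=200))  # placeholder errors
        radii.append(conformal_radius(calib_errors, alpha=0.05))
        r = least_restrictive_radius(radii)
        print(f"t={t}: new radius {radii[-1]:.3f}, constraint uses r={r:.3f}")
```

This is only a sketch of the constraint-relaxation idea under stated assumptions; the paper's actual MPC formulation and prediction-region construction are not reproduced here.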
The wide availability of data, coupled with computational advances in artificial intelligence and machine learning, promises to enable many future technologies such as autonomous driving. While there have been a variety of successful demonstrations of these technologies, critical system failures have repeatedly been reported. Even if rare, such system failures pose a serious barrier to adoption without a rigorous risk assessment. This article presents a framework for the systematic and rigorous risk verification of systems. We consider a wide range of system specifications formulated in signal temporal logic (STL) and model the system as a stochastic process, permitting discrete-time and continuous-time stochastic processes. We then define the STL robustness risk as the risk of lacking robustness against failure. This definition is motivated by the fact that system failures are often caused by a lack of robustness to modeling errors, system disturbances, and distribution shifts in the underlying data-generating process. Within this definition, we permit general classes of risk measures and focus on tail risk measures such as the value-at-risk and the conditional value-at-risk. While the STL robustness risk is in general hard to compute, we propose the approximate STL robustness risk as a more tractable notion that upper bounds the STL robustness risk. We show how the approximate STL robustness risk can be accurately estimated from system trajectory data. For discrete-time stochastic processes, we show under which conditions the approximate STL robustness risk can even be computed exactly. We illustrate our verification algorithm in the autonomous driving simulator CARLA and show how a least risky controller can be selected among four neural network lane-keeping controllers for five meaningful system specifications.
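As a minimal, hedged illustration of the tail risk measures the abstract refers to (again plain NumPy, with hypothetical names and generic Monte Carlo estimators rather than the paper's method): given STL robustness values rho(phi, x_i) evaluated on sampled trajectories, the snippet estimates the value-at-risk and conditional value-at-risk of the "lack of robustness" -rho.

```python
import numpy as np

def value_at_risk(losses, beta=0.95):
    """Empirical value-at-risk: the beta-quantile of the loss distribution."""
    return np.quantile(losses, beta)

def conditional_value_at_risk(losses, beta=0.95):
    """Empirical conditional value-at-risk (expected shortfall):
    mean loss in the worst (1 - beta) tail."""
    var = value_at_risk(losses, beta)
    tail = losses[losses >= var]
    return tail.mean() if tail.size else var

# Hypothetical usage: `robustness` holds STL robustness values rho(phi, x_i)
# computed on N simulated closed-loop trajectories (e.g. by an STL monitor).
# A robustness risk is then a tail risk measure applied to the lack of
# robustness, -rho.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    robustness = rng.normal(0.4, 0.3, size=1000)  # placeholder monitor outputs
    losses = -robustness                          # lack of robustness against failure
    print("VaR_0.95 :", value_at_risk(losses, 0.95))
    print("CVaR_0.95:", conditional_value_at_risk(losses, 0.95))
```

These empirical estimators only sketch how a tail risk of the robustness distribution could be read off trajectory data; the paper's approximate STL robustness risk and its exact-computation conditions are not reproduced here.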