This paper introduces a method for identifying a maximal set of safe strategies from data for stochastic systems with unknown dynamics using barrier certificates. The first step learns the system dynamics via Gaussian process (GP) regression and obtains probabilistic error bounds on the estimate. We then develop an algorithm that constructs piecewise stochastic barrier functions from the learned GP model and finds a maximal permissible strategy set by sequentially pruning the worst controls until a maximal set is identified. The permissible strategies are guaranteed to maintain probabilistic safety for the true system. This is especially important for learned systems, because a rich strategy space enables additional data collection and complex behaviors while remaining safe. Case studies on linear and nonlinear systems demonstrate that the permissible strategy set grows as the size of the training dataset increases.
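As a minimal sketch of the first step only (learning the dynamics and extracting a probabilistic error bound), the example below fits a GP to hypothetical one-step transition data and reads a pointwise high-probability bound off the posterior standard deviation; the dynamics, kernel choice, and the 2-sigma level are assumptions for illustration, not the paper's barrier-function construction.

```python
# Minimal sketch: learn one-dimensional dynamics x_{k+1} = f(x_k, u_k) + noise
# with GP regression and extract a high-probability error bound on the estimate.
# The dynamics, kernel, and confidence level are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_dynamics(x, u):          # unknown to the learner
    return 0.9 * x + 0.5 * np.sin(u)

# Collect transition data (state, control) -> next state
X = rng.uniform(-2.0, 2.0, size=(200, 2))            # columns: [x_k, u_k]
y = true_dynamics(X[:, 0], X[:, 1]) + 0.05 * rng.standard_normal(200)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05**2),
    normalize_y=True,
)
gp.fit(X, y)

# Posterior mean and std give a probabilistic model of the dynamics;
# a 2-sigma band is roughly a 95% pointwise error bound on f(x, u).
X_query = np.array([[0.5, 1.0], [-1.0, 0.2]])
mean, std = gp.predict(X_query, return_std=True)
for (x, u), m, s in zip(X_query, mean, std):
    print(f"f({x:+.1f},{u:+.1f}) ~ {m:.3f} +/- {2 * s:.3f} (approx. 95% bound)")
```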
Blending physics with data using an efficient Gaussian process regression with soft inequality and monotonicity constraints
In this work, we propose a new Gaussian process (GP) regression framework that enforces physical constraints in a probabilistic manner, focusing on inequality and monotonicity constraints. The GP model is trained with the quantum-inspired Hamiltonian Monte Carlo (QHMC) algorithm, an efficient sampler for a broad class of distributions that allows a particle to have a random mass matrix with a probability distribution. Integrating QHMC into inequality- and monotonicity-constrained GP regression in this probabilistic sense improves the accuracy and reduces the variance of the resulting GP model, and the probabilistic treatment of the constraints also reduces computational expense and execution time. Further, we present an adaptive learning algorithm that guides the selection of constraint locations. The accuracy and efficiency of the method are demonstrated in estimating the hyperparameters of high-dimensional GP models under noisy conditions, reconstructing the sparsely observed state of a steady-state heat transport problem, and learning a conservative tracer distribution from sparse tracer concentration measurements.
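As a rough illustration of how inequality and monotonicity constraints can enter GP training as soft probabilistic penalties, the sketch below augments the usual negative log marginal likelihood with probit terms that reward a nonnegative and nondecreasing posterior mean at a set of virtual constraint points. The QHMC sampler of the paper is replaced by a simple grid search over the length scale, and the data, penalty form, and softness parameter nu are assumptions made for the example.

```python
# Minimal sketch of GP regression with *soft* nonnegativity and monotonicity
# constraints imposed at virtual locations via probit penalties on the posterior.
# The QHMC sampler of the paper is replaced here by a grid search over the
# length scale; data, penalty form, and nu are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=12)                 # sparse 1-D observations
y = X**2 + 0.02 * rng.standard_normal(12)          # nonnegative, increasing truth
Xc = np.linspace(0.0, 1.0, 25)                     # constraint (virtual) points
sigma_n, nu = 0.02, 1e-2                           # noise std, constraint softness

def rbf(a, b, ell):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def objective(ell):
    K = rbf(X, X, ell) + sigma_n**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # Standard GP negative log marginal likelihood
    nll = 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)
    # Posterior mean of f and f' at the constraint points (1-D RBF derivative)
    Kc = rbf(Xc, X, ell)
    mean_f = Kc @ alpha
    mean_df = (-(Xc[:, None] - X[None, :]) / ell**2 * Kc) @ alpha
    # Soft probabilistic penalties: P(f >= 0) and P(f' >= 0) should be large
    soft = norm.logcdf(mean_f / nu).sum() + norm.logcdf(mean_df / nu).sum()
    return nll - soft

ells = np.linspace(0.05, 1.0, 40)
best = ells[np.argmin([objective(e) for e in ells])]
print(f"length scale chosen under soft constraints: {best:.3f}")
```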
- Award ID(s): 2143915
- PAR ID: 10617033
- Publisher / Repository: Frontiers
- Date Published:
- Journal Name: Frontiers in Mechanical Engineering
- Volume: 10
- ISSN: 2297-3079
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Accurate and uncertainty-aware wind power forecasting is essential for reliable and cost-effective power system operations. This paper presents a novel probabilistic forecasting framework based on diffusion probabilistic models. We adopted a two-stage modeling strategy: a deterministic predictor first generates baseline forecasts, and a conditional diffusion model then learns the distribution of the residual errors. This two-stage decoupling improves learning efficiency and sharpens uncertainty estimation. We employed the elucidated diffusion model (EDM) to enable flexible noise control and enhance calibration, stability, and expressiveness. For the generative backbone, we introduced a time-series-specific diffusion Transformer (TimeDiT) that incorporates modular conditioning to separately fuse numerical weather prediction (NWP) inputs, noise, and temporal features. The proposed method was evaluated on the public dataset of ten wind farms from the Global Energy Forecasting Competition 2014 (GEFCom2014) and compared with two popular baselines, a distribution parameter regression model and a generative adversarial network (GAN)-based model. Results show that our method consistently achieves superior performance in both deterministic metrics and probabilistic accuracy, offering better forecast calibration and sharper distributions.
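A minimal sketch of the two-stage decoupling, assuming synthetic data and standard EDM hyperparameters: a least-squares baseline stands in for the deterministic predictor, and one EDM-style training-loss evaluation on the residuals stands in for the conditional diffusion stage (the denoiser is a placeholder, not the TimeDiT network).

```python
# Minimal sketch of the two-stage idea: a deterministic baseline forecast plus a
# diffusion model trained on the residuals, with EDM-style noise sampling and
# loss weighting. The denoiser below is a placeholder, not the TimeDiT network;
# data, features, and EDM hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
nwp = rng.normal(size=(1000, 3))                       # hypothetical NWP features
power = 0.6 * nwp[:, 0] - 0.2 * nwp[:, 1] + 0.1 * rng.standard_normal(1000)

# Stage 1: deterministic baseline (here, least squares on NWP features)
coef, *_ = np.linalg.lstsq(nwp, power, rcond=None)
baseline = nwp @ coef
residual = power - baseline                            # stage-2 training target

# Stage 2: EDM-style diffusion training step on the residuals
sigma_data = residual.std()
P_mean, P_std = -1.2, 1.2                              # EDM noise-level prior

def denoiser(x_noisy, sigma, cond):
    # Placeholder for the conditional network D_theta(x; sigma, cond).
    return np.zeros_like(x_noisy)

def edm_training_loss(x0, cond):
    sigma = np.exp(P_mean + P_std * rng.standard_normal(x0.shape[0]))
    noisy = x0 + sigma * rng.standard_normal(x0.shape[0])
    weight = (sigma**2 + sigma_data**2) / (sigma * sigma_data) ** 2
    return np.mean(weight * (denoiser(noisy, sigma, cond) - x0) ** 2)

print("baseline RMSE:", np.sqrt(np.mean(residual**2)).round(3))
print("one EDM-style loss evaluation:", edm_training_loss(residual, nwp).round(3))
```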
Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to "modest" problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP, the discrete Gaussian Markov random field, together with carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study.

Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method that is based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov random field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. Problems of significantly greater size than any other existing algorithm with similar guarantees can solve are solved as illustrations.
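To illustrate why a Gaussian Markov random field keeps the conditional (posterior) distribution cheap, the sketch below conditions a GMRF with a sparse precision matrix on a few simulated nodes using sparse solves; the chain-graph precision and observed values are assumptions, and this is not the rGMIA search logic itself.

```python
# Minimal sketch of why a Gaussian Markov random field is computationally handy:
# conditioning on simulated nodes only requires sparse solves with the precision
# matrix. This illustrates the core GMRF update, not the full rGMIA algorithm;
# the grid, precision construction, and observed values are assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 100                                               # nodes on a 1-D chain
# Sparse precision of a first-order GMRF (tridiagonal), diagonally dominant
Q = sp.diags([-1.0, 2.1, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
mu = np.zeros(n)

obs = np.array([10, 40, 75])                          # nodes evaluated by simulation
y_obs = np.array([1.0, -0.5, 0.8])                    # their estimated objective values
un = np.setdiff1d(np.arange(n), obs)

Q_uu = Q[un, :][:, un].tocsc()
Q_uo = Q[un, :][:, obs]

# Conditional mean: mu_U - Q_UU^{-1} Q_UO (y_O - mu_O); conditional precision is Q_UU
cond_mean = mu[un] - spsolve(Q_uu, Q_uo @ (y_obs - mu[obs]))
print("conditional mean at a few unobserved nodes:", cond_mean[:5].round(3))
```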
A novel algorithm for computing monotone order six piecewise polynomial interpolants is proposed. Algebraic constraints for enforcing monotonicity are provided that align with quintic monotonicity theory. The algorithm is implemented, tested, and applied to several sample problems to demonstrate the improved accuracy of monotone quintic spline interpolants compared to the previous state-of-the-art monotone cubic spline interpolants.
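For context, a minimal sketch of the monotone cubic baseline that the proposed order-six construction is compared against, using SciPy's shape-preserving PCHIP interpolant on assumed monotone data; the quintic algorithm itself is not reproduced here.

```python
# Minimal sketch of the baseline this work improves on: a monotone *cubic*
# piecewise polynomial (PCHIP) interpolant of monotone data. The proposed
# order-six (quintic) construction is not reproduced here; data are assumptions.
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.5, 2.0, 2.05])          # monotone increasing samples

cubic = PchipInterpolator(x, y)                    # shape-preserving cubic spline
xs = np.linspace(0.0, 4.0, 401)
assert np.all(np.diff(cubic(xs)) >= -1e-12)        # interpolant stays monotone
print("value at 2.5:", float(cubic(2.5)))
```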
Active learning with generalized sliced inverse regression for high-dimensional reliability analysis

It is computationally expensive to predict reliability using physical models at the design stage if many random input variables exist. This work introduces a dimension reduction technique based on generalized sliced inverse regression (GSIR) to mitigate the curse of dimensionality. The proposed high-dimensional reliability method enables active learning to integrate GSIR, Gaussian process (GP) modeling, and importance sampling (IS), resulting in an accurate reliability prediction at a reduced computational cost. The new method consists of three core steps: 1) identification of the importance sampling region, 2) dimension reduction by GSIR to produce a sufficient predictor, and 3) construction of a GP model for the true response with respect to the sufficient predictor in the reduced-dimension space. High accuracy and efficiency are achieved with active learning that iteratively executes the above three steps, adding new training points one by one in the region with a high chance of failure.
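A minimal skeleton of the three steps under simplifying assumptions: plain sliced inverse regression stands in for GSIR, plain Monte Carlo sampling stands in for the importance-sampling region, and a U-function-style acquisition picks the next training point; the limit state and all parameters are hypothetical.

```python
# Minimal skeleton of the three steps: (1) sample a candidate pool, (2) reduce
# dimension with a sliced-inverse-regression direction (plain SIR here, standing
# in for GSIR), (3) fit a GP on the sufficient predictor and refine it by adding
# the most ambiguous candidate. The limit state, pool size, and acquisition rule
# are illustrative assumptions, not the paper's exact settings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
dim = 20

def limit_state(x):                      # failure when g(x) < 0 (hypothetical)
    return 3.0 - x @ np.linspace(0.3, 0.7, dim)

# Step 1: candidate pool from the input distribution (plain MC as a stand-in for IS)
pool = rng.standard_normal((5000, dim))

# Initial design and responses
X = rng.standard_normal((30, dim))
y = limit_state(X)

def sir_direction(X, y, n_slices=5):
    # Plain SIR: top eigenvector of the covariance of slice means of X, given sorted y
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    means = np.array([X[s].mean(axis=0) for s in slices])
    w, v = np.linalg.eigh(np.cov(means.T))
    return v[:, -1]

for it in range(10):
    beta = sir_direction(X, y)           # Step 2: sufficient predictor t = x @ beta
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit((X @ beta)[:, None], y)       # Step 3: GP in the reduced space
    mean, std = gp.predict((pool @ beta)[:, None], return_std=True)
    # Active learning: evaluate the candidate whose sign of g is most uncertain
    pick = np.argmin(np.abs(mean) / np.maximum(std, 1e-9))
    X = np.vstack([X, pool[pick]])
    y = np.append(y, limit_state(pool[pick]))

pf = np.mean(gp.predict((pool @ beta)[:, None]) < 0.0)
print(f"estimated failure probability: {pf:.4f}")
```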