The formulation of Bayesian inverse problems involves choosing prior distributions; choices that seem equally reasonable may lead to significantly different conclusions. We develop a computational approach to understand the impact of the hyperparameters defining the prior on the posterior statistics of the quantities of interest. Our approach relies on global sensitivity analysis (GSA) of Bayesian inverse problems with respect to the prior hyperparameters. This, however, is a challenging problem: a naive double-loop sampling approach would require running a prohibitive number of Markov chain Monte Carlo (MCMC) sampling procedures. The present work takes a foundational step in making such a sensitivity analysis practical by combining efficient surrogate models and a tailored importance sampling approach. In particular, we can perform accurate GSA of posterior statistics of quantities of interest with respect to prior hyperparameters without the need to repeat MCMC runs. We demonstrate the effectiveness of the approach on a simple Bayesian linear inverse problem and a nonlinear inverse problem governed by an epidemiological model.
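Though the paper's surrogate-plus-importance-sampling machinery is more involved, the following minimal Python sketch illustrates the basic reuse of MCMC output such an approach builds on: self-normalized importance sampling that reweights posterior draws obtained under reference prior hyperparameters to approximate posterior statistics under perturbed hyperparameters. The Gaussian prior, the hyperparameter names, and the quantity of interest are illustrative assumptions, not the paper's model.

```python
import numpy as np

def log_prior(theta, hyper):
    # Assumed Gaussian prior N(mu, sigma^2); hyper = (mu, sigma) are illustrative hyperparameters.
    mu, sigma = hyper
    return -0.5 * ((theta - mu) / sigma) ** 2 - np.log(sigma)

def reweighted_posterior_mean(samples, qoi, hyper_ref, hyper_new):
    """Reweight posterior draws obtained under `hyper_ref` to approximate the
    posterior mean of `qoi` under `hyper_new` (prior-ratio importance weights)."""
    logw = log_prior(samples, hyper_new) - log_prior(samples, hyper_ref)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                   # self-normalized weights
    return np.sum(w * qoi(samples))

# Usage: sweep hyperparameter values and feed the resulting posterior statistics
# into a GSA (e.g., Sobol-index) estimator without rerunning MCMC.
rng = np.random.default_rng(0)
samples = rng.normal(0.3, 0.2, size=5000)          # stand-in for MCMC posterior draws
qoi = lambda theta: theta ** 2                     # illustrative quantity of interest
means = [reweighted_posterior_mean(samples, qoi, (0.0, 1.0), (mu, 1.0))
         for mu in np.linspace(-0.5, 0.5, 11)]
```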
A Novel Objective Reduction Algorithm for Nonlinear Many-Objective Optimization Problems
Sustainability is increasingly recognized as a critical global issue. Multi-objective optimization is an important approach for sustainable decision-making, but problems with four or more objectives are hard to interpret because of their high dimensionality. In our group's previous work, an algorithm capable of systematically reducing objective dimensionality for (mixed-integer) linear problems was developed. In this work, we extend the algorithm to tackle nonlinear many-objective problems. An outer-approximation-like method is employed to systematically replace nonlinear objectives and constraints. After converting the original nonlinear problem to a linear one, the previous linear algorithm can be applied to reduce the dimensionality. The benchmark DTLZ5(I, M) problem set is used to evaluate the effectiveness of this approach. Our algorithm demonstrates the ability to identify appropriate objective groupings on benchmark problems with up to 20 objectives when the algorithm hyperparameters are appropriately chosen. We also conduct extensive testing on the hyperparameters to determine their optimal settings. Additionally, we analyze the computation time required for different components of the algorithm, ensuring efficiency and practical applicability.
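To make the linearization step concrete, here is a minimal Python sketch of outer-approximation-style cuts: each nonlinear objective is replaced by first-order supporting hyperplanes at chosen linearization points, producing the linear pieces that a linear objective-reduction routine could then work with. The example objective, gradient, and points are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def outer_approximation_cuts(f, grad_f, points):
    """Return (a_i, b_i) pairs so each cut a_i @ x + b_i is the first-order
    Taylor expansion of f at points[i]; for convex f these are under-estimators."""
    cuts = []
    for x0 in points:
        g = grad_f(x0)
        cuts.append((g, f(x0) - g @ x0))
    return cuts

# Illustrative nonlinear objective f(x) = x1^2 + x2^2 and its gradient.
f = lambda x: x[0] ** 2 + x[1] ** 2
grad_f = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])
points = [np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([-0.5, 1.0])]
cuts = outer_approximation_cuts(f, grad_f, points)
# The pointwise maximum over these cuts is a piecewise-linear surrogate of f
# that a linear (mixed-integer) objective-reduction algorithm can consume.
```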
- Award ID(s): 2237284
- PAR ID: 10656308
- Publisher / Repository: Systems & Control Transactions
- Date Published:
- Page Range / eLocation ID: 1456 to 1461
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Parallel-in-Time Probabilistic Solutions for Time-Dependent Nonlinear Partial Differential Equations: We present an efficient probabilistic solver for time-dependent nonlinear partial differential equations. We formulate our method as the maximum a posteriori solver for a constrained risk problem on a reproducing kernel Hilbert space induced by a spatio-temporal Gaussian process prior. We show that for a suitable choice of temporal kernels, the risk objective can be minimized efficiently via a Gauss–Newton algorithm corresponding to an iterated extended Kalman smoother (IEKS). Furthermore, by leveraging a parallel-in-time implementation of IEKS, our algorithm can take advantage of massively parallel graphics processing units to achieve logarithmic instead of linear scaling with time. We validate our method numerically on popular benchmark problems. (A sketch of the sequential IEKS building block appears after this list.)
- Linear discriminant analysis (LDA) is widely used for dimensionality reduction in supervised learning settings. The traditional LDA objective aims to minimize a ratio of squared Euclidean distances, which may not perform optimally on noisy data sets. Multiple robust LDA objectives have been proposed to address this problem, but their implementations have two major limitations. One is that their mean calculations use the squared l2-norm distance to center the data, which is not valid when the objective does not use the Euclidean distance. The second is that there is no generalized optimization algorithm to solve different robust LDA objectives. In addition, most existing algorithms can only guarantee that the solution is locally optimal rather than globally optimal. In this paper, we review multiple robust loss functions and propose a new and generalized robust objective for LDA. Moreover, to better remove the mean value within the data, our objective centers the data in an optimal way through learning. As one important algorithmic contribution, we derive an efficient iterative algorithm to optimize the resulting non-smooth and non-convex objective function. We theoretically prove that our solution algorithm guarantees that both the objective and the solution sequences converge to globally optimal solutions at a sub-linear convergence rate. The experimental results demonstrate the effectiveness of our new method, achieving significant improvements compared to the other competing methods. (A sketch of the classical LDA baseline appears after this list.)
- The expanding role of reinforcement learning (RL) in safety-critical system design has promoted ω-automata as a way to express learning requirements (often non-Markovian) with greater ease of expression and interpretation than scalar reward signals. However, real-world sequential decision-making situations often involve multiple, potentially conflicting, objectives. Two dominant approaches to express relative preferences over multiple objectives are: (1) weighted preference, where the decision maker provides scalar weights for the various objectives, and (2) lexicographic preference, where the decision maker provides an order over the objectives such that any amount of satisfaction of a higher-ordered objective is preferable to any amount of a lower-ordered one. In this article, we study and develop RL algorithms to compute optimal strategies in Markov decision processes against multiple ω-regular objectives under weighted and lexicographic preferences. We provide a translation from multiple ω-regular objectives to a scalar reward signal that is both faithful (maximising reward means maximising the probability of achieving the objectives under the corresponding preference) and effective (RL quickly converges to optimal strategies). We have implemented the translations in a formal reinforcement learning tool, Mungojerrie, and we present an experimental evaluation of our technique on benchmark learning problems. (A sketch of the two preference models appears after this list.)
- This work addresses a Multi-Objective Shortest Path Problem (MO-SPP) on a graph, where the goal is to find a set of Pareto-optimal solutions from a start node to a destination in the graph. A family of approaches based on MOA* has been developed to solve the MO-SPP in the literature. Typically, these approaches maintain a “frontier” set at each node during the search process to keep track of the non-dominated partial paths reaching that node. This search process becomes computationally expensive as the number of objectives increases, because the number of Pareto-optimal solutions grows large. In this work, we introduce a new method to efficiently maintain these frontiers for multiple objectives by incrementally constructing balanced binary search trees within the MOA* search framework. We first show that our approach correctly finds the Pareto-optimal front, and then provide extensive simulation results for problems with three, four, and five objectives to show that our method runs faster than existing techniques by up to an order of magnitude. (A sketch of the frontier bookkeeping appears after this list.)
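For the parallel-in-time probabilistic solver above, the sketch below shows only the sequential building block that work parallelizes: an iterated extended Kalman smoother (IEKS) on a scalar nonlinear state-space model, where each filter/smoother sweep plays the role of one Gauss–Newton step. The model, noise levels, and initialization are illustrative assumptions; the parallel-in-time scan itself is not reproduced.

```python
import numpy as np

f  = lambda x: x + 0.1 * np.sin(x)     # assumed state transition
fp = lambda x: 1.0 + 0.1 * np.cos(x)   # its derivative
h  = lambda x: x ** 2                  # assumed measurement model
hp = lambda x: 2.0 * x                 # its derivative
Q, R = 0.01, 0.1                       # process / measurement noise variances

def ieks(y, x0=0.5, P0=1.0, n_iter=10):
    """Iterated extended Kalman smoother: re-linearize around the current
    smoothed trajectory and repeat a filter/smoother sweep."""
    T = len(y)
    lin = np.full(T, float(x0))                    # current linearization trajectory
    for _ in range(n_iter):
        m, P = x0, P0
        fm, fP, pm, pP = [], [], [], []            # filtered / predicted moments
        for k in range(T):
            if k > 0:                              # predict, linearized at lin[k-1]
                F = fp(lin[k - 1])
                m = f(lin[k - 1]) + F * (m - lin[k - 1])
                P = F * P * F + Q
            pm.append(m); pP.append(P)
            H = hp(lin[k])                         # update, linearized at lin[k]
            S = H * P * H + R
            K = P * H / S
            m = m + K * (y[k] - (h(lin[k]) + H * (m - lin[k])))
            P = (1.0 - K * H) * P
            fm.append(m); fP.append(P)
        xs = np.array(fm)                          # Rauch-Tung-Striebel smoothing
        for k in range(T - 2, -1, -1):
            G = fP[k] * fp(lin[k]) / pP[k + 1]
            xs[k] = fm[k] + G * (xs[k + 1] - pm[k + 1])
        lin = xs                                   # re-linearize and iterate
    return lin

# Usage on synthetic data: noisy observations of an assumed trajectory.
y = h(np.linspace(0.2, 1.0, 50)) + 0.1 * np.random.default_rng(1).normal(size=50)
trajectory = ieks(y)
```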
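For the robust LDA entry, the sketch below reproduces only the classical LDA baseline that the abstract contrasts against: projection directions from the generalized eigenproblem relating between-class and within-class scatter. The robust objective and its optimization algorithm are not shown, and the regularization and interface are assumptions.

```python
import numpy as np

def lda_directions(X, y, n_components=1, reg=1e-6):
    """Classical LDA: solve S_b w = lambda S_w w and return the top directions.
    X is an (n, d) array of samples; y is an (n,) array of class labels."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)              # within-class scatter
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += Xc.shape[0] * (diff @ diff.T)        # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]
```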
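For the ω-regular RL entry, this sketch spells out the two preference models described there, weighted and lexicographic comparison of objective value vectors; the faithful-and-effective reward translation and the Mungojerrie tool are not reproduced.

```python
from typing import Sequence

def weighted_value(values: Sequence[float], weights: Sequence[float]) -> float:
    """Scalarize objective values with decision-maker weights."""
    return sum(w * v for w, v in zip(weights, values))

def lexicographically_better(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if value vector `a` is preferred to `b` under a lexicographic order:
    any gain in a higher-priority objective outweighs lower-priority ones."""
    for x, y in zip(a, b):
        if x != y:
            return x > y
    return False

# Usage: weighted_value([0.9, 0.4], [0.7, 0.3]); lexicographically_better([0.9, 0.1], [0.8, 1.0])
```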
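For the multi-objective shortest-path entry, this sketch shows the frontier bookkeeping being described: inserting a cost vector into a node's non-dominated set and pruning entries it dominates. A plain list stands in for the paper's incrementally built balanced binary search trees.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def try_insert(frontier, cost):
    """Keep `cost` (a tuple of objective values) only if it is non-dominated;
    prune frontier entries that `cost` dominates. Returns True if kept."""
    if any(dominates(f, cost) or f == cost for f in frontier):
        return False
    frontier[:] = [f for f in frontier if not dominates(cost, f)]
    frontier.append(cost)
    return True

# Usage: frontier = []; try_insert(frontier, (3, 5)); try_insert(frontier, (4, 4))
```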
