Multi-fidelity Bayesian optimization (MFBO) is a powerful approach that utilizes low-fidelity, cost-effective sources to expedite the exploration and exploitation of a high-fidelity objective function. Existing MFBO methods with theoretical foundations either lack justification for performance improvements over single-fidelity optimization or rely on strong assumptions about the relationships between fidelity sources to construct surrogate models and direct queries to low-fidelity sources. To mitigate the dependency on cross-fidelity assumptions while maintaining the advantages of low-fidelity queries, we introduce a random sampling and partition-based MFBO framework with deep kernel learning. This framework is robust to cross-fidelity model misspecification and explicitly illustrates the benefits of low-fidelity queries. Our results demonstrate that the proposed algorithm effectively manages complex cross-fidelity relationships and efficiently optimizes the target fidelity function.
Analyzing Multi-Fidelity Simulation Optimization with Level Set Approximation Using Probabilistic Branch and Bound
Simulated systems are often described with a variety of models of different complexity. Making use of these models, algorithms can rely on low-complexity, “low-fidelity” models or meta-models to guide sampling for purposes of optimization, improving the probability of generating good solutions with a small number of observations. We propose an optimization algorithm that guides the search for solutions on a high-fidelity model through the approximation of a level set from a low-fidelity model. Using the Probabilistic Branch and Bound algorithm to approximate a level set for the low-fidelity model, we are able to efficiently locate solutions inside a target quantile and therefore reduce the number of high-fidelity evaluations needed in searches. The paper provides an algorithm and analysis showing the increased probability of sampling high-quality solutions within a low-fidelity level set. We include numerical examples that demonstrate the effectiveness of the multi-fidelity level set approximation method at locating solutions.
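A minimal Python sketch of the two-phase idea under simplifying assumptions: plain random sampling with a quantile threshold stands in for the Probabilistic Branch and Bound level set approximation, and `f_low`, `f_high`, and all constants are illustrative placeholders rather than the paper's test problems.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_high(x):
    # Expensive high-fidelity objective (placeholder).
    return np.sum((x - 0.7) ** 2, axis=-1)

def f_low(x):
    # Cheap, biased low-fidelity approximation (placeholder).
    return np.sum((x - 0.6) ** 2, axis=-1) + 0.05 * np.sin(10 * x).sum(axis=-1)

dim, n_low, n_high, delta = 2, 5000, 20, 0.10

# Cheap phase: estimate the delta-quantile level set of the low-fidelity model.
X = rng.uniform(0.0, 1.0, size=(n_low, dim))
y_low = f_low(X)
threshold = np.quantile(y_low, delta)        # level defining the target quantile
level_set = X[y_low <= threshold]            # retained "promising" region

# Expensive phase: spend the high-fidelity budget only inside the level set.
candidates = level_set[rng.choice(len(level_set), size=n_high, replace=False)]
best = candidates[np.argmin(f_high(candidates))]
print("best high-fidelity point found:", best)
```

Because the low-fidelity threshold concentrates sampling in a region that is more likely to contain good high-fidelity solutions, the expensive evaluations are not spent uniformly over the whole domain.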
- Award ID(s):
- 1632793
- PAR ID:
- 10039380
- Date Published:
- Journal Name:
- Proceedings of the 2017 Winter Simulation Conference
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
We discuss the applicability of Principal Component Analysis and Particle Swarm Optimization in protein tertiary structure prediction. The proposed algorithm is based on establishing a low-dimensional space where the sampling (and optimization) is carried out via a Particle Swarm Optimizer (PSO). The reduced space is found via Principal Component Analysis (PCA) performed on a set of previously found low-energy protein models. A high-frequency term is added into this expansion by projecting the best decoy onto the PCA basis set and calculating the residual model. Our results show that PSO improves the energy of the best decoy used in the PCA when an adequate number of PCA terms is considered.
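A minimal Python sketch of optimizing in a PCA-reduced space with a plain global-best PSO, assuming a synthetic set of "decoys" and a placeholder energy function; it is not the authors' protein-modeling pipeline, and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_decoys, n_coords, n_pc = 50, 30, 5

decoys = rng.normal(size=(n_decoys, n_coords))      # stand-in low-energy models
mean = decoys.mean(axis=0)
_, _, Vt = np.linalg.svd(decoys - mean, full_matrices=False)
basis = Vt[:n_pc]                                   # principal directions

def energy(model):
    # Placeholder scoring function for a full-coordinate model.
    return np.sum((model - 0.5) ** 2)

def decode(z):
    # Map PCA coefficients back to a full-coordinate model.
    return mean + z @ basis

# Plain global-best PSO over the n_pc-dimensional coefficient space.
n_particles, n_iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
z = rng.normal(size=(n_particles, n_pc))
v = np.zeros_like(z)
pbest = z.copy()
pbest_val = np.array([energy(decode(p)) for p in z])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iters):
    r1, r2 = rng.random(z.shape), rng.random(z.shape)
    v = w * v + c1 * r1 * (pbest - z) + c2 * r2 * (gbest - z)
    z = z + v
    vals = np.array([energy(decode(p)) for p in z])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = z[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best reduced-space energy:", pbest_val.min())
```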
-
Finding an optimal design for a structural system subject to seismic actions that minimizes failure probability, repair costs, and injuries to occupants significantly contributes to the resilience of buildings in earthquake regions. This research presents a comprehensive framework for the performance-based design optimization of steel structures, incorporating the Performance-Based Earthquake Engineering (PBEE) methodology delineated in FEMA P-58 [1]. A selected set of ground motions, consistent with the seismic hazard intensity of interest, and a nonlinear finite element model, established using OpenSees, enable the assessment of the system's dynamic response. To address the computational cost of evaluating the probability of failure of the system at each optimization iteration when the PBEE methodology is used to assess performance, this study introduces metamodeling techniques as a substitute for the original high-fidelity nonlinear finite element model. In particular, Kriging is employed to approximate both the median and standard deviation of the Engineering Demand Parameters (EDPs) over the design domain. The parameters of the Kriging metamodels are derived from nonlinear dynamic analyses performed using the original high-fidelity model and an optimal sampling plan obtained through Latin Hypercube sampling. Under the assumption of a lognormal distribution, the metamodel is then used to generate the large number of simulated demand sets required by the Monte Carlo procedure adopted by FEMA P-58 to calculate the distribution of probable losses for any given value of the design variable vector. Additionally, the median and standard deviation of the fragility function modeling collapse are also approximated by a Kriging metamodel, whose parameters are derived from an Incremental Dynamic Analysis (IDA) for any given value of the design variable vector. The scheme is illustrated in a full-scale case study consisting of the performance-based optimization of the buckling-restrained braces of a steel seismic force-resisting system in terms of expected losses and construction costs. The study demonstrates that the proposed risk-based optimization scheme effectively balances construction costs with expected financial losses from earthquakes, thus enhancing the seismic performance of the system.
[1] Applied Technology Council, & National Earthquake Hazards Reduction Program (US). (2012). Seismic performance assessment of buildings. Federal Emergency Management Agency.
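A minimal sketch of the Kriging (Gaussian-process) metamodeling step, assuming scikit-learn and SciPy in place of the study's OpenSees/FEMA P-58 toolchain; `run_nltha`, the two design variables, and the bounds are hypothetical stand-ins for the expensive nonlinear time-history analyses.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

def run_nltha(x):
    # Placeholder for the median peak EDP returned by the expensive model.
    return 0.01 + 0.02 * np.exp(-3.0 * x[:, 0]) + 0.005 * x[:, 1] ** 2

# Latin Hypercube sampling plan over two illustrative design variables.
sampler = qmc.LatinHypercube(d=2, seed=2)
X = qmc.scale(sampler.random(30), l_bounds=[0.5, 0.5], u_bounds=[2.0, 2.0])
y = np.log(run_nltha(X))              # lognormal EDP -> model the log-median

# Fit the Kriging surrogate to the log-median response over the design domain.
kernel = ConstantKernel(1.0) * Matern(length_scale=[0.5, 0.5], nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The surrogate now replaces the finite element model inside the optimization loop.
x_new = np.array([[1.2, 0.8]])
log_median, log_std = gp.predict(x_new, return_std=True)
print("predicted median EDP:", np.exp(log_median[0]), "log-std:", log_std[0])
```

Once fitted, the surrogate can be queried thousands of times per optimization iteration at negligible cost compared with rerunning the nonlinear dynamic analyses.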
-
Motivation: RNA design is the search for a sequence or set of sequences that will fold into a desired structure, also known as the inverse problem of RNA folding. However, the sequences designed by existing algorithms often suffer from low ensemble stability, a problem that worsens for long sequences. Additionally, for many methods only a small number of sequences satisfying the MFE criterion can be found in each run of design. These drawbacks limit their use cases. Results: We propose an innovative optimization paradigm, SAMFEO, which optimizes ensemble objectives (equilibrium probability or ensemble defect) by iterative search and yields a very large number of successfully designed RNA sequences as byproducts. We develop a search method which leverages structure-level and ensemble-level information at different stages of the optimization: initialization, sampling, mutation, and updating. Our work, while less complicated than others, is the first algorithm able to design thousands of RNA sequences for the puzzles from the Eterna100 benchmark. In addition, our algorithm solves the most Eterna100 puzzles among all the general optimization-based methods in our study. The only baseline solving more puzzles than our work depends on handcrafted heuristics designed for a specific folding model. Surprisingly, our approach shows superiority in designing long sequences for structures adapted from the database of 16S Ribosomal RNAs. Availability and implementation: Our source code and the data used in this article are available at https://github.com/shanry/SAMFEO.
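A toy Python sketch of the iterative-search paradigm described above (initialize, mutate, accept improvements), not SAMFEO itself; the `ensemble_defect` stand-in only scores base-pair complementarity, whereas a real ensemble objective would be computed from partition-function base-pair probabilities (e.g. via an RNA folding package).

```python
import random

random.seed(3)
target = "((((....))))"                 # target secondary structure (dot-bracket)
bases = "ACGU"

def ensemble_defect(seq, struct):
    # Placeholder objective: count paired positions that are not complementary.
    stack, pairs = [], []
    for i, c in enumerate(struct):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs.append((stack.pop(), i))
    ok = {"GC", "CG", "AU", "UA", "GU", "UG"}
    return sum(seq[i] + seq[j] not in ok for i, j in pairs)

# Initialization: a random sequence of the same length as the target structure.
seq = "".join(random.choice(bases) for _ in target)
best = ensemble_defect(seq, target)

# Sampling + mutation + updating loop: keep non-worsening single-base mutations.
for _ in range(2000):
    i = random.randrange(len(seq))
    cand = seq[:i] + random.choice(bases) + seq[i + 1:]
    val = ensemble_defect(cand, target)
    if val <= best:
        seq, best = cand, val

print(seq, "objective:", best)
```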
-
Uncertainty is an omnipresent issue in real-world optimization problems. This paper studies a fundamental problem concerning uncertainty, known as the β-robust scheduling problem. Given a set of identical machines and a set of jobs whose processing times follow a normal distribution, the goal is to assign jobs to machines such that the probability that all the jobs are completed by a given common due date is maximized. We give the first systematic study of the complexity and algorithms for this problem. A strong negative result is shown by ruling out the existence of any polynomial-time algorithm with a constant approximation ratio for the general problem unless P=NP. On the positive side, we provide the first FPT-AS (fixed parameter tractable approximation scheme) parameterized by the number of different kinds of jobs, which is a common parameter in scheduling problems. It returns a solution arbitrarily close to the optimal solution, provided that the job processing times follow a few different types of distributions. We further complement the theoretical results by implementing our algorithm. The experiments demonstrate that by choosing an appropriate approximation ratio, the algorithm can efficiently compute a near-optimal solution.
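A minimal sketch of the objective being maximized, assuming job processing times are independent normals: each machine's total load is then itself normal, so the probability that all jobs meet the common due date factors across machines. The brute-force assignment search and the job data are illustrative only, not the paper's FPT-AS.

```python
import math
from itertools import product

jobs = [(3.0, 1.0), (2.0, 0.5), (4.0, 1.5), (1.0, 0.2)]   # (mean, variance) per job
n_machines, due_date = 2, 6.0

def phi(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def completion_probability(assignment):
    # P(every machine's total processing time <= due_date), machines independent.
    prob = 1.0
    for m in range(n_machines):
        mu = sum(jobs[j][0] for j, a in enumerate(assignment) if a == m)
        var = sum(jobs[j][1] for j, a in enumerate(assignment) if a == m)
        if var == 0.0:                 # empty machine: finishes immediately
            prob *= 1.0 if mu <= due_date else 0.0
        else:
            prob *= phi((due_date - mu) / math.sqrt(var))
    return prob

# Enumerate all assignments (feasible only for toy instances).
best = max(product(range(n_machines), repeat=len(jobs)), key=completion_probability)
print("assignment:", best, "probability:", completion_probability(best))
```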

