Title: Managing Computationally Expensive Blackbox Multiobjective Optimization Problems with libEnsemble
Multiobjective optimization problems (MOPs) are common across many science and engineering fields. A multiobjective optimization algorithm (MOA) seeks to provide an approximation to the tradeoff surface between multiple, possibly conflicting, objectives. Many MOPs involve objective functions whose evaluation requires running a computationally expensive numerical simulation. Solving these large and complex problems requires efficient coordination between the MOA and the computationally expensive cost functions. In this work, a recently proposed MOA is integrated into the libEnsemble software library, which coordinates extreme-scale resources for large ensemble computations. Efficient integration requires fundamental changes to the underlying MOA. Convergence and performance results for the integrated and original MOA are compared on a set of benchmark problems.
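To make the coordination concrete, here is a minimal sketch of wiring a cheap stand-in for an expensive bi-objective simulation into libEnsemble with local multiprocessing workers. It assumes the classic libE() calling convention from the libEnsemble tutorials; the random-sampling generator, field names, and batch sizes are illustrative placeholders for the MOA described above, not the paper's actual setup.

import numpy as np
from libensemble.libE import libE


def sim_f(H, persis_info, sim_specs, libE_info):
    """Evaluate two toy objectives at each requested point 'x' (the expensive simulation would go here)."""
    x = H["x"]
    out = np.zeros(len(x), dtype=sim_specs["out"])
    out["f"][:, 0] = np.sum(x**2, axis=1)            # objective 1
    out["f"][:, 1] = np.sum((x - 1.0) ** 2, axis=1)  # objective 2
    return out, persis_info


def gen_f(H, persis_info, gen_specs, libE_info):
    """Propose a batch of random points; the MOA's generator would go here."""
    dim, batch = gen_specs["user"]["dim"], gen_specs["user"]["batch"]
    out = np.zeros(batch, dtype=gen_specs["out"])
    out["x"] = persis_info["rand_stream"].uniform(0.0, 1.0, (batch, dim))
    return out, persis_info


if __name__ == "__main__":
    nworkers, dim = 4, 2
    sim_specs = {"sim_f": sim_f, "in": ["x"], "out": [("f", float, 2)]}
    gen_specs = {"gen_f": gen_f, "out": [("x", float, dim)],
                 "user": {"dim": dim, "batch": 8}}
    persis_info = {i: {"rand_stream": np.random.default_rng(i)}
                   for i in range(nworkers + 1)}
    libE_specs = {"comms": "local", "nworkers": nworkers}

    # The manager coordinates the ensemble; H is the history of all evaluations.
    H, persis_info, flag = libE(sim_specs, gen_specs, {"sim_max": 64},
                                persis_info, libE_specs=libE_specs)
    print(H[["x", "f"]][:5])  # first few proposed points and their objective values

A real MOA generator would read the accumulated history to decide where to sample next; the point of the library is that those decisions and the expensive evaluations can proceed concurrently across many workers.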
Award ID(s):
1838271
PAR ID:
10294427
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 Spring Simulation Conference (SpringSim '20)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Bayesian optimization is a sequential decision-making framework for optimizing expensive-to-evaluate black-box functions. Computing a full lookahead policy amounts to solving a highly intractable stochastic dynamic program. Myopic approaches, such as expected improvement, are often adopted in practice, but they ignore the long-term impact of the immediate decision. Existing nonmyopic approaches are mostly heuristic and/or computationally expensive. In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree. Instead of solving these problems in a nested way, we equivalently optimize all decision variables in the full tree jointly, in a "one-shot" fashion. Combining this with an efficient method for implementing multi-step Gaussian process "fantasization," we demonstrate that multi-step expected improvement is computationally tractable and exhibits performance superior to existing methods on a wide range of benchmarks. (A toy sketch of this one-shot formulation appears after this list.)
  2.
    Design optimization of metamaterials and other complex systems often relies on the use of computationally expensive models. This makes it challenging to use global multi-objective optimization approaches that require many function evaluations. Engineers often have heuristics or rules of thumb with the potential to drastically reduce the number of function evaluations needed to achieve good convergence. Recent research has demonstrated that these design heuristics can be used explicitly in design optimization, indeed leading to accelerated convergence. However, these approaches have only been demonstrated on specific problems, the performance of different methods was diverse, and despite all heuristics being "correct," some heuristics were found to perform much better than others for various problems. In this paper, we describe a case study in design heuristics for a simple class of 2D constrained multiobjective optimization problems involving lattice-based metamaterial design. Design heuristics are strategically incorporated into the design search, and the heuristics-enabled optimization framework is compared with the standard optimization framework that does not use the heuristics. Results indicate that leveraging design heuristics for design optimization can help reach optimal designs faster. We also identify some guidelines to help designers choose design heuristics and methods to incorporate them for the problem at hand.
  3. Bilevel optimization (BO) is useful for solving a variety of important machine learning problems including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning. Conventional BO methods need to differentiate through the low-level optimization process with implicit differentiation, which requires expensive calculations related to the Hessian matrix. There has been a recent quest for first-order methods for BO, but the methods proposed to date tend to be complicated and impractical for large-scale deep learning applications. In this work, we propose a simple first-order BO algorithm that depends only on first-order gradient information, requires no implicit differentiation, and is practical and efficient for large-scale non-convex functions in deep learning. We provide a non-asymptotic convergence analysis of the proposed method to stationary points for non-convex objectives and present empirical results that show its superior practical performance.
  4.
    This work addresses a Multi-Objective Shortest Path Problem (MO-SPP) on a graph, where the goal is to find a set of Pareto-optimal solutions from a start node to a destination in the graph. A family of approaches based on MOA* have been developed to solve MO-SPP in the literature. Typically, these approaches maintain a “frontier” set at each node during the search process to keep track of the non-dominated partial paths that reach that node. This search process becomes computationally expensive when the number of objectives increases, as the number of Pareto-optimal solutions becomes large. In this work, we introduce a new method to efficiently maintain these frontiers for multiple objectives by incrementally constructing balanced binary search trees within the MOA* search framework. We first show that our approach correctly finds the Pareto-optimal front, and then provide extensive simulation results for problems with three, four, and five objectives to show that our method runs faster than existing techniques by up to an order of magnitude. (A toy sketch of this frontier bookkeeping also appears after this list.)
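The "one-shot" idea in item 1 above can be illustrated with a toy construction: rather than solving a nested acquisition problem for the second step, the first-step point and one second-step point per Gaussian-process fantasy are scored and searched jointly. The sketch below uses numpy, scipy, and scikit-learn on a 1-D minimization problem; the coarse random search, the number of fantasies, and every name in it are assumptions for illustration, not the paper's implementation.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)


def f(x):
    return np.sin(3.0 * x) + 0.1 * x**2  # toy "expensive" objective


X = rng.uniform(-2.0, 2.0, (6, 1))       # initial design
y = f(X).ravel()
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)


def ei(mu, sigma, best):
    """Expected improvement of a Gaussian N(mu, sigma^2) over the incumbent 'best' (minimization)."""
    sigma = max(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def two_step_value(x1, x2_fantasies):
    """Joint value of a first-step point x1 and one second-step point per fantasy of y(x1)."""
    best = y.min()
    mu1, s1 = gp.predict([[x1]], return_std=True)
    value = ei(mu1[0], s1[0], best)                      # immediate improvement at x1
    y1 = rng.normal(mu1[0], s1[0], len(x2_fantasies))    # fantasized outcomes at x1
    for y1_j, x2_j in zip(y1, x2_fantasies):
        gp_j = GaussianProcessRegressor(normalize_y=True).fit(
            np.vstack([X, [[x1]]]), np.append(y, y1_j))  # condition on the fantasy
        mu2, s2 = gp_j.predict([[x2_j]], return_std=True)
        value += ei(mu2[0], s2[0], min(best, y1_j)) / len(x2_fantasies)
    return value


# "One-shot": search over (x1, x2^1, ..., x2^J) jointly instead of nesting an inner
# optimization; a coarse random search stands in for the gradient-based joint
# optimization described in the paper.
candidates = rng.uniform(-2.0, 2.0, (30, 1 + 4))  # 30 candidates, 4 fantasies each
best_cand = max(candidates, key=lambda c: two_step_value(c[0], c[1:]))
print("next evaluation point x1 =", best_cand[0])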
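The frontier bookkeeping in the last item can be illustrated compactly for the bi-objective case: keeping each node's nondominated cost vectors sorted by the first objective makes dominance checks logarithmic, because the only stored vector that can dominate a new one is the rightmost vector whose first cost does not exceed it. In the toy sketch below, Python's bisect over a plain sorted list stands in for the paper's balanced binary search trees (list insertion is linear, so only the lookup side of the idea is captured), and the class and method names are illustrative.

import bisect


class Frontier:
    """Nondominated (cost1, cost2) pairs kept sorted by cost1 ascending.
    Because only nondominated pairs are stored, cost2 is strictly decreasing."""

    def __init__(self):
        self.points = []  # sorted list of (cost1, cost2) tuples

    def is_dominated(self, p):
        # Only vectors with cost1 <= p[0] can dominate p; among those, the one
        # with the smallest cost2 is the rightmost, found by binary search.
        i = bisect.bisect_right(self.points, (p[0], float("inf")))
        return i > 0 and self.points[i - 1][1] <= p[1]

    def insert(self, p):
        """Keep p if it is nondominated, discarding anything it dominates; return True if kept."""
        if self.is_dominated(p):
            return False
        i = bisect.bisect_left(self.points, p)
        # Vectors from index i onward have cost1 >= p[0]; those whose cost2 is
        # also >= p[1] are dominated by p and form a contiguous run to remove.
        j = i
        while j < len(self.points) and self.points[j][1] >= p[1]:
            j += 1
        self.points[i:j] = [p]
        return True


front = Frontier()
for costs in [(3, 5), (4, 4), (2, 7), (3, 6), (5, 1)]:
    print(costs, "kept" if front.insert(costs) else "dominated")
print(front.points)  # surviving mutually nondominated cost vectors

With three or more objectives the frontier is no longer totally ordered this way, which is where the incremental tree construction studied in the paper does the real work; the sketch only conveys the flavor of replacing linear dominance scans with ordered lookups.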