Title: A Computational Comparison of Simulation Optimization Methods using Single Observations within a Shrinking Ball on Noisy Black-Box Functions with Mixed Integer and Continuous Domains
We focus on simulation optimization algorithms designed to accommodate noisy black-box functions on mixed integer/continuous domains. Several approaches are used to account for noise, including aggregating multiple function replications at each sample point and a newer method of aggregating single replications within a “shrinking ball.” We examine a range of algorithms, including simulated annealing, interacting particle algorithms, the covariance matrix adaptation evolution strategy (CMA-ES), and particle swarm optimization, to compare their effectiveness in generating optimal solutions using averaged function replications versus a shrinking-ball approximation. Six test functions are examined in 10 and 20 dimensions, with integer restrictions enforced on 0%, 50%, and 100% of the dimensions, and with noise ranging from 10% to 20% of the function output. The study shows that the shrinking-ball approach typically enhances solver performance for the tested optimization methods.
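As a rough illustration of the two noise-handling strategies compared here, the following is a minimal Python sketch; the toy objective, noise level, and shrinking schedule are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def noisy_objective(x, rng):
        # Toy noisy black box: sphere function plus additive noise at
        # roughly 10% of the function output (illustrative).
        f = float(np.sum(np.asarray(x) ** 2))
        return f + rng.normal(scale=0.1 * max(abs(f), 1.0))

    def replication_estimate(x, n_reps, rng):
        # Classic approach: average n_reps independent replications at x.
        return np.mean([noisy_objective(x, rng) for _ in range(n_reps)])

    def shrinking_ball_estimate(x, history, radius):
        # Shrinking-ball approach: average the single replications already
        # observed at points within `radius` of x.
        pts = np.array([p for p, _ in history])
        vals = np.array([v for _, v in history])
        near = np.linalg.norm(pts - np.asarray(x), axis=1) <= radius
        return vals[near].mean() if near.any() else float("nan")

    rng = np.random.default_rng(0)
    history = []                              # (point, single replication) pairs
    for k in range(1, 201):
        x = rng.uniform(-2.0, 2.0, size=2)
        history.append((x, noisy_objective(x, rng)))
    radius = 2.0 * len(history) ** (-0.25)    # one simple shrinking schedule
    print(shrinking_ball_estimate(np.zeros(2), history, radius))
    print(replication_estimate(np.zeros(2), n_reps=20, rng=rng))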
Award ID(s):
1632793
PAR ID:
10039379
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Proceedings of the 2017 Winter Simulation Conference
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In celestial conformal field theory, gluons are represented by primary fields with dimensions ∆ = 1 + iλ, λ ∈ ℝ, and spin J = ±1, in the adjoint representation of the gauge group. All two- and three-point correlation functions of these fields are zero as a consequence of four-dimensional kinematic constraints. Four-point correlation functions contain delta-function singularities enforcing planarity of four-particle scattering events. We relax these constraints by taking a shadow transform of one field and perform a conformal block decomposition of the corresponding correlators. We compute the conformal block coefficients. When decomposed in channels that are “compatible” in two and four dimensions, such four-point correlators contain conformal blocks of primary fields with dimensions ∆ = 2 + M + iλ, where M ≥ 0 is an integer, with integer spin J = −M, −M + 2, …, M − 2, M. They appear in all gauge group representations obtained from a tensor product of two adjoint representations. When decomposed in incompatible channels, they also contain primary fields with continuous complex spin, but with positive integer dimensions.
  2. Optimizing the performance of complex systems modeled by stochastic computer simulations is a challenging task, partly because of the lack of structural properties (e.g., convexity). This challenge is magnified by the presence of random error whereby an adaptive algorithm searching for better designs can at times mistakenly accept an inferior design. In contrast to performing multiple simulations at a design point to estimate the performance of the design, we propose a framework for adaptive search algorithms that executes a single simulation for each design point encountered. Here the estimation errors are reduced by averaging the performances from previously evaluated designs drawn from a shrinking ball around the current design point. We show under mild regularity conditions for continuous design spaces that the accumulated errors, although dependent, form a martingale process, and hence, by the strong law of large numbers for martingales, the average errors converge to zero as the algorithm proceeds. This class of algorithms is shown to converge to a global optimum with probability one. By employing a shrinking ball approach with single observations, an adaptive search algorithm can simultaneously improve the estimates of performance while exploring new and potentially better design points. Numerical experiments offer empirical support for this paradigm of single observation simulation optimization. 
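    As a rough sketch of this single-observation idea (the perturbation scheme and radius schedule below are illustrative assumptions, not the authors' algorithm), an adaptive search can run one simulation per candidate design and compare shrinking-ball averages of past observations:

        import numpy as np

        def simulate(x, rng):
            # One noisy simulation of design x (toy stand-in for the model).
            return float(np.sum(x ** 2)) + rng.normal(scale=0.5)

        def ball_average(x, pts, vals, radius):
            # Average of all past single observations within `radius` of x.
            near = np.linalg.norm(pts - x, axis=1) <= radius
            return vals[near].mean()

        rng = np.random.default_rng(1)
        x = rng.uniform(-3.0, 3.0, size=4)            # current design
        pts, vals = [x.copy()], [simulate(x, rng)]    # one observation per design
        for k in range(1, 1001):
            cand = x + rng.normal(scale=0.3, size=4)  # local random perturbation
            pts.append(cand.copy())
            vals.append(simulate(cand, rng))
            P, V = np.array(pts), np.array(vals)
            r = 2.0 * k ** (-0.25)                    # shrinking-ball radius
            if ball_average(cand, P, V, r) < ball_average(x, P, V, r):
                x = cand                              # accept the improving design
        print(x)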
  3. Bayesian Networks (BNs) represent conditional probability relations among a set of random variables (nodes) in the form of a directed acyclic graph (DAG), and have found diverse applications in knowledge discovery. We study the problem of learning the sparse DAG structure of a BN from continuous observational data. The central problem can be modeled as a mixed-integer program with an objective function composed of a convex quadratic loss function and a regularization penalty subject to linear constraints. The optimal solution to this mathematical program is known to have desirable statistical properties under certain conditions. However, the state-of-the-art optimization solvers are not able to obtain provably optimal solutions to the existing mathematical formulations for medium-size problems within reasonable computational times. To address this difficulty, we tackle the problem from both computational and statistical perspectives. On the one hand, we propose a concrete early stopping criterion to terminate the branch-and-bound process in order to obtain a near-optimal solution to the mixed-integer program, and establish the consistency of this approximate solution. On the other hand, we improve the existing formulations by replacing the linear “big-M” constraints that represent the relationship between the continuous and binary indicator variables with second-order conic constraints. Our numerical results demonstrate the effectiveness of the proposed approaches.
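    To illustrate the kind of substitution described (using assumed notation rather than the paper's: continuous edge weight $$\beta_{jk}$$, binary edge indicator $$z_{jk}$$, and bound $$M$$), a big-M linking constraint of the form $$-Mz_{jk} \le \beta_{jk} \le Mz_{jk}$$ can be replaced by the second-order conic constraint $$\beta_{jk}^2 \le M^2 z_{jk}$$; for binary $$z_{jk}$$ both versions force the edge weight to zero when the edge is absent, but the conic form changes the continuous relaxation that the branch-and-bound solver works with.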
  4. SGD with Momentum (SGDM) is a widely used family of algorithms for large-scale optimization of machine learning problems. Yet, when optimizing generic convex functions, no advantage is known for any SGDM algorithm over plain SGD. Moreover, even the most recent results require changes to the SGDM algorithms, like averaging of the iterates and a projection onto a bounded domain, which are rarely used in practice. In this paper, we focus on the convergence rate of the last iterate of SGDM. For the first time, we prove that for any constant momentum factor, there exists a Lipschitz and convex function for which the last iterate of SGDM suffers from a suboptimal convergence rate of $$\Omega(\frac{\ln T}{\sqrt{T}})$$ after $$T$$ iterations. Based on this fact, we study a class of (both adaptive and non-adaptive) Follow-The-Regularized-Leader-based SGDM algorithms with increasing momentum and shrinking updates. For these algorithms, we show that the last iterate has optimal convergence $$O(\frac{1}{\sqrt{T}})$$ for unconstrained convex stochastic optimization problems without projections onto bounded domains or knowledge of $$T$$. Further, we show a variety of results for FTRL-based SGDM when used with adaptive stepsizes. Empirical results are shown as well.
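    For reference, a minimal Python sketch of the plain SGDM (heavy-ball) update whose last iterate is analyzed above; the toy objective, step size, and momentum factor are illustrative, and the FTRL-based variants with increasing momentum and shrinking updates are not reproduced here:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.array([5.0, -3.0])        # iterate
        m = np.zeros_like(x)             # momentum buffer
        beta, eta = 0.9, 0.05            # constant momentum factor, step size
        for t in range(500):
            grad = 2.0 * x + rng.normal(scale=0.1, size=2)  # noisy gradient of ||x||^2
            m = beta * m + grad          # accumulate momentum
            x = x - eta * m              # SGDM step
        print(x)                         # last iterate, the quantity studied above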