Search for: All records

Creators/Authors contains: "Eckman, David J"

  1. Lam, H; Azar, E; Batur, D; Gao, S; Xie, W; Hunter, S R; Rossetti, M D (Ed.)
    This paper studies the allocation of simulation effort in a ranking-and-selection (R&S) problem with the goal of selecting a system whose performance is within a given tolerance of the best. We apply large-deviations theory to derive an optimal allocation for maximizing the rate at which the so-called probability of good selection (PGS) asymptotically approaches one, assuming that systems’ output distributions are known. An interesting property of the optimal allocation is that some good systems may receive a sampling ratio of zero. We demonstrate through numerical experiments that this property leads to serious practical consequences, specifically when designing adaptive R&S algorithms. In particular, we observe that the convergence and even consistency of a simple plug-in algorithm designed for the PGS goal can be negatively impacted. We offer empirical evidence of these challenges and a preliminary exploration of a potential correction. 
    Free, publicly-accessible full text available December 15, 2025
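To make the probability of good selection (PGS) criterion concrete, here is a minimal Monte Carlo sketch that estimates PGS for a fixed sampling allocation over hypothetical normal systems. The means, variances, allocation, and tolerance are invented for illustration; the sketch does not implement the paper's large-deviations allocation or its plug-in algorithm.

```python
import numpy as np

# Toy illustration of the probability of good selection (PGS) criterion:
# a selection is "good" if the chosen system's true mean is within a
# tolerance delta of the best. Means, variances, allocation, and delta
# below are made up for illustration.
rng = np.random.default_rng(42)

true_means = np.array([5.0, 4.9, 4.5, 4.0])   # larger is better
true_sds = np.array([1.0, 1.5, 1.0, 2.0])
alloc = np.array([0.4, 0.4, 0.1, 0.1])        # fraction of budget per system
budget = 200                                   # total simulation replications
delta = 0.2                                    # good-selection tolerance

n = np.maximum((alloc * budget).astype(int), 1)
good = true_means >= true_means.max() - delta  # the first two systems are "good"

def one_trial():
    # Sample mean of each system from its allocated replications,
    # then select the system with the largest sample mean.
    sample_means = rng.normal(true_means, true_sds / np.sqrt(n))
    return good[np.argmax(sample_means)]

trials = 10_000
pgs = np.mean([one_trial() for _ in range(trials)])
print(f"Estimated PGS under this allocation: {pgs:.3f}")
```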
  2. Corlu, C G; Hunter, S R; Lam, H; Onggo, B S; Shortle, J; Biller, B (Ed.)
    Screening procedures for ranking and selection have received less attention than selection procedures, yet they serve as a cheap and powerful tool for decision making under uncertainty. Research on screening procedures has been less active in recent years, just as the advent of parallel computing has dramatically reshaped how selection procedures are designed and implemented. As a result, screening procedures used in modern practice continue to largely operate offline on fixed data. In this tutorial, we provide an overview of screening procedures with the goal of clarifying the current state of research and laying out opportunities for future development. We discuss several guarantees delivered by screening procedures and their role in different decision-making settings and investigate their impact on screening power and sampling efficiency in numerical experiments. We also study the implementation of screening procedures in parallel computing environments and how they can be combined with selection procedures. 
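As a rough illustration of the kind of guarantee a screening procedure delivers, below is a minimal sketch of an offline, subset-selection-style screen on fixed data: a system survives unless some rival's sample mean beats it by more than a pairwise allowance. The data, confidence level, and Bonferroni-style allowance are illustrative and are not taken from the tutorial.

```python
import numpy as np
from scipy import stats

# Minimal offline screening sketch (subset-selection style, larger is better):
# system i survives unless another system's sample mean exceeds i's by more
# than a pairwise allowance built from a t quantile. Data and alpha are
# illustrative.
rng = np.random.default_rng(0)
k, n = 5, 30                                   # systems, replications each
data = rng.normal(loc=[0.0, 0.1, 0.5, 1.0, 1.2], scale=1.0, size=(n, k))

alpha = 0.05
means = data.mean(axis=0)
variances = data.var(axis=0, ddof=1)
t = stats.t.ppf(1 - alpha / (k - 1), df=n - 1)  # Bonferroni split over rivals

survivors = [
    i for i in range(k)
    if all(means[i] >= means[j] - t * np.sqrt((variances[i] + variances[j]) / n)
           for j in range(k) if j != i)
]
print("Systems retained after screening:", survivors)
```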
  3. Corlu, C G; Hunter, S R; Lam, H; Onggo, B S; Shortle, J; Biller, B (Ed.)
    Stochastic constraints, which constrain an expectation in the context of simulation optimization, can be hard to conceptualize and harder still to assess. As with a deterministic constraint, a solution is considered either feasible or infeasible with respect to a stochastic constraint. This perspective belies the subjective nature of stochastic constraints, which often arise when attempting to avoid alternative optimization formulations with multiple objectives or an aggregate objective with weights. Moreover, a solution’s feasibility with respect to a stochastic constraint cannot, in general, be ascertained based on only a finite number of simulation replications. We introduce different means of estimating how “close” the expected performance of a given solution is to being feasible with respect to one or more stochastic constraints. We explore how these metrics and their bootstrapped error estimates can be incorporated into plots showing a solver’s progress over time when solving a stochastically constrained problem. 
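One simple way to quantify "closeness" to feasibility, sketched below with invented data, is the estimated slack of the constraint together with a bootstrap percentile interval for it; this is an illustrative metric, not necessarily the one proposed in the paper.

```python
import numpy as np

# Sketch: quantify how "close" a solution is to satisfying a stochastic
# constraint E[G(x)] <= q, using the estimated slack q - Gbar and a bootstrap
# percentile interval for it. The replications and threshold are made up.
rng = np.random.default_rng(1)

g_reps = rng.normal(loc=9.7, scale=2.0, size=50)  # simulated G(x) outputs
q = 10.0                                          # constraint right-hand side

slack_hat = q - g_reps.mean()    # positive -> estimated feasible

boot = np.array([
    q - rng.choice(g_reps, size=g_reps.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Estimated slack: {slack_hat:.2f}, 95% bootstrap interval: ({lo:.2f}, {hi:.2f})")
```

An interval straddling zero signals that, with the replications available, the solution cannot be confidently declared feasible or infeasible, which is exactly the ambiguity the proposed plots are meant to convey.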
  4. Stochastic constraints, which constrain an expectation in the context of simulation optimization, can be hard to conceptualize and harder still to assess. As with a deterministic constraint, a solution is considered either feasible or infeasible with respect to a stochastic constraint. This perspective belies the subjective nature of stochastic constraints, which often arise when attempting to avoid alternative optimization formulations with multiple objectives or an aggregate objective with weights. Moreover, a solution’s feasibility with respect to a stochastic constraint cannot, in general, be ascertained based on only a finite number of simulation replications. We introduce different means of estimating how “close” the expected performance of a given solution is to being feasible with respect to one or more stochastic constraints. We explore how these metrics and their bootstrapped error estimates can be incorporated into plots showing a solver’s progress over time when solving a stochastically constrained problem. 
  5. This paper introduces a major redesign of SimOpt, a testbed of simulation-optimization (SO) problems and solvers. The testbed promotes the empirical evaluation and comparison of solvers and aims to accelerate their development. Relative to previous versions of SimOpt, the redesign ports the code to an object-oriented architecture in Python; uses an implementation of the MRG32k3a random number generator that supports streams, substreams, and subsubstreams; supports the automated use of common random numbers for ease and efficiency; includes a powerful suite of plotting tools for visualizing experiment results; uses bootstrapping to obtain error estimates; accommodates the use of data farming to explore simulation models and optimization solvers as their input parameters vary; and provides a graphical user interface. The SimOpt source code is available on a GitHub repository under a permissive open-source license and as a Python package. History: Accepted by Ted Ralphs, Area Editor for Software Tools. Funding: This work was supported by the National Science Foundation [Grant CMMI-2035086]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information ( https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1273 ) as well as from the IJOC GitHub software repository ( https://github.com/INFORMSJoC/2022.0011 ) at ( http://dx.doi.org/10.5281/zenodo.7468744 ). 
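The abstract mentions automated support for common random numbers (CRN) via dedicated streams of the MRG32k3a generator. Rather than guess at SimOpt's actual interface, the sketch below illustrates the underlying idea with plain NumPy seeding: evaluating two candidate configurations on the same randomness sharpens the estimate of their difference.

```python
import numpy as np

# Sketch of common random numbers (CRN): evaluate two candidate solutions of a
# stochastic simulation on the *same* underlying randomness so their difference
# is estimated with lower variance. NumPy seeding stands in for SimOpt's
# MRG32k3a stream/substream machinery; this is not the SimOpt API.
def simulate(service_rate, rng, n_customers=1000, arrival_rate=0.9):
    # Single-server queue via the Lindley recursion; returns the average wait.
    waits, w = [], 0.0
    for _ in range(n_customers):
        a = rng.exponential(1 / arrival_rate)   # interarrival time
        s = rng.exponential(1 / service_rate)   # service time
        w = max(0.0, w + s - a)
        waits.append(w)
    return np.mean(waits)

seed = 2024
crn_diff = simulate(1.0, np.random.default_rng(seed)) - simulate(1.1, np.random.default_rng(seed))
ind_diff = simulate(1.0, np.random.default_rng(seed)) - simulate(1.1, np.random.default_rng(seed + 1))
print(f"Difference with CRN: {crn_diff:.3f}, with independent streams: {ind_diff:.3f}")
```

Because the noise common to both configurations cancels in the paired difference, CRN typically yields a far less variable comparison, which is why the testbed automates it.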
  6. Simulation optimization involves optimizing some objective function that can only be estimated via stochastic simulation. Many important problems can be profitably viewed within this framework. Whereas many solvers—implementations of simulation-optimization algorithms—exist or are in development, comparisons among solvers are not standardized and are often limited in scope. Such comparisons help advance solver development, clarify the relative performance of solvers, and identify classes of problems that defy efficient solution, among many other uses. We develop performance measures and plots, and estimators thereof, to evaluate and compare solvers and diagnose their strengths and weaknesses on a testbed of simulation-optimization problems. We explain the need for two-level simulation in this context and provide supporting convergence theory. We also describe how to use bootstrapping to obtain error estimates for the estimators. History: Accepted by Bruno Tuffin, area editor for simulation. Funding: This work was supported by the National Science Foundation [Grants CMMI-2035086, CMMI-2206972, and TRIPODS+X DMS-1839346]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplementary Information [ https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.1261 ] or is available from the IJOC GitHub software repository ( https://github.com/INFORMSJoC ) at [ http://dx.doi.org/10.5281/zenodo.7329235 ]. 
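One family of performance measures of the kind developed in the paper is a "solve rate" curve: the fraction of macroreplications in which a solver's optimality gap falls below a tolerance by each budget checkpoint, with bootstrap error bands over macroreplications. The sketch below computes such a curve from synthetic progress data; all names and numbers are illustrative.

```python
import numpy as np

# Sketch of a solver performance measure: the fraction of macroreplications in
# which the optimality gap falls below a tolerance by each budget checkpoint,
# with a bootstrap band over macroreplications. The gap trajectories are synthetic.
rng = np.random.default_rng(7)

budgets = np.arange(0, 1001, 100)                    # budget checkpoints
n_macroreps = 20
rates = rng.uniform(0.002, 0.01, size=n_macroreps)   # random convergence rates
gaps = np.exp(-np.outer(rates, budgets))             # shape (macroreps, checkpoints)

tol = 0.1
solved = gaps <= tol                                  # boolean matrix

solve_rate = solved.mean(axis=0)
boot_rates = np.array([
    solved[rng.integers(0, n_macroreps, n_macroreps)].mean(axis=0)
    for _ in range(1000)
])
lo, hi = np.percentile(boot_rates, [2.5, 97.5], axis=0)
for b, r, l, h in zip(budgets, solve_rate, lo, hi):
    print(f"budget {b:4d}: solve rate {r:.2f}  (95% bootstrap band {l:.2f}-{h:.2f})")
```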
  7. Feng, B.; Pedrielli, G; Peng, Y.; Shashaani, S.; Song, E.; Corlu, C. G.; Lee, L.H.; Chew, E. P.; Roeder, T.; Lendermann, P. (Ed.)
    Automatic differentiation (AD) can provide infinitesimal perturbation analysis (IPA) derivative estimates directly from simulation code. These gradient estimators are simple to obtain analytically, at least in principle, but may be tedious to derive and implement in code. AD software tools aim to ease this workload by requiring little more than writing the simulation code. We review considerations when choosing an AD tool for simulation, demonstrate how to apply some specific AD tools to simulation, and provide insightful experiments highlighting the effects of different choices to be made when applying AD in simulation. 
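As a minimal illustration of obtaining an IPA-style derivative estimate directly from simulation code with an AD tool, the sketch below uses JAX (our choice here, not necessarily one of the tools examined in the paper) to differentiate the simulated mean waiting time of a single-server queue with respect to the mean service time, holding the underlying uniform random numbers fixed.

```python
import jax
import jax.numpy as jnp

# Sketch: IPA-style gradient estimate via automatic differentiation. JAX
# differentiates the simulated mean waiting time of a single-server queue
# (Lindley recursion) with respect to the mean service time, with the
# underlying uniform random numbers held fixed. Illustrative only.
def mean_wait(mean_service, interarrivals, service_uniforms):
    # Inverse-transform the fixed uniforms into service times so that only
    # mean_service varies under differentiation (the usual IPA setup).
    services = -mean_service * jnp.log1p(-service_uniforms)

    def lindley(w, inputs):
        s, a = inputs
        w_next = jnp.maximum(0.0, w + s - a)  # wait of the next customer
        return w_next, w_next

    _, waits = jax.lax.scan(lindley, 0.0, (services, interarrivals))
    return jnp.mean(waits)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
n = 1000
interarrivals = jax.random.exponential(k1, (n,)) / 0.9  # arrival rate 0.9
service_uniforms = jax.random.uniform(k2, (n,))

grad_fn = jax.grad(mean_wait)  # derivative with respect to mean_service
print("d(mean wait)/d(mean service) ~", grad_fn(0.8, interarrivals, service_uniforms))
```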
  8. Sequential ranking-and-selection procedures deliver Bayesian guarantees by repeatedly computing a posterior quantity of interest—for example, the posterior probability of good selection or the posterior expected opportunity cost—and terminating when this quantity crosses some threshold. Computing these posterior quantities entails nontrivial numerical computation. Thus, rather than exactly check such posterior-based stopping rules, it is common practice to use cheaply computable bounds on the posterior quantity of interest, for example, those based on Bonferroni’s or Slepian’s inequalities. The result is a conservative procedure that samples more simulation replications than are necessary. We explore how the time spent simulating these additional replications might be better spent computing the posterior quantity of interest via numerical integration, with the potential for terminating the procedure sooner. To this end, we develop several methods for improving the computational efficiency of exactly checking the stopping rules. Simulation experiments demonstrate that the proposed methods can, in some instances, significantly reduce a procedure’s total sample size. We further show these savings can be attained with little added computational effort by making effective use of a Monte Carlo estimate of the posterior quantity of interest. Summary of Contribution: The widespread use of commercial simulation software in industry has made ranking-and-selection (R&S) algorithms an accessible simulation-optimization tool for operations research practitioners. This paper addresses computational aspects of R&S procedures delivering finite-time Bayesian statistical guarantees, primarily the decision of when to terminate sampling. Checking stopping rules entails computing or approximating posterior quantities of interest perceived as being computationally intensive to evaluate. The main results of this paper show that these quantities can be efficiently computed via numerical integration and can yield substantial savings in sampling relative to the prevailing approach of using conservative bounds. In addition to enhancing the performance of Bayesian R&S procedures, the results have the potential to advance other research in this space, including the development of more efficient allocation rules. 
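To illustrate exactly evaluating a posterior-based stopping quantity rather than bounding it, the sketch below computes the posterior probability of good selection for independent normal posteriors by one-dimensional numerical integration and compares it with the Bonferroni lower bound; the posterior parameters and tolerance are invented, and this is only one standard way of carrying out the computation described in the abstract.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Sketch: posterior probability of good selection (PGS) for independent normal
# posteriors, computed exactly by 1-D numerical integration and compared with
# the cheaper Bonferroni lower bound. Posterior means, standard deviations,
# and the tolerance delta are illustrative.
post_means = np.array([1.00, 0.96, 0.95, 0.60])
post_sds = np.array([0.05, 0.05, 0.05, 0.08])
delta = 0.02
b = int(np.argmax(post_means))               # current sample-best system
others = [j for j in range(len(post_means)) if j != b]

def integrand(x):
    # Density of theta_b at x times P(theta_j <= x + delta) for every rival j.
    val = stats.norm.pdf(x, post_means[b], post_sds[b])
    for j in others:
        val *= stats.norm.cdf(x + delta, post_means[j], post_sds[j])
    return val

pgs_exact, _ = quad(integrand, -np.inf, np.inf)

bonf_bound = 1.0 - sum(
    stats.norm.cdf(-delta, post_means[b] - post_means[j],
                   np.sqrt(post_sds[b] ** 2 + post_sds[j] ** 2))
    for j in others
)
print(f"Posterior PGS (numerical integration): {pgs_exact:.4f}")
print(f"Bonferroni lower bound:                {bonf_bound:.4f}")
```

A stopping rule that checks the exact quantity can terminate as soon as it crosses the threshold, whereas one that checks only the bound keeps sampling until the bound catches up.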
  9. When working with models that allow for many candidate solutions, simulation practitioners can benefit from screening out unacceptable solutions in a statistically controlled way. However, for large solution spaces, estimating the performance of all solutions through simulation can prove impractical. We propose a statistical framework for screening solutions even when only a relatively small subset of them is simulated. Our framework derives its superiority over exhaustive screening approaches by leveraging available properties of the function that describes the performance of solutions. The framework is designed to work with a wide variety of available functional information and provides guarantees on both the confidence and consistency of the resulting screening inference. We provide explicit formulations for the properties of convexity and Lipschitz continuity and show through numerical examples that our procedures can efficiently screen out many unacceptable solutions. 
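A minimal sketch of the Lipschitz-continuity case: from a handful of simulated solutions, an assumed Lipschitz constant yields an upper bound on every unsimulated solution's performance, and any solution whose bound falls below the acceptability threshold is screened out. The constant, threshold, and data are invented, estimation error in the simulated means is ignored for brevity, and this is not the paper's exact procedure or guarantee.

```python
import numpy as np

# Sketch: screening unsimulated solutions using Lipschitz continuity of the
# performance function (larger is better). If every simulated point's estimate,
# inflated by L times the distance, still falls below the acceptability
# threshold, the unsimulated solution cannot be acceptable and is screened out.
rng = np.random.default_rng(3)

solutions = np.linspace(0.0, 10.0, 201)               # full 1-D solution space
simulated = solutions[::20]                            # only every 20th is simulated
true_perf = -(solutions - 6.0) ** 2 / 5.0 + 5.0        # hypothetical performance
estimates = true_perf[::20] + rng.normal(0, 0.05, simulated.size)

L = 2.4          # assumed Lipschitz constant of the performance function
threshold = 4.0  # solutions with performance below this are unacceptable

# Upper bound on each solution's performance implied by the simulated points.
dists = np.abs(solutions[:, None] - simulated[None, :])
upper_bounds = np.min(estimates[None, :] + L * dists, axis=1)

screened_out = upper_bounds < threshold
print(f"Screened out {screened_out.sum()} of {solutions.size} solutions "
      f"after simulating only {simulated.size}.")
```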