

Search results: all records where Editors contains "Feng, B"


  1. Feng, B. ; Pedrielli, G. ; Peng, Y. ; Shashaani, S. ; Song, E. ; Corlu, C. ; Lee, L. ; Chew, E. ; Roeder, T. ; Lendermann, P. (Ed.)
    Ranking & selection (R&S) procedures are simulation-optimization algorithms for making one-time decisions among a finite set of alternative system designs or feasible solutions with a statistical assurance of a good selection. R&S with covariates (R&S+C) extends the paradigm to allow the optimal selection to depend on contextual information that is obtained just prior to the need for a decision. The dominant approach for solving such problems is to employ offline simulation to create metamodels that predict the performance of each system or feasible solution as a function of the covariate. This paper introduces a fundamentally different approach that solves individual R&S problems offline for various values of the covariate, and then treats the real-time decision as a classification problem: given the covariate information, which system is a good solution? Our approach exploits the availability of efficient R&S procedures, requires milder assumptions than the metamodeling paradigm to provide strong guarantees, and can be more efficient.
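    A rough Python sketch of the offline-R&S-then-classify idea summarized above is given below. The toy simulator, the covariate grid, and the use of a k-nearest-neighbors classifier are illustrative assumptions for this note, not the paper's procedure, and the simple sample-mean comparison stands in for a proper R&S procedure with statistical guarantees.

```python
# Minimal sketch: solve a (crude) R&S problem offline at several covariate
# values, then treat the online decision as classification. All names and
# numbers here are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def simulate(system, x, n=200):
    """Hypothetical simulator: noisy performance of `system` at covariate x."""
    true_mean = -(system - 2.0 * x) ** 2   # made-up response surface
    return true_mean + rng.normal(scale=1.0, size=n)

systems = [0, 1, 2, 3]
covariate_grid = np.linspace(0.0, 2.0, 25)

# Offline stage: at each covariate value, pick the system with the best
# sample mean (a stand-in for an efficient R&S procedure).
labels = []
for x in covariate_grid:
    means = [simulate(k, x).mean() for k in systems]
    labels.append(int(np.argmax(means)))

# Train a classifier mapping covariate -> selected system.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(covariate_grid.reshape(-1, 1), labels)

# Online stage: once the covariate is observed, the decision is immediate.
x_new = 1.3
print("recommended system:", clf.predict([[x_new]])[0])
```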
  2. Feng, B. ; Pedrielli, G. ; Peng, Y. ; Shashaani, S. ; Song, E. ; Corlu, C. ; Lee, L. ; Chew, E. ; Roeder, T. ; Lendermann, P. (Ed.)
    Many tutorials and survey papers have been written on ranking & selection (R&S) because it is such a useful tool for simulation optimization when the number of feasible solutions or “systems” is small enough that all of them can be simulated. Cheap, ubiquitous, parallel computing has greatly increased the “all of them can be simulated” limit. Naturally these tutorials and surveys have focused on the underlying theory of R&S and have provided pseudocode procedures. This tutorial, by contrast, emphasizes applications, programming, and interpretation of R&S, using the R programming language for illustration. Readers (and the audience) can download the code and follow along with the examples, but no experience with R is needed.
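    As a minimal illustration of the "simulate all of them and compare" workflow this tutorial teaches, here is a short sketch in Python (the tutorial itself uses R). The toy simulator, budget, and naive largest-sample-mean selection are assumptions for this note and do not reproduce the tutorial's code or its statistical guarantees.

```python
# Simulate every system, summarize each with a mean and standard error,
# and (naively) select the largest sample mean. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
true_means = [1.0, 1.2, 0.8, 1.5]          # unknown in practice
n = 500                                     # replications per system

summaries = []
for k, mu in enumerate(true_means):
    y = rng.normal(loc=mu, scale=2.0, size=n)   # toy simulation output
    summaries.append((k, y.mean(), y.std(ddof=1) / np.sqrt(n)))

# Report each system's estimate; a real R&S procedure would also control
# the probability of correct selection rather than just compare means.
for k, mean, se in summaries:
    print(f"system {k}: {mean:.3f} +/- {1.96 * se:.3f}")
best = max(summaries, key=lambda s: s[1])[0]
print("selected system:", best)
```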
  3. Feng, B. ; Pedrielli, G. ; Peng, Y. ; Shashaani, S. ; Song, E. ; Corlu, C. ; Lee, L. ; Chew, E. ; Roeder, T. ; Lendermann, P. (Ed.)
    Ranking & selection (R&S) procedures are simulation-optimization algorithms for making one-time decisions among a finite set of alternative system designs or feasible solutions with a statistical assurance of a good selection. R&S with covariates (R&S+C) extends the paradigm to allow the optimal selection to depend on contextual information that is obtained just prior to the need for a decision. The dominant approach for solving such problems is to employ offline simulation to create metamodels that predict the performance of each system or feasible solution as a function of the covariate. This paper introduces a fundamentally different approach that solves individual R&S problems offline for various values of the covariate, and then treats the real-time decision as a classification problem: given the covariate information, which system is a good solution? Our approach exploits the availability of efficient R&S procedures, requires milder assumptions than the metamodeling paradigm to provide strong guarantees, and can be more efficient. 
  4. Feng, B. ; Pedrielli, G. ; Peng, Y. ; Shashaani, S. ; Song, E. ; Corlu, C. ; Lee, L. ; Chew, E. ; Roeder, T. ; Lendermann, P. (Ed.)
    The Rapid Gaussian Markov Improvement Algorithm (rGMIA) solves discrete optimization via simulation problems by using a Gaussian Markov random field and complete expected improvement as the sampling and stopping criterion. rGMIA was created as a sequential sampling procedure that runs on a single processor. In this paper, we extend rGMIA to a parallel computing environment where q+1 solutions can be simulated in parallel. To this end, we introduce the q-point complete expected improvement criterion to determine a batch of q+1 solutions to simulate. This new criterion is implemented in a new object-oriented rGMIA package.
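    The sketch below illustrates batch selection with an improvement-type criterion in a heavily simplified setting: each feasible solution is given an independent normal posterior and the top q+1 solutions by ordinary expected improvement are chosen greedily. This is a stand-in for intuition only; the paper's q-point complete expected improvement is defined on a Gaussian Markov random field and is not reproduced here. All numbers are synthetic assumptions.

```python
# Pick a batch of q+1 solutions to simulate in parallel using ordinary
# expected improvement over independent normal posteriors (simplified).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_solutions, q = 50, 3                      # batch size is q + 1

# Posterior means and standard deviations for each solution (synthetic).
mu = rng.normal(size=n_solutions)
sigma = rng.uniform(0.2, 1.0, size=n_solutions)
current_best = mu.min()                     # minimization

def expected_improvement(m, s, best):
    z = (best - m) / s
    return (best - m) * norm.cdf(z) + s * norm.pdf(z)

ei = expected_improvement(mu, sigma, current_best)
batch = np.argsort(ei)[::-1][: q + 1]       # top q+1 solutions by EI
print("next batch of solutions to simulate:", batch)
```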
  5. Feng, B. ; Pedrielli, G. ; Peng, Y. ; Shashaani, S. ; Song, E. ; Corlu, C. ; Lee, L. ; Chew, E. ; Roeder, T. ; Lendermann, P. (Ed.)
    The Rapid Gaussian Markov Improvement Algorithm (rGMIA) solves discrete optimization via simulation problems by using a Gaussian Markov random field and complete expected improvement as the sampling and stopping criterion. rGMIA was created as a sequential sampling procedure that runs on a single processor. In this paper, we extend rGMIA to a parallel computing environment where q+1 solutions can be simulated in parallel. To this end, we introduce the q-point complete expected improvement criterion to determine a batch of q+1 solutions to simulate. This new criterion is implemented in a new object-oriented rGMIA package.
  6. Feng, B. ; Pedrielli, G. ; Peng, Y. ; Shashaani, S. ; Song, E. ; Corlu, C. ; Lee, L. ; Chew, E. ; Roeder, T. ; Lendermann, P. (Ed.)
    Many tutorials and survey papers have been written on ranking & selection (R&S) because it is such a useful tool for simulation optimization when the number of feasible solutions or “systems” is small enough that all of them can be simulated. Cheap, ubiquitous, parallel computing has greatly increased the “all of them can be simulated” limit. Naturally these tutorials and surveys have focused on the underlying theory of R&S and have provided pseudocode procedures. This tutorial, by contrast, emphasizes applications, programming, and interpretation of R&S, using the R programming language for illustration. Readers (and the audience) can download the code and follow along with the examples, but no experience with R is needed.
  7. Kim, S. ; Feng, B. ; Smith, K. ; Masoud, S. ; Zheng, Z. ; Szabo, C. ; Loper, M. (Ed.)
  8. Feng, B. ; Pedrielli, G. ; Peng, Y. ; Shashaani, S. ; Song, E. ; Corlu, C. G. ; Lee, L. H. ; Chew, E. P. ; Roeder, T. ; Lendermann, P. (Ed.)
    Automatic differentiation (AD) can provide infinitesimal perturbation analysis (IPA) derivative estimates directly from simulation code. These gradient estimators are simple to obtain analytically, at least in principle, but may be tedious to derive and implement in code. AD software tools aim to ease this workload by requiring little more than writing the simulation code. We review considerations when choosing an AD tool for simulation, demonstrate how to apply some specific AD tools to simulation, and provide insightful experiments highlighting the effects of different choices to be made when applying AD in simulation. 
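    To make the AD-for-IPA idea concrete, the small example below uses JAX (one of many possible AD tools, chosen here only for illustration) to differentiate the average waiting time of a single-server queue, computed via the Lindley recursion, with respect to the mean service time while holding the driving uniform random numbers fixed. The queue model and parameter values are assumptions for this sketch, not the paper's experiments or code.

```python
# Differentiate a simulation output with respect to a model parameter via AD,
# which reproduces the IPA estimator for this differentiable-a.e. recursion.
import jax
import jax.numpy as jnp

def avg_wait(mean_service, arrival_gaps, service_uniforms):
    """Average waiting time of a single-server queue driven by fixed uniforms."""
    # Exponential service times by inversion; 1 - u keeps the log argument in (0, 1].
    services = -mean_service * jnp.log(1.0 - service_uniforms)
    w = 0.0
    total = 0.0
    for i in range(arrival_gaps.shape[0]):
        w = jnp.maximum(0.0, w + services[i] - arrival_gaps[i])  # Lindley recursion
        total = total + w
    return total / arrival_gaps.shape[0]

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
n = 200
arrival_gaps = jax.random.exponential(k1, (n,))       # mean interarrival time 1
service_uniforms = jax.random.uniform(k2, (n,))

grad_fn = jax.grad(avg_wait)                           # derivative w.r.t. mean_service
print("d(avg wait)/d(mean service) approx:", grad_fn(0.8, arrival_gaps, service_uniforms))
```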
  9. Bae, K.-H. ; Feng, B. ; Kim, S. ; Lazarova-Molnar, S. ; Zheng, Z. ; Roeder, T. ; Thiesing, R. (Ed.)
    In the subset-selection approach to ranking and selection, a decision-maker seeks a subset of simulated systems that contains the best with high probability. We present a new, generalized framework for constructing these subsets and demonstrate that some existing subset-selection procedures are situated within this framework. The subsets are built by calculating, for each system, a minimum standardized discrepancy between the observed performances and the space of problem instances for which that system is the best. A system’s minimum standardized discrepancy is then compared to a cutoff to determine whether the system is included in the subset. We examine the problem of finding the tightest statistically valid cutoff for each system and draw connections between our approach and other subset-selection methodologies. Simulation experiments demonstrate how the screening power and subset size are affected by the choice of standardized discrepancy. 
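    For intuition, the sketch below implements one familiar member of the subset-selection family: screening each system against every other using a t-based margin with a Bonferroni-style cutoff. It is not the paper's generalized standardized-discrepancy framework; the data, cutoff, and sample sizes are illustrative assumptions.

```python
# Keep system i in the subset unless some other system beats its sample mean
# by more than a margin built from the t-quantile and pooled standard errors.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(3)
n = 50                                              # replications per system
data = rng.normal(loc=[1.0, 1.3, 0.7, 1.25], scale=1.0, size=(n, 4))

means = data.mean(axis=0)
vars_ = data.var(axis=0, ddof=1)
alpha = 0.05
k = data.shape[1]
quant = t.ppf(1 - alpha / (k - 1), df=n - 1)        # Bonferroni-style cutoff

subset = []
for i in range(k):
    keep = True
    for j in range(k):
        if j == i:
            continue
        margin = quant * np.sqrt((vars_[i] + vars_[j]) / n)
        if means[j] - means[i] > margin:
            keep = False
    if keep:
        subset.append(i)
print("systems retained in the subset:", subset)
```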
  10. Kim, S. ; Feng, B. ; Smith, K. ; Masoud, S. ; Zheng, Z. ; Szabo, C. ; Loper, M. (Ed.)