Stochastic Constraints: How feasible is feasible?
Stochastic constraints, which constrain an expectation in the context of simulation optimization, can be hard to conceptualize and harder still to assess. As with a deterministic constraint, a solution is considered either feasible or infeasible with respect to a stochastic constraint. This perspective belies the subjective nature of stochastic constraints, which often arise when attempting to avoid alternative optimization formulations with multiple objectives or an aggregate objective with weights. Moreover, a solution’s feasibility with respect to a stochastic constraint cannot, in general, be ascertained based on only a finite number of simulation replications. We introduce different means of estimating how “close” the expected performance of a given solution is to being feasible with respect to one or more stochastic constraints. We explore how these metrics and their bootstrapped error estimates can be incorporated into plots showing a solver’s progress over time when solving a stochastically constrained problem.
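As a concrete illustration of the quantities described above, consider a single stochastic constraint of the form E[G(x)] <= 0. The sketch below is one plausible reading, not the paper's actual metric: it treats max(sample mean, 0) as an estimated feasibility gap and bootstraps the simulation replications to attach a standard error. The function and variable names are illustrative assumptions.

```python
import numpy as np

def feasibility_gap(g_obs, n_boot=1000, seed=0):
    """Estimate how far E[G(x)] <= 0 is from holding, given i.i.d. replications.

    Returns the estimated gap max(mean, 0) and a bootstrap standard error.
    A gap of 0 means the solution is estimated to be feasible.
    """
    g_obs = np.asarray(g_obs, dtype=float)
    point = max(g_obs.mean(), 0.0)
    rng = np.random.default_rng(seed)
    # Resample the replications with replacement and recompute the gap.
    idx = rng.integers(0, g_obs.size, size=(n_boot, g_obs.size))
    boot_gaps = np.maximum(g_obs[idx].mean(axis=1), 0.0)
    return point, boot_gaps.std(ddof=1)

# Example: 200 replications of a slightly infeasible solution (true mean 0.1).
reps = np.random.default_rng(1).normal(loc=0.1, scale=1.0, size=200)
gap, se = feasibility_gap(reps)
print(f"estimated feasibility gap: {gap:.3f} (bootstrap SE {se:.3f})")
```

Plotting this gap, with a band of one or two bootstrap standard errors, against the cumulative simulation budget gives one version of the progress plots mentioned above.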
- Award ID(s):
- 2035086
- PAR ID:
- 10522663
- Editor(s):
- Corlu, C G; Hunter, S R; Lam, H; Onggo, B S; Shortle, J; Biller, B
- Publisher / Repository:
- ACM
- Date Published:
- Page Range / eLocation ID:
- 3589-3600
- Format(s):
- Medium: X
- Location:
- Proceedings of the 2023 Winter Simulation Conference
- Sponsoring Org:
- National Science Foundation
More Like this
- Surrogate-based optimization (SBO) methods have gained popularity in the field of constrained optimization of expensive black-box functions. However, constraint-handling methods do not usually guarantee strictly feasible candidates during optimization. This can become an issue in applied engineering problems where design variables must remain feasible for simulations not to fail. We propose a simple constraint-handling method for computationally inexpensive constraint functions which guarantees strictly feasible candidates when using a surrogate-based optimizer. We compare our method to other SBO algorithms and an EA on five analytical test functions, and on an applied fully-resolved Computational Fluid Dynamics (CFD) problem concerned with optimizing the undulatory swimming of a fish-like body, and show that the proposed algorithm achieves favorable results while guaranteeing feasible candidates. (A hedged sketch of this pre-screening idea appears after this list.)
- We propose a framework and specific algorithms for screening a large (perhaps countably infinite) space of feasible solutions to generate a subset containing the optimal solution with high confidence. We attain this goal even when only a small fraction of the feasible solutions are simulated. To accomplish it we exploit structural information about the space of functions within which the true objective function lies, and then assess how compatible optimality is for each feasible solution with respect to the observed simulation outputs and the assumed function space. The result is a set of plausible optima. This approach can be viewed as a way to avoid slow simulation by leveraging fast optimization. Explicit formulations of the general approach are provided when the space of functions is either Lipschitz or convex. We establish both small- and large-sample properties of the approach, and provide two numerical examples. (A simplified Lipschitz-screening sketch follows the list below.)
- We consider the problem of finding a system with the best primary performance measure among a finite number of simulated systems in the presence of subjective stochastic constraints on secondary performance measures. When no feasible system exists, the decision maker may be willing to relax some constraint thresholds. We take multiple threshold values for each constraint as a user's input and propose indifference-zone procedures that perform the phases of feasibility check and selection-of-the-best sequentially or simultaneously. Given that there is no change in the underlying simulated systems, our procedures recycle simulation observations to conduct feasibility checks across all potential thresholds. We prove that the proposed procedures yield the best system in the most desirable feasible region possible with at least a pre-specified probability. Our experimental results show that our procedures perform well with respect to the number of observations required to make a decision, as compared with straightforward procedures that repeatedly solve the problem for each set of constraint thresholds, and that our simultaneously-running procedure provides the best overall performance. (A sketch of the observation-recycling step appears after this list.)
- An interior-point algorithm framework is proposed, analyzed, and tested for solving nonlinearly constrained continuous optimization problems. The main setting of interest is when the objective and inequality constraint functions may be nonlinear and/or nonconvex, and when constraint values and derivatives are tractable to compute, but objective function values and derivatives can only be estimated. The algorithm is intended primarily for a setting similar to that of stochastic-gradient methods for unconstrained optimization, i.e., the setting in which stochastic-gradient estimates are available and employed in place of gradients, and no objective function values (nor estimates of them) are employed. This is achieved by giving the interior-point framework a single-loop structure rather than the nested-loop structure that is typical of contemporary interior-point methods. Convergence guarantees for the framework are provided for both deterministic and stochastic settings. Numerical experiments show that the algorithm yields good performance on a large set of test problems. (A toy single-loop barrier step is sketched at the end of this list.)
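For the surrogate-based optimization item above, here is a minimal sketch of the cheap-constraint pre-screening idea, assuming the constraints g_i(x) < 0 are inexpensive to evaluate directly. The sampler, constraints, and all names are hypothetical, not from the cited paper.

```python
import numpy as np

def propose_strictly_feasible(sampler, constraints, n_needed, max_tries=10_000):
    """Draw candidates until n_needed satisfy every cheap constraint g(x) < 0.

    Because the constraints are inexpensive, infeasible candidates are
    discarded before the expensive black-box simulation ever runs.
    """
    feasible, tries = [], 0
    while len(feasible) < n_needed and tries < max_tries:
        x = sampler()
        tries += 1
        if all(g(x) < 0.0 for g in constraints):  # strict feasibility
            feasible.append(x)
    return feasible

# Example: sample points strictly inside a disk of radius sqrt(0.5).
rng = np.random.default_rng(0)
candidates = propose_strictly_feasible(
    sampler=lambda: rng.uniform(-1.0, 1.0, size=2),
    constraints=[lambda x: x[0] ** 2 + x[1] ** 2 - 0.5],
    n_needed=5,
)
print(np.array(candidates))
```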
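For the plausible-optima item, here is a deliberately simplified, noiseless sketch of the Lipschitz case: with exact observations f(x_j) at a few simulated points and a known Lipschitz constant L, a candidate stays plausible if its Lipschitz lower bound does not exceed the smallest upper bound over all candidates. The paper's treatment of simulation noise and its convex variant are omitted; L and all names here are assumptions.

```python
import numpy as np

def plausible_optima(X_all, X_obs, f_obs, L):
    """Return indices of candidates that could still minimize some L-Lipschitz f."""
    # Pairwise distances between every candidate and every observed point.
    d = np.linalg.norm(X_all[:, None, :] - X_obs[None, :, :], axis=2)
    lower = np.max(f_obs[None, :] - L * d, axis=1)  # tightest Lipschitz lower bound
    upper = np.min(f_obs[None, :] + L * d, axis=1)  # tightest Lipschitz upper bound
    # Keep candidates whose lower bound can still beat the best upper bound.
    return np.flatnonzero(lower <= upper.min())

X_all = np.linspace(0.0, 1.0, 101)[:, None]  # 101 feasible solutions on [0, 1]
obs_idx = np.array([10, 50, 90])
f_obs = (X_all[obs_idx, 0] - 0.3) ** 2       # noiseless objective at 3 points
keep = plausible_optima(X_all, X_all[obs_idx], f_obs, L=2.0)
print(f"{keep.size} of {X_all.shape[0]} solutions remain plausible")
```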
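For the subjective-constraints item, a minimal sketch of the observation-recycling idea: one batch of replications per system is reused to judge a secondary measure E[Y] <= q at every candidate threshold q, rather than re-simulating per threshold. The indifference-zone machinery that delivers the probability guarantee is omitted, and all names are illustrative.

```python
import numpy as np

def feasibility_by_threshold(y_reps, thresholds):
    """y_reps: (n_systems, n_reps) constraint observations.

    Returns a boolean (n_systems, n_thresholds) table of estimated
    feasibility; the same replications are reused for every threshold.
    """
    means = y_reps.mean(axis=1, keepdims=True)       # one estimate per system
    return means <= np.asarray(thresholds)[None, :]  # broadcast over thresholds

# Example: three systems, 50 replications each, four candidate thresholds.
rng = np.random.default_rng(0)
y = rng.normal(loc=[[0.8], [1.1], [1.4]], scale=0.2, size=(3, 50))
print(feasibility_by_threshold(y, thresholds=[0.9, 1.0, 1.2, 1.5]))
```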
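For the interior-point item, a toy single-loop sketch under stated assumptions: each iteration takes one step on a log-barrier function f(x) - mu * sum_i log(-c_i(x)), using a stochastic objective-gradient estimate but exact constraint values and Jacobian, with mu decayed on a fixed schedule. This illustrates the setting the abstract describes, not the paper's algorithm.

```python
import numpy as np

def barrier_step(x, grad_f_est, c, jac_c, mu, alpha):
    """One stochastic step on f(x) - mu * sum(log(-c_i(x))), with c(x) < 0."""
    cx = c(x)
    # Gradient of the barrier term is -mu * sum_i grad c_i(x) / c_i(x).
    g = grad_f_est(x) - mu * (jac_c(x).T @ (1.0 / cx))
    x_new = x - alpha * g
    return x_new if np.all(c(x_new) < 0.0) else x  # remain strictly interior

rng = np.random.default_rng(0)
c = lambda x: np.array([x[0] - 1.0, -x[0] - 1.0])  # feasible set: -1 < x < 1
jac_c = lambda x: np.array([[1.0], [-1.0]])        # constraint Jacobian
grad_f_est = lambda x: 2.0 * x + rng.normal(0.0, 0.1, size=1)  # noisy grad of x**2

x = np.array([0.9])
for k in range(500):
    x = barrier_step(x, grad_f_est, c, jac_c, mu=1.0 / (k + 2), alpha=0.02)
print(x)  # drifts toward the interior minimizer at 0
```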