Title: Stochastic Decomposition for Two-Stage Stochastic Linear Programs with Random Cost Coefficients
Stochastic decomposition (SD) has been a computationally effective approach to solve large-scale stochastic programming (SP) problems arising in practical applications. By using incremental sampling, this approach is designed to discover an appropriate sample size for a given SP instance, thus precluding the need for either scenario reduction or arbitrary sample sizes to create sample average approximations (SAA). When compared with the solutions obtained using the SAA procedure, SD provides solutions of similar quality in far less computational time using ordinarily available computational resources. However, previous versions of SD were not applicable to problems with randomness in second-stage cost coefficients. In this paper, we extend its capabilities by relaxing this assumption on cost coefficients in the second stage. In addition to the algorithmic enhancements necessary to achieve this, we also present the details of implementing these extensions, which preserve the computational edge of SD. Finally, we illustrate the computational results obtained from the latest implementation of SD on a variety of test instances generated for problems from the literature. We compare these results with those obtained from the regularized L-shaped method applied to the SAA function of these problems with different sample sizes.
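For readers new to the setting, a generic two-stage stochastic linear program with random second-stage cost coefficients can be written as follows; the notation is illustrative and is not taken from the paper.

\[ \min_{x \ge 0} \; c^\top x + \mathbb{E}\bigl[ h(x, \tilde{\omega}) \bigr] \quad \text{s.t.} \quad Ax \le b, \]
where the second-stage (recourse) value function is
\[ h(x, \omega) = \min_{y \ge 0} \; d(\omega)^\top y \quad \text{s.t.} \quad Dy = \xi(\omega) - C(\omega)\, x. \]
Prior versions of SD required the cost vector d(ω) to be deterministic; the extension described above allows it to be random.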
Award ID(s): 1822327
PAR ID: 10281828
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: INFORMS Journal on Computing
Volume: 33
Issue: 1
ISSN: 1091-9856
Page Range / eLocation ID: 51 to 71
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Stochastic Programming (SP) is used in disaster management, supply chain design, and other complex problems. Many of the real-world problems to which SP is applied produce large models, and solving them quickly and efficiently is important but challenging. Existing optimization algorithms are limited in their ability to solve these larger problems. The Sample Average Approximation (SAA) method is a common approach for solving large-scale SP problems via Monte Carlo simulation. This paper focuses on applying clustering algorithms to the data before the random sample is selected for the SAA algorithm: once clustered, a sample is randomly drawn from each of the clusters instead of from the entire dataset (a minimal code sketch of this idea follows this item). The project analyzes five clustering techniques, comparing them with one another and with the original SAA algorithm, to determine whether clustering improves both the speed and the optimal solution of the SAA method for solving stochastic optimization problems.
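    A minimal sketch of the cluster-then-sample idea described above, assuming k-means clustering and proportional allocation across clusters; the function and parameter names are illustrative and not from the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def clustered_saa_sample(scenarios, n_clusters=5, sample_size=100, seed=0):
        """Cluster the scenario data, then draw the SAA sample from each
        cluster in proportion to its size, instead of from the full set."""
        rng = np.random.default_rng(seed)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(scenarios)
        chosen = []
        for k in range(n_clusters):
            members = np.flatnonzero(labels == k)
            if members.size == 0:
                continue
            # Proportional allocation: each nonempty cluster contributes
            # at least one scenario to the sample.
            n_k = max(1, round(sample_size * len(members) / len(scenarios)))
            chosen.append(rng.choice(members, size=n_k,
                                     replace=len(members) < n_k))
        return scenarios[np.concatenate(chosen)]

    # Example: 10,000 synthetic scenarios, each with 3 random parameters.
    scenarios = np.random.default_rng(1).normal(size=(10_000, 3))
    saa_sample = clustered_saa_sample(scenarios)  # then build the SAA objective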
  2. This paper studies sample average approximation (SAA) and a simple regularized variant of it for solving convex or strongly convex stochastic programming problems (the generic forms are written out after this item). Under heavy-tailed assumptions and regularity conditions comparable to those in the typical SAA literature, we show, perhaps for the first time, that the sample complexity can be completely free of any complexity measure (e.g., the logarithm of the covering number) of the feasible region. As a result, our new bounds can be more advantageous than the state of the art in terms of their dependence on the problem dimensionality.
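    In generic notation (ours, not the authors'), the SAA problem and a simple regularized variant take the form
    \[ \min_{x \in X} \; \frac{1}{N} \sum_{i=1}^{N} F(x, \xi_i) \qquad \text{and} \qquad \min_{x \in X} \; \frac{1}{N} \sum_{i=1}^{N} F(x, \xi_i) + \frac{\mu}{2} \lVert x - x_0 \rVert^2, \]
    where \(\xi_1, \dots, \xi_N\) are i.i.d. samples. The result described above says that the sample size N needed for a target accuracy need not depend on any complexity measure of the feasible region X.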
  3. We study two-stage stochastic optimization problems with random recourse, where the coefficients of the adaptive decisions involve uncertain parameters (why this case is difficult is sketched after this item). To deal with the infinite-dimensional recourse decisions, we propose a scalable approximation scheme via piecewise linear and piecewise quadratic decision rules. We develop a data-driven distributionally robust framework with two layers of robustness to address distributional uncertainty, and we establish out-of-sample performance guarantees for the proposed scheme. Using known ideas, we reformulate the resulting optimization problem as an exact copositive program that admits semidefinite programming approximations. We design an iterative decomposition algorithm, which converges under some regularity conditions, to reduce the runtime needed to solve this program. Through numerical examples for several well-known operations management applications, we demonstrate that our method produces significantly better solutions than the traditional sample-average approximation scheme, especially when the data are limited. For the problem instances in which only the recourse cost coefficients are random, our method exhibits slightly inferior out-of-sample performance but shorter runtimes compared with a competing approach.
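    To see why random recourse is hard (again in generic notation, not the authors'): when the recourse matrix W(ξ) depends on the uncertainty, even a simple linear decision rule y(ξ) = y_0 + Yξ yields constraint terms
    \[ W(\xi)\, y(\xi) \;=\; W(\xi)\, y_0 + W(\xi)\, Y \xi, \]
    which are quadratic in ξ, so the standard fixed-recourse reformulations no longer apply. The piecewise linear and piecewise quadratic decision rules mentioned above, together with the copositive reformulation, are one way to handle this structure.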
  4. Managing large-scale systems often involves simultaneously solving thousands of unrelated stochastic optimization problems, each with limited data. Intuition suggests that one can decouple these unrelated problems and solve them separately without loss of generality. We propose a novel data-pooling algorithm called Shrunken-SAA that disproves this intuition (a sketch of the pooling estimator follows this item). In particular, we prove that combining data across problems can outperform decoupling, even when there is no a priori structure linking the problems and the data are drawn independently. Our approach does not require strong distributional assumptions and applies to constrained, possibly nonconvex, nonsmooth optimization problems such as vehicle routing, economic lot sizing, or facility location. We compare and contrast our results with a similar phenomenon in statistics (Stein's phenomenon), highlighting unique features that arise in the optimization setting but are not present in estimation. We further prove that, as the number of problems grows large, Shrunken-SAA learns whether pooling can improve upon decoupling and the optimal amount to pool, even if the average amount of data per problem is fixed and bounded. Importantly, we offer a simple intuition based on stability that explains when and why data pooling offers a benefit, elucidating this perhaps surprising phenomenon. This intuition further suggests that data pooling offers the most benefit when there are many problems, each of which has a small amount of relevant data. Finally, we demonstrate the practical benefits of data pooling using real data from a chain of retail drug stores in the context of inventory management. This paper was accepted by Chung Piaw Teo, Special Issue on Data-Driven Prescriptive Analytics.
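    A sketch of the pooling estimator, based on our reading and hedged accordingly (the precise construction is in the paper): if problem k has N_k observations with empirical distribution \(\hat{P}_k\), and \(\hat{P}_0\) denotes the distribution pooled across all problems, Shrunken-SAA optimizes each problem against the mixture
    \[ \hat{P}_k(\alpha) \;=\; \frac{N_k}{N_k + \alpha}\, \hat{P}_k \;+\; \frac{\alpha}{N_k + \alpha}\, \hat{P}_0, \qquad \alpha \ge 0, \]
    where α = 0 recovers the fully decoupled SAA solutions, larger α pools more aggressively, and α itself is chosen from the data.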
  5. Complex, multimission space exploration campaigns are particularly vulnerable to payload development and launch delays due to program-level schedule constraints and interactions between the payloads. While deterministic space logistics problems seek strongly performing (e.g., minimized-cost) solutions, stochastic models must balance performance with robustness. Introducing stochastic delays into the otherwise deterministic problem produces large and computationally intractable optimization problems. This paper presents and compares two multi-objective (minimized cost versus robustness) formulations for the stochastic campaign scheduling problem. First, a multi-objective mixed-integer quadratically constrained program (MOMIQCP) formulation is presented. Second, because the MOMIQCP is computationally intractable for large problems, a method for constructing restricted, deterministic scheduling subproblems is defined. These subproblems are input to a noisy multi-objective evolutionary algorithm (NMOEA), which stochastically applies delays to the deterministic subproblems and builds approximations of the objectives of the stochastic problem. Both methods are demonstrated through case studies, and the results show that the NMOEA can successfully find strongly performing solutions to larger stochastic scheduling problems.