Title: A Decision Rule Approach for Two-Stage Data-Driven Distributionally Robust Optimization Problems with Random Recourse
We study two-stage stochastic optimization problems with random recourse, where the coefficients of the adaptive decisions involve uncertain parameters. To deal with the infinite-dimensional recourse decisions, we propose a scalable approximation scheme via piecewise linear and piecewise quadratic decision rules. We develop a data-driven distributionally robust framework with two layers of robustness to address distributional uncertainty, and we establish out-of-sample performance guarantees for the proposed scheme. Leveraging known reformulation techniques, the resulting optimization problem can be recast as an exact copositive program that admits semidefinite programming approximations. To reduce the runtime needed to solve this program, we design an iterative decomposition algorithm that converges under some regularity conditions. Through numerical examples for several operations management applications, we demonstrate that our method produces significantly better solutions than the traditional sample-average approximation scheme, especially when the data are limited. For the problem instances in which only the recourse cost coefficients are random, our method exhibits slightly inferior out-of-sample performance but shorter runtimes compared with a competing approach.
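A minimal sketch of the underlying two-stage problem and a linear decision rule may help fix ideas (the notation below is generic and illustrative, not the paper's; the actual scheme uses piecewise linear and piecewise quadratic rules):

\[
\min_{x \in X} \; c^\top x \;+\; \sup_{\mathbb{P} \in \mathcal{P}} \, \mathbb{E}_{\mathbb{P}}\bigl[Q(x,\xi)\bigr],
\qquad
Q(x,\xi) \;=\; \min_{y} \;\bigl\{\, q(\xi)^\top y \;:\; T(\xi)\,x + W(\xi)\,y \;\ge\; h(\xi) \,\bigr\}.
\]

"Random recourse" refers to the dependence of the recourse matrix W(ξ) (and the cost q(ξ)) on the uncertain parameter ξ. A linear decision rule restricts the recourse policy to y(ξ) = Yξ for a matrix variable Y, turning the infinite-dimensional problem over policies y(·) into a finite-dimensional one; piecewise rules apply a different Y_k on each cell of a partition of the support of ξ.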
Award ID(s):
2342505 2343869
PAR ID:
10488578
Publisher / Repository:
INFORMS Journal on Computing
Journal Name:
INFORMS Journal on Computing
Volume:
36
Issue:
2
ISSN:
1091-9856
Page Range / eLocation ID:
526-542
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We study the problem of finding the Löwner–John ellipsoid (i.e., the minimum-volume ellipsoid containing a given convex set). We reformulate the problem as a generalized copositive program and use that reformulation to derive tractable semidefinite programming approximations for instances in which the set is defined by affine and quadratic inequalities. We prove that, when the underlying set is a polytope, our method never returns an ellipsoid of larger volume than the one obtained by scaling the maximum-volume inscribed ellipsoid. We empirically demonstrate that the proposed method generates high-quality solutions and can be solved much faster than the exact formulation, and that it outperforms existing approximation schemes in both solution time and quality. We present applications of our method to obtaining piecewise linear decision rule approximations for dynamic distributionally robust problems with random recourse and to generating ellipsoidal approximations of the set of reachable states of a linear dynamical system whose set of admissible controls is a polytope.
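The copositive reformulation is what handles sets described by affine and quadratic inequalities. For intuition only, here is a minimal sketch of the classical log-det formulation for the special case of a polytope given by its vertices (hypothetical data; CVXPY and a conic solver assumed available):

```python
import cvxpy as cp
import numpy as np

# Hypothetical polytope in R^2, described by its vertices.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.5]])
n = pts.shape[1]

# Parametrize the ellipsoid as {x : ||A x + b||_2 <= 1};
# its volume is proportional to det(A)^{-1}, so maximizing
# log det(A) minimizes the volume.
A = cp.Variable((n, n), PSD=True)
b = cp.Variable(n)

# Containing the polytope is equivalent to containing its vertices.
constraints = [cp.norm(A @ p + b, 2) <= 1 for p in pts]
prob = cp.Problem(cp.Maximize(cp.log_det(A)), constraints)
prob.solve()

print("Shape matrix A:\n", A.value)
print("Center offset b:", b.value)
```

This vertex-based program is exact but requires enumerating vertices, which can be exponential in the number of defining inequalities; handling inequality descriptions directly is precisely where the copositive approach comes in.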
  2. We study the assortment optimization problem when customer choices are governed by the paired combinatorial logit model. We study unconstrained, cardinality-constrained, and knapsack-constrained versions of this problem, all of which are known to be NP-hard. We design efficient algorithms that compute approximately optimal solutions, using a novel relation to the maximum directed cut problem and suitable linear-program rounding algorithms. We obtain a randomized polynomial-time approximation scheme for the unconstrained version and performance guarantees of 50% and [Formula: see text] for the cardinality-constrained and knapsack-constrained versions, respectively. These bounds improve significantly over prior work. We also obtain a performance guarantee of 38.5% for the assortment problem under more general constraints, such as multidimensional knapsack constraints (where products have multiple attributes and there is a knapsack constraint on each attribute) and partition constraints (where products are partitioned into groups and there is a limit on the number of products selected from each group). In addition, we implemented our algorithms and tested them on the random instances available in the prior literature, comparing them against an upper bound obtained using a linear program. Our average performance bounds for the unconstrained, cardinality-constrained, knapsack-constrained, and partition-constrained versions are over 99%, 99%, 96%, and 99%, respectively.
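In generic notation (the symbols r_i, P_i(S), and \mathcal{F} are illustrative, not the paper's), every variant maximizes expected revenue over admissible assortments:

\[
\max_{S \in \mathcal{F}} \;\; \sum_{i \in S} r_i \, P_i(S),
\]

where r_i is the revenue of product i, \mathcal{F} encodes the constraint (all subsets, a cardinality bound, a knapsack, or per-group partition limits), and P_i(S) is the paired combinatorial logit choice probability, which couples every pair of offered products through pair-specific dissimilarity parameters. This pairwise structure gives the objective a quadratic flavor and underlies the connection to the maximum directed cut problem mentioned above.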
  3. Trefftz methods are high-order Galerkin schemes in which all discrete functions are elementwise solutions of the PDE to be approximated. They are viable only when the PDE is linear and its coefficients are piecewise constant. We introduce a “quasi-Trefftz” discontinuous Galerkin (DG) method for the discretisation of the acoustic wave equation with piecewise-smooth material parameters: the discrete functions are elementwise approximate solutions of the PDE. We show that the new discretisation enjoys the same excellent approximation properties as the classical Trefftz one, and we prove stability and high-order convergence of the DG scheme. We introduce polynomial basis functions for the new discrete spaces and describe a simple algorithm to compute them. The technique we propose is inspired by the generalised plane waves previously developed for time-harmonic problems with variable coefficients; it turns out that, for the time-domain wave equation under consideration, the quasi-Trefftz approach allows for polynomial basis functions.
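A sketch of the distinction, with notation assumed rather than taken from the paper: writing the wave operator with wave speed c(x) as \(\Box u := c(x)^{-2}\,\partial_{tt} u - \Delta u\), a strict Trefftz space requires

\[
\Box v\big|_K = 0 \quad \text{on every mesh element } K,
\]

which is practical for polynomial bases only when c is constant on K. A quasi-Trefftz space instead asks the PDE residual to vanish to some order m at an element center \((\bar{x}_K, \bar{t}_K)\),

\[
D^{\alpha}\bigl(\Box v\bigr)(\bar{x}_K, \bar{t}_K) = 0 \quad \text{for all } |\alpha| \le m,
\]

so that \(\Box v = O\bigl(|(x,t) - (\bar{x}_K, \bar{t}_K)|^{m+1}\bigr)\), a condition that polynomial basis functions can satisfy even when c varies smoothly.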
  4. We study the assignment problem with chance constraints (CAP) and its distributionally robust counterpart, DR-CAP. We present a technique for estimating the big-M coefficients in such formulations that takes advantage of the ambiguity set, and we use a 0-1 bilinear knapsack set to develop valid inequalities for CAP and DR-CAP; this analysis generalizes to the joint chance constraint problem. A probability cut framework is also developed to solve DR-CAP. A computational study on problem instances built from real hospital surgery data shows that the developed techniques allow us to solve certain model instances outright and to reduce the computational time for others. Using a Wasserstein ambiguity set in the DR-CAP model improves out-of-sample satisfaction of the chance constraints significantly more than is possible by increasing the sample size in the sample average approximation technique. The solution time for DR-CAP instances is of the same order as that for the CAP instances, an important finding because chance-constrained optimization models are very difficult to solve when the constraint coefficients are random.
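For context on where big-M enters, here is a minimal sketch of a generic single chance constraint and its standard scenario-based mixed-integer reformulation (the paper's model adds assignment structure and joint chance constraints on top of this):

\[
\mathbb{P}\bigl[\, a(\xi)^\top x \le b \,\bigr] \;\ge\; 1 - \epsilon.
\]

Given scenarios \(\xi^1, \dots, \xi^N\) and binary variables z_i marking the scenarios allowed to violate the constraint,

\[
a(\xi^i)^\top x \;\le\; b + M z_i \quad (i = 1, \dots, N),
\qquad
\sum_{i=1}^{N} z_i \;\le\; \lfloor \epsilon N \rfloor,
\qquad
z \in \{0,1\}^N.
\]

The strength of the continuous relaxation depends directly on how small M can be taken, which is why estimating big-M using the ambiguity set pays off computationally.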
  5. Chan, Timothy; Fischer, Johannes; Iacono, John; Herman, Grzegorz (Ed.)
    We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum cost flow in undirected graphs with positive real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph G = (V, E), vertex demands b ∈ R^V such that ∑_{v ∈ V} b(v) = 0, positive edge costs c ∈ R_{≥ 0}^E, and a parameter ε > 0. In O(ε^{-2} m log^{O(1)} n) time, it returns a flow f such that the net flow out of each vertex is equal to the vertex’s demand and the cost of the flow is within a (1 ± ε) factor of optimal. Our algorithm is combinatorial and has no running time dependency on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all almost- and near-linear time approximation schemes for transshipment relied on randomization to embed the problem instance into low-dimensional space. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the Ω(n²) vertex-vertex distances that an approximation of this kind suggests, we also take advantage of the clustering method used in the well-known Thorup-Zwick distance oracle. 
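For reference, after arbitrarily orienting each undirected edge, transshipment can be written as the following linear program (generic notation):

\[
\min_{f \in \mathbb{R}^{E}} \;\; \sum_{e \in E} c_e \,\lvert f_e \rvert
\quad \text{s.t.} \quad
\sum_{e \in \delta^{+}(v)} f_e \;-\; \sum_{e \in \delta^{-}(v)} f_e \;=\; b(v) \quad \forall v \in V,
\]

where a negative f_e denotes flow against the chosen orientation; the algorithm returns a feasible f whose cost is within a (1 ± ε) factor of the optimal value.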