- PAR ID: 10488578
- Publisher / Repository: INFORMS Journal on Computing
- Date Published:
- Journal Name: INFORMS Journal on Computing
- Volume: 36
- Issue: 2
- ISSN: 1091-9856
- Page Range / eLocation ID: 526-542
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
We study the problem of finding the Löwner–John ellipsoid (i.e., the minimum-volume ellipsoid that contains a given convex set). We reformulate the problem as a generalized copositive program and use that reformulation to derive tractable semidefinite programming approximations for instances where the set is defined by affine and quadratic inequalities. We prove that, when the underlying set is a polytope, our method never provides an ellipsoid of higher volume than the one obtained by scaling the maximum-volume inscribed ellipsoid. We empirically demonstrate that our proposed method generates high-quality solutions and runs much faster than solving the problem to optimality. Furthermore, we outperform the existing approximation schemes in terms of solution time and quality. We present applications of our method to obtain piecewise linear decision rule approximations for dynamic distributionally robust problems with random recourse and to generate ellipsoidal approximations for the set of reachable states in a linear dynamical system when the set of allowed controls is a polytope.
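The paper's copositive reformulation is not reproduced here, but the classical convex formulation of the Löwner–John problem for a finite point set (for example, the vertices of a polytope) gives a feel for the object being computed: minimize -log det(A) over ellipsoids {x : ||Ax + b||₂ ≤ 1} that contain every point. The sketch below assumes cvxpy and numpy are available; the point set and its dimension are placeholders.

```python
# A minimal sketch (not the paper's copositive reformulation): the classical
# convex program for the minimum-volume ellipsoid {x : ||A x + b||_2 <= 1}
# containing a finite point set, e.g. the vertices of a polytope.
import numpy as np
import cvxpy as cp

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # unit-square vertices
n = pts.shape[1]

A = cp.Variable((n, n), PSD=True)   # shape matrix of the ellipsoid
b = cp.Variable(n)                  # centre offset

# Minimizing -log det(A) is equivalent to minimizing the ellipsoid's volume.
constraints = [cp.norm(A @ x + b, 2) <= 1 for x in pts]
prob = cp.Problem(cp.Minimize(-cp.log_det(A)), constraints)
prob.solve()

print("A =", A.value)
print("b =", b.value)
```

For the unit-square vertices used here, symmetry forces the optimal ellipsoid to be the circumscribed circle, which is a convenient sanity check on the output.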
-
In this paper, we examine a data-driven optimization approach to making optimal decisions as evaluated by a trained random forest, where these decisions can be constrained by an arbitrary polyhedral set. We model this optimization problem as a mixed-integer linear program. We show this model can be solved to optimality efficiently using Pareto-optimal Benders cuts for ensembles containing a modest number of trees. We consider a random forest approximation that consists of sampling a subset of trees and establish that this gives rise to near-optimal solutions by proving analytical guarantees. In particular, for axis-aligned trees, we show that the number of trees we need to sample is sublinear in the size of the forest being approximated. Motivated by this result, we propose heuristics inspired by cross-validation that optimize over smaller forests rather than one large forest and assess their performance on synthetic datasets. We present two case studies on a property investment problem and a jury selection problem. We show this approach performs well against other benchmarks while providing insights into the sensitivity of the algorithm's performance for different parameters of the random forest.
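The Benders-based MILP itself is beyond a short snippet, but the tree-sampling approximation behind the analytical guarantees is easy to illustrate: draw a random sub-forest and compare its evaluation of a candidate decision with the full forest's. The sketch below uses scikit-learn and synthetic data; the forest size, sub-forest size k, and candidate decision are all hypothetical placeholders.

```python
# Illustration of the tree-sampling approximation (not the Benders-based MILP):
# evaluate a candidate decision with a random sub-forest and compare against
# the full forest's prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                       # synthetic training data
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

x_candidate = np.array([[0.2, 0.7, 0.4]])            # a feasible decision to evaluate
k = 30                                               # sub-forest size (hypothetical)
sampled = rng.choice(forest.estimators_, size=k, replace=False)

full_value = forest.predict(x_candidate)[0]
sub_value = np.mean([tree.predict(x_candidate)[0] for tree in sampled])
print(f"full forest: {full_value:.3f}   sampled {k} trees: {sub_value:.3f}")
```

In the paper's setting the sub-forest would be optimized over (rather than merely evaluated), but the comparison above conveys why a modest sample of trees can track the full ensemble closely.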
-
We study the assortment optimization problem when customer choices are governed by the paired combinatorial logit model. We study unconstrained, cardinality-constrained, and knapsack-constrained versions of this problem, which are all known to be NP-hard. We design efficient algorithms that compute approximately optimal solutions, using a novel relation to the maximum directed cut problem and suitable linear-program rounding algorithms. We obtain a randomized polynomial time approximation scheme for the unconstrained version and performance guarantees of 50% and [Formula: see text] for the cardinality-constrained and knapsack-constrained versions, respectively. These bounds improve significantly over prior work. We also obtain a performance guarantee of 38.5% for the assortment problem under more general constraints, such as multidimensional knapsack (where products have multiple attributes and there is a knapsack constraint on each attribute) and partition constraints (where products are partitioned into groups and there is a limit on the number of products selected from each group). In addition, we implemented our algorithms and tested them on random instances available in prior literature. We compared our algorithms against an upper bound obtained using a linear program. Our average performance bounds for the unconstrained, cardinality-constrained, knapsack-constrained, and partition-constrained versions are over 99%, 99%, 96%, and 99%, respectively.
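The max-directed-cut relation and LP-rounding algorithms are too involved for a short example, but the underlying paired combinatorial logit (PCL) model can be sketched: a product's choice probability is assembled from pairwise nest terms weighted by dissimilarity parameters. The snippet below uses the standard PCL probability formula and treats the no-purchase option as an extra alternative with utility zero, which is an assumed convention and may differ from the paper's parameterization; numpy is assumed.

```python
# A small illustration of the paired combinatorial logit (PCL) choice model
# (not the paper's rounding algorithms). Alternative 0 is the no-purchase
# option with utility 0; this convention is an assumption.
import numpy as np
from itertools import combinations

def pcl_probabilities(utilities, sigma):
    """utilities: dict alt -> deterministic utility (include 0 for no-purchase);
    sigma: dict frozenset({i, j}) -> dissimilarity parameter in (0, 1]."""
    alts = sorted(utilities)
    pair_terms, denom = {}, 0.0
    for i, j in combinations(alts, 2):
        s = sigma[frozenset((i, j))]
        yi, yj = np.exp(utilities[i] / s), np.exp(utilities[j] / s)
        pair_terms[(i, j)] = (yi, yj, s)
        denom += (yi + yj) ** s                      # nest contribution to the denominator
    probs = {a: 0.0 for a in alts}
    for (i, j), (yi, yj, s) in pair_terms.items():
        base = (yi + yj) ** (s - 1.0)
        probs[i] += yi * base / denom                # share of nest {i, j} won by i
        probs[j] += yj * base / denom
    return probs

# Example: offer products {1, 2} with utilities 1.0 and 0.5; all dissimilarities 0.7.
utils = {0: 0.0, 1: 1.0, 2: 0.5}
sig = {frozenset(p): 0.7 for p in combinations(utils, 2)}
print(pcl_probabilities(utils, sig))
```

Summing a product's revenue against these probabilities gives the expected revenue of an assortment, which is the objective the paper's algorithms approximately maximize.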
-
Trefftz methods are high-order Galerkin schemes in which all discrete functions are elementwise solutions of the PDE to be approximated. They are viable only when the PDE is linear and its coefficients are piecewise-constant. We introduce a "quasi-Trefftz" discontinuous Galerkin (DG) method for the discretisation of the acoustic wave equation with piecewise-smooth material parameters: the discrete functions are elementwise approximate PDE solutions. We show that the new discretisation enjoys the same excellent approximation properties as the classical Trefftz one, and prove stability and high-order convergence of the DG scheme. We introduce polynomial basis functions for the new discrete spaces and describe a simple algorithm to compute them. The technique we propose is inspired by the generalised plane waves previously developed for time-harmonic problems with variable coefficients; it turns out that, in the case of the time-domain wave equation under consideration, the quasi-Trefftz approach allows for polynomial basis functions.
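As a rough sketch of the idea (the paper's precise formulation, e.g. a first- versus second-order form of the acoustic equation, may differ), the Trefftz condition requires each discrete function to solve the PDE exactly on every element, while the quasi-Trefftz condition only requires the PDE residual to vanish to a prescribed order at the element centre:

```latex
% Sketch of the Trefftz vs. quasi-Trefftz conditions for the second-order
% acoustic wave equation with wave speed c(x); the paper's exact formulation
% may differ from this illustration.
\[
  \mathcal{L}_c u \;:=\; \partial_t^2 u - \nabla\!\cdot\!\bigl(c^2(\mathbf{x})\,\nabla u\bigr).
\]
% Trefftz space on a space-time element K (needs piecewise-constant c):
\[
  \mathbb{T}(K) \;=\; \{\, v \;:\; \mathcal{L}_c v = 0 \ \text{in } K \,\}.
\]
% Quasi-Trefftz space of order q: the PDE residual only has to vanish to
% order q at the element centre (x_K, t_K), i.e. v is an approximate local
% solution, which remains meaningful for piecewise-smooth c:
\[
  \mathbb{QT}^{\,q}(K) \;=\; \{\, v \;:\; D^{\boldsymbol{\alpha}}(\mathcal{L}_c v)(\mathbf{x}_K, t_K) = 0
  \ \ \text{for all space-time multi-indices } |\boldsymbol{\alpha}| \le q \,\}.
\]
```

Because the quasi-Trefftz condition is a finite set of derivative constraints at a single point, it can be satisfied by polynomial basis functions and tolerates smoothly varying coefficients, which is the flexibility the abstract highlights.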
-
We study the assignment problem with chance constraints (CAP) and its distributionally robust counterpart DR-CAP. We present a technique for estimating big-M in such a formulation that takes advantage of the ambiguity set. We consider a 0-1 bilinear knapsack set to develop valid inequalities for CAP and DR-CAP. This is generalized to the joint chance constraint problem. A probability cut framework is also developed to solve DR-CAP. A computational study on problem instances obtained from real hospital surgery data shows that the developed techniques allow us to solve certain model instances and reduce the computational time for others. The use of a Wasserstein ambiguity set in the DR-CAP model improves the out-of-sample performance of satisfying the chance constraints more significantly than is possible by increasing the sample size in the sample average approximation technique. The solution time for DR-CAP model instances is of the same order as that for solving the CAP instances. This finding is important because chance-constrained optimization models are very difficult to solve when the coefficients in the constraints are random.
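For readers unfamiliar with where the big-M appears, the generic sample-based (SAA) treatment of a chance constraint is sketched below; it only illustrates the structure and is not the paper's CAP/DR-CAP formulation or its ambiguity-set-based big-M estimate.

```latex
% Generic sample-based (SAA) treatment of a chance constraint, shown only to
% illustrate where a big-M coefficient enters; the paper's CAP / DR-CAP models
% and their tightened big-M estimates are more specific than this sketch.
% Chance constraint over random data \xi, risk level \epsilon:
\[
  \mathbb{P}\bigl(a(\xi)^\top x \le b(\xi)\bigr) \;\ge\; 1 - \epsilon .
\]
% With scenarios \xi^1,\dots,\xi^N and binary indicators z_s (z_s = 1 means
% scenario s may be violated), the big-M reformulation reads:
\[
  a(\xi^s)^\top x \;\le\; b(\xi^s) + M z_s \quad (s = 1,\dots,N), \qquad
  \sum_{s=1}^{N} z_s \;\le\; \lfloor \epsilon N \rfloor, \qquad z_s \in \{0,1\}.
\]
% M must be large enough that z_s = 1 makes constraint s redundant; smaller
% valid choices of M yield tighter linear relaxations, which is why estimating
% big-M carefully matters.
```

The quality of the big-M value directly affects how tight the continuous relaxation is, which is why the abstract emphasizes estimating it using the ambiguity set.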