
Title: Submodularity in Conic Quadratic Mixed 0–1 Optimization
We describe strong convex valid inequalities for conic quadratic mixed 0–1 optimization. These inequalities can be utilized for solving numerous practical nonlinear discrete optimization problems from value-at-risk minimization to queueing system design, from robust interdiction to assortment optimization through appropriate conic quadratic mixed 0–1 relaxations. The inequalities exploit the submodularity of the binary restrictions and are based on the polymatroid inequalities over binaries for the diagonal case. We prove that the convex inequalities completely describe the convex hull of a single conic quadratic constraint as well as the rotated cone constraint over binary variables and unbounded continuous variables. We then generalize and strengthen the inequalities by incorporating additional constraints of the optimization problem. Computational experiments on mean-risk optimization with correlations, assortment optimization, and robust conic quadratic optimization indicate that the new inequalities strengthen the convex relaxations substantially and lead to significant performance improvements.
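As a minimal illustration of the diagonal building block the abstract refers to (notation ours, not the paper's): consider the mixed 0–1 conic quadratic epigraph
$$\sqrt{\textstyle\sum_{i=1}^{n} d_i x_i^{2}} \;\le\; z, \qquad x \in \{0,1\}^{n},\ d \ge 0.$$
Because $x_i^{2} = x_i$ for binary $x_i$, the left-hand side restricted to binaries is the submodular set function $f(S) = \sqrt{\sum_{i \in S} d_i}$, and for any permutation $\sigma$ of $\{1,\dots,n\}$ the polymatroid inequality
$$\sum_{i=1}^{n} \pi_{\sigma(i)}\, x_{\sigma(i)} \;\le\; z, \qquad \pi_{\sigma(i)} = f(\{\sigma(1),\dots,\sigma(i)\}) - f(\{\sigma(1),\dots,\sigma(i-1)\}),$$
is valid for the binary epigraph. These are the diagonal-case inequalities that the paper builds on and extends to general conic quadratic mixed 0–1 constraints.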
Authors: Atamtürk, Alper; Gómez, Andrés
Award ID(s): 1818700
Publication Date:
NSF-PAR ID: 10188591
Journal Name: Operations Research
Volume: 68
Issue: 2
Page Range or eLocation-ID: 609-630
ISSN: 0030-364X
Sponsoring Org: National Science Foundation
More Like this
  1. Mirrokni, V (Ed.)
    Signal estimation problems with smoothness and sparsity priors can be naturally modeled as quadratic optimization with L0-“norm” constraints. Since such problems are nonconvex and hard to solve, the standard approach is, instead, to tackle their convex surrogates based on L1-norm relaxations. In this paper, we propose new iterative (convex) conic quadratic relaxations that exploit not only the L0-“norm” terms but also the fitness and smoothness functions. The iterative convexification approach substantially closes the gap between the L0-“norm” and its L1 surrogate. These stronger relaxations lead to significantly better estimators than L1-norm approaches and also allow one to utilize affine sparsity priors. In addition, the parameters of the model and the resulting estimators are easily interpretable. Experiments with a tailored Lagrangian decomposition method indicate that the proposed iterative convex relaxations yield solutions within 1% of the exact L0-approach and can tackle instances with up to 100,000 variables in under one minute.
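    A schematic instance of the kind of model described (notation ours; the paper's exact formulation may differ):
    $$\min_{x \in \mathbb{R}^{n}} \;\|y - x\|_2^{2} + \lambda \|Dx\|_2^{2} \quad \text{s.t.} \quad \|x\|_0 \le k,$$
    where the first term measures fit to the observed signal $y$, the second term with a difference operator $D$ encodes smoothness, and the cardinality constraint encodes sparsity; the usual L1 surrogate replaces $\|x\|_0 \le k$ with a convex penalty $\mu\|x\|_1$, and that is the gap the iterative conic quadratic relaxations aim to close.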
  2. We study the chance-constrained bin packing problem, with an application to hospital operating room planning. The bin packing problem allocates items of random sizes that follow a discrete distribution to a set of bins with limited capacity, while minimizing the total cost. The bin capacity constraints are satisfied with a given probability. We investigate a big-M and a 0-1 bilinear formulation of this problem. We analyze the bilinear structure of the formulation and use lifting techniques to identify cover, clique, and projection inequalities to strengthen the formulation. We show that in certain cases these inequalities are facet-defining for a bilinear knapsack constraint that arises in the reformulation. An extensive computational study is conducted for the operating room planning problem that minimizes the number of open operating rooms. The computational tests are performed using problems generated based on real data from a hospital. A lower-bound improvement heuristic is combined with the cuts proposed in this paper in a branch-and-cut framework. The computations illustrate that the techniques developed in this paper can significantly improve the performance of the branch-and-cut method. Problems with up to 1,000 scenarios are solved to optimality in less than an hour. A safe approximation based on conditional value at risk (CVaR) is also solved. The computations show that the CVaR approximation typically leaves a gap of one operating room (e.g., six instead of five) to satisfy the chance constraint. Summary of Contribution: This paper investigates a branch-and-cut algorithm for a chance-constrained bin packing problem with multiple bins. The chance-constrained bin packing provides a modeling framework for applied operations research problems, such as health care, scheduling, and so on. This paper studies alternative computational approaches to solve this problem. Moreover, this paper uses real data from a hospital operating room planning setting as an application to test the algorithmic ideas. This work, therefore, is at the intersection of computing and operations research. Several interesting ideas are developed and studied. These include a strengthened big-M reformulation, analysis of a bilinear reformulation, and identifying certain facet-defining inequalities for this formulation. This paper also gives a lower-bound generation heuristic for a model that minimizes the number of bins. Computational experiments for an operating room planning model that uses data from a hospital demonstrate the computational improvement and importance of the proposed approaches. The techniques proposed in this paper and computational experiments further enhance the interface of computing and operations research.
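     A generic sketch of the big-M scenario treatment of such a chance constraint (notation ours, shown with an individual chance constraint per bin; the authors' formulation may differ): with scenarios $\omega = 1,\dots,|\Omega|$ of probability $p_\omega$, realized item sizes $s_{i\omega}$, bin capacities $c_j$, assignment variables $x_{ij} \in \{0,1\}$, and violation indicators $z_{j\omega} \in \{0,1\}$,
     $$\sum_{i} s_{i\omega}\, x_{ij} \;\le\; c_j + M\, z_{j\omega} \quad \forall j,\omega, \qquad \sum_{\omega} p_{\omega}\, z_{j\omega} \;\le\; \epsilon \quad \forall j,$$
     so each bin's capacity may be exceeded only on scenarios of total probability at most $\epsilon$.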
  3. We study the low-rank phase retrieval problem, where our goal is to recover a $d_1\times d_2$ low-rank matrix from a series of phaseless linear measurements. This is a fourth-order inverse problem, as we are trying to recover factors of a matrix that have been observed, indirectly, through some quadratic measurements. We propose a solution to this problem using the recently introduced technique of anchored regression. This approach uses two different types of convex relaxations: we replace the quadratic equality constraints for the phaseless measurements by a search over a polytope and enforce the rank constraint through nuclear norm regularization. The result is a convex program in the space of $d_1 \times d_2$ matrices. We analyze two specific scenarios. In the first, the target matrix is rank-$1$, and the observations are structured to correspond to a phaseless blind deconvolution. In the second, the target matrix has general rank, and we observe the magnitudes of the inner products against a series of independent Gaussian random matrices. In each of these problems, we show that anchored regression returns an accurate estimate from a near-optimal number of measurements given that we have access to an anchor matrix of sufficient quality. We also show how to create such an anchor in the phaseless blind deconvolution problem from an optimal number of measurements and present a partial result in this direction for the general rank problem.
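     Schematically, and only as a reading of the two relaxations described above (notation ours; not the authors' exact program): with observed magnitudes $y_i = |\langle B_i, X^{\natural}\rangle|$, the equality constraints are relaxed to the polytope $\{X : |\langle B_i, X\rangle| \le y_i \ \forall i\}$, the rank constraint is replaced by nuclear-norm regularization, and correlation with an anchor matrix $A_0$ is maximized:
     $$\max_{X \in \mathbb{R}^{d_1 \times d_2}} \;\langle A_0, X\rangle - \lambda \|X\|_{*} \quad \text{s.t.} \quad |\langle B_i, X\rangle| \le y_i, \; i = 1,\dots,m.$$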
  4. We develop a variant of the multinomial logit model with impatient customers and study assortment optimization and pricing problems under this choice model. In our choice model, a customer incrementally views the assortment of available products in multiple stages. The patience level of a customer determines the maximum number of stages in which the customer is willing to view the assortments of products. In each stage, if the product with the largest utility provides larger utility than a minimum acceptable utility, which we refer to as the utility of the outside option, then the customer purchases that product right away. Otherwise, the customer views the assortment of products in the next stage as long as the customer’s patience level allows the customer to do so. Under the assumption that the utilities have the Gumbel distribution and are independent, we give a closed-form expression for the choice probabilities. For the assortment-optimization problem, we develop a polynomial-time algorithm to find the revenue-maximizing sequence of assortments to offer. For the pricing problem, we show that, if the sequence of offered assortments is fixed, then we can solve a convex program to find the revenue-maximizing prices, with which the decision variables are the probabilities that a customer reaches different stages. We build on this result to give a 0.878-approximation algorithm when both the sequence of assortments and the prices are decision variables. We consider the assortment-optimization problem when each product occupies some space and there is a constraint on the total space consumption of the offered products. We give a fully polynomial-time approximation scheme for this constrained problem. We use a data set from Expedia to demonstrate that incorporating patience levels, as in our model, can improve purchase predictions. We also check the practical performance of our approximation schemes in terms of both the quality of solutions and the computation times.
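     For reference, the single-stage multinomial logit building block behind the closed-form probabilities mentioned above (standard MNL; the paper's multi-stage expression generalizes this, and the notation is ours): a customer offered assortment $S$ with mean utilities $v_j$ and outside-option utility $v_0$ purchases product $j \in S$ with probability
     $$\Pr(j \mid S) = \frac{e^{v_j}}{e^{v_0} + \sum_{k \in S} e^{v_k}},$$
     a consequence of the independent Gumbel-distributed utility shocks assumed in the abstract.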
  5. The traveling salesman problem (TSP) is a fundamental problem in combinatorial optimization. Several semidefinite programming relaxations have been proposed recently that exploit a variety of mathematical structures including, for example, algebraic connectivity, permutation matrices, and association schemes. The main results of this paper are twofold. First, de Klerk and Sotirov [de Klerk E, Sotirov R (2012) Improved semidefinite programming bounds for quadratic assignment problems with suitable symmetry. Math. Programming 133(1):75–91.] present a semidefinite program (SDP) based on permutation matrices and symmetry reduction; they show that it is incomparable to the subtour elimination linear program but generally dominates it on small instances. We provide a family of simplicial TSP instances that shows that the integrality gap of this SDP is unbounded. Second, we show that these simplicial TSP instances imply the unbounded integrality gap of every SDP relaxation of the TSP mentioned in the survey on SDP relaxations of the TSP in section 2 of Sotirov [Sotirov R (2012) SDP relaxations for some combinatorial optimization problems. Anjos MF, Lasserre JB, eds., Handbook on Semidefinite, Conic and Polynomial Optimization (Springer, New York), 795–819.]. In contrast, the subtour linear program performs perfectly on simplicial instances. The simplicial instances thus form a natural litmus test for future SDP relaxations of the TSP.
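     For context, the subtour elimination linear program referenced above is the standard Dantzig–Fulkerson–Johnson relaxation (notation ours): with edge variables $x_e$ on graph $G = (V, E)$ and edge costs $c_e$,
     $$\min \sum_{e \in E} c_e x_e \quad \text{s.t.} \quad \sum_{e \in \delta(v)} x_e = 2 \;\; \forall v \in V, \qquad \sum_{e \in \delta(S)} x_e \ge 2 \;\; \forall\, \emptyset \ne S \subsetneq V, \qquad 0 \le x_e \le 1.$$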