
Title: Improving Column Generation for Vehicle Routing Problems via Random Coloring and Parallelization
We consider a variant of the vehicle routing problem (VRP) where each customer has a unit demand and the goal is to minimize the total cost of routing a fleet of capacitated vehicles from one or multiple depots to visit all customers. We propose two parallel algorithms to efficiently solve the column-generation-based linear-programming relaxation for this VRP. Specifically, we focus on algorithms for the “pricing problem,” which corresponds to the resource-constrained elementary shortest path problem. The first algorithm extends the pulse algorithm, for which we derive a new bounding scheme on the maximum load of any route. The second algorithm is based on random coloring from parameterized complexity, which can also be combined with other techniques in the literature for improving VRPs, including cutting planes and column enumeration. We conduct numerical studies using VRP benchmarks (with 50–957 nodes) and instances of a medical home care delivery problem using census data in Wayne County, Michigan. Using parallel computing, both pulse and random coloring can significantly improve column generation for solving the linear programming relaxations, and we can obtain heuristic integer solutions with small optimality gaps. Combining random coloring with column enumeration, we can obtain improved integer solutions having less than 2% optimality gaps for most VRP benchmark instances and less than 1% optimality gaps for the medical home care delivery instances, both under a 30-minute computational time limit. The use of cutting planes (e.g., robust cuts) can further reduce optimality gaps on some hard instances, without much increase in the run time.

Summary of Contribution: The vehicle routing problem (VRP) is a fundamental combinatorial problem, and its variants have been studied extensively in the literature of operations research and computer science. In this paper, we consider general-purpose algorithms for solving VRPs, including the column-generation approach for the linear programming relaxations of the integer programs of VRPs and the column-enumeration approach for seeking improved integer solutions. We revise the pulse algorithm and also propose a random-coloring algorithm that can be used for solving the elementary shortest path problem that formulates the pricing problem in the column-generation approach. We show that the parallel implementation of both algorithms can significantly improve the performance of column generation, and the random coloring algorithm can improve the solution time and quality of the VRP integer solutions produced by the column-enumeration approach. We focus on algorithmic design for VRPs and conduct extensive computational tests to demonstrate the performance of various approaches.
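The random-coloring approach referenced in the abstract is the color-coding technique from parameterized complexity: randomly assign one of k colors to each customer and restrict the pricing dynamic program to "colorful" partial paths (each color used at most once), which are elementary by construction; repeating the search with independent colorings recovers a negative-reduced-cost route with good probability. The Python sketch below shows one sequential version of such a pricing heuristic under simplifying assumptions (unit demands, a single depot, dual values taken from the restricted master LP); the function name, data layout, and parameters are illustrative, not the paper's implementation, which also relies on bounding and parallel execution.

```python
import random
from math import inf

def color_coding_pricing(customers, depot, cost, duals, capacity, k, trials=20, seed=0):
    """Heuristic pricing by random coloring (color coding): a minimal sketch.

    Looks for an elementary route depot -> customers -> depot with negative
    reduced cost, visiting at most `capacity` customers (unit demands).
    `cost[i][j]` is the travel cost and `duals[i]` is the dual value of
    customer i from the restricted master LP.  Names and parameters here are
    illustrative; the paper's algorithm also uses bounding and parallelism.
    """
    rng = random.Random(seed)
    best_route, best_rc = None, 0.0
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in customers}      # random k-coloring
        # DP over states (last customer, set of colors used); a "colorful"
        # path repeats no color, hence is elementary.
        frontier = {(depot, frozenset()): (0.0, [depot])}
        states = dict(frontier)
        for _ in range(min(capacity, k)):
            nxt = {}
            for (u, used), (rc, route) in frontier.items():
                for v in customers:
                    if color[v] in used:
                        continue
                    key = (v, used | {color[v]})
                    new_rc = rc + cost[u][v] - duals[v]
                    if new_rc < nxt.get(key, (inf,))[0]:
                        nxt[key] = (new_rc, route + [v])
            frontier = nxt
            for key, val in nxt.items():
                if val[0] < states.get(key, (inf,))[0]:
                    states[key] = val
        for (u, _), (rc, route) in states.items():
            if u != depot and rc + cost[u][depot] < best_rc - 1e-9:
                best_rc, best_route = rc + cost[u][depot], route + [depot]
    return best_route, best_rc   # None if no negative-reduced-cost column found
```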
Authors:
Award ID(s):
1727618 1636876 1940766 1750127
Publication Date:
NSF-PAR ID:
10318932
Journal Name:
INFORMS Journal on Computing
ISSN:
1091-9856
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a progressive approximation algorithm for the exact solution of several classes of interdiction games in which two noncooperative players (namely an attacker and a follower) interact sequentially. The follower must solve an optimization problem that has been previously perturbed by means of a series of attacking actions led by the attacker. These attacking actions aim at augmenting the cost of the decision variables of the follower’s optimization problem. The objective, from the attacker’s viewpoint, is that of choosing an attacking strategy that reduces as much as possible the quality of the optimal solution attainable by the follower. The progressive approximation mechanism consists of the iterative solution of an interdiction problem in which the attacker actions are restricted to a subset of the whole solution space and a pricing subproblem invoked with the objective of proving the optimality of the attacking strategy. This scheme is especially useful when the optimal solutions to the follower’s subproblem intersect with the decision space of the attacker only in a small number of decision variables. In such cases, the progressive approximation method can solve interdiction games otherwise intractable for classical methods. We illustrate the efficiency of our approach on the shortest path, 0-1 knapsack and facility location interdiction games. Summary of Contribution: In this article, we present a progressive approximation algorithm for the exact solution of several classes of interdiction games in which two noncooperative players (namely an attacker and a follower) interact sequentially. We exploit the discrete nature of this interdiction game to design an effective algorithmic framework that improves the performance of general-purpose solvers. Our algorithm combines elements from mathematical programming and computer science, including a metaheuristic algorithm, a binary search procedure, a cutting-planes algorithm, and supervalid inequalities. Although we illustrate our results on three specific problems (shortest path, 0-1 knapsack, and facility location), our algorithmic framework can be extended to a broader class of interdiction problems.
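As a rough illustration of the restrict-and-price mechanism described in this entry, the skeleton below alternates between solving an interdiction problem restricted to a working subset of attacker actions and a pricing step that inspects the follower's response. Both solve_restricted and follower_best_response are hypothetical placeholders for problem-specific solvers, and the stopping rule is a simplified stand-in for the paper's optimality proof.

```python
def progressive_interdiction(all_actions, solve_restricted, follower_best_response):
    """Skeleton of a progressive-approximation loop (illustrative only).

    solve_restricted(S)            -> best attack using only actions in S
    follower_best_response(attack) -> (follower's optimal value, attacker
                                       actions its solution touches)
    Both callables are hypothetical placeholders for problem-specific
    solvers; the pricing/stopping rule below is a simplified stand-in.
    """
    working = set()                           # restricted attacker action space
    while True:
        attack = solve_restricted(working)                 # restricted interdiction problem
        value, touched = follower_best_response(attack)    # "pricing" subproblem
        new_actions = (touched & all_actions) - working
        if not new_actions:                   # follower's solution stays inside
            return attack, value              # the restricted space: stop
        working |= new_actions                # otherwise enlarge the restriction
```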
  2. Cycle representatives of persistent homology classes can be used to provide descriptions of topological features in data. However, the non-uniqueness of these representatives creates ambiguity and can lead to many different interpretations of the same set of classes. One approach to solving this problem is to optimize the choice of representative against some measure that is meaningful in the context of the data. In this work, we provide a study of the effectiveness and computational cost of several ℓ1-minimization optimization procedures for constructing homological cycle bases for persistent homology with rational coefficients in dimension one, including uniform-weighted and length-weighted edge-loss algorithms as well as uniform-weighted and area-weighted triangle-loss algorithms. We conduct these optimizations via standard linear programming methods, applying general-purpose solvers to optimize over column bases of simplicial boundary matrices. Our key findings are: 1) optimization is effective in reducing the size of cycle representatives, though the extent of the reduction varies according to the dimension and distribution of the underlying data, 2) the computational cost of optimizing a basis of cycle representatives exceeds the cost of computing such a basis in most data sets we consider, 3) the choice of linear solvers matters a lot to the computation time of optimizing cycles, 4) the computation time of solving an integer program is not significantly longer than the computation time of solving a linear program for most of the cycle representatives, using the Gurobi linear solver, 5) strikingly, whether requiring integer solutions or not, we almost always obtain a solution with the same cost, and almost all solutions found have entries in {-1, 0, 1} and therefore are also solutions to a restricted ℓ0 optimization problem, and 6) we obtain qualitatively different results for generators in Erdős-Rényi random clique complexes than in real-world and synthetic point cloud data.
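For concreteness, the uniform-weighted edge-loss optimization of a single cycle representative can be posed as minimizing ||z + Dx||_1 over chains x of 2-simplices, where D is the boundary matrix from triangles to edges; the sketch below solves the standard auxiliary-variable linear-programming reformulation with SciPy. This is a bare-bones floating-point LP for one cycle, a simplified stand-in for the rational-coefficient, basis-level computations reported in the entry.

```python
import numpy as np
from scipy.optimize import linprog

def minimize_cycle_l1(z, D):
    """Uniform-weighted edge-loss optimization of one cycle representative.

    z : (n_edges,) float vector representing a 1-cycle.
    D : (n_edges, n_triangles) dense boundary matrix of the 2-simplices.
    Solves min ||z + D x||_1 by introducing per-edge bounds t >= |z + D x|
    and returns the optimized, homologous cycle z + D x.
    """
    n_e, n_t = D.shape
    # decision vector y = [x (triangle coefficients), t (absolute-value bounds)]
    c = np.concatenate([np.zeros(n_t), np.ones(n_e)])
    A_ub = np.block([[ D, -np.eye(n_e)],     #  (z + D x) <= t
                     [-D, -np.eye(n_e)]])    # -(z + D x) <= t
    b_ub = np.concatenate([-z, z])
    bounds = [(None, None)] * n_t + [(0, None)] * n_e
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    x = res.x[:n_t]
    return z + D @ x
```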
  3. We revisit the classical spectrum allocation (SA) problem, a fundamental subproblem in optical network design, and make three contributions. First, we show how some SA problem instances may be decomposed into smaller instances that may be solved independently without loss of optimality. Second, we prove an optimality property of the well-known first-fit (FF) heuristic. Finally, we leverage this property to develop a recursive and parallel algorithm that applies the FF heuristic to find an optimal solution efficiently. This recursive FF algorithm is highly scalable because of two unique properties: (1) it completely sidesteps the symmetry inherent in SA and hence drastically reduces the solution space compared to typical integer linear programming formulations, and (2) the solution space can be naturally decomposed into non-overlapping subtrees that may be explored in parallel almost independently of each other, resulting in faster-than-linear speedup.
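To illustrate the first-fit heuristic at the core of this entry, the sketch below assigns each demand the lowest starting slot at which a contiguous block of the required width is free on every link of its path. The input layout is an assumption, and the recursive, parallel exploration of demand orderings that makes FF exact in the paper is not shown; this is a single FF pass.

```python
def first_fit_spectrum(demands):
    """One pass of the first-fit (FF) spectrum-allocation heuristic (sketch).

    Each demand is a tuple (demand_id, links, width): it needs `width`
    contiguous spectrum slots and must occupy the *same* slots on every link
    in `links` (spectrum contiguity and continuity).  Returns the starting
    slot chosen for each demand.  The input layout is illustrative.
    """
    occupied = {}                                   # link -> set of used slots
    start_of = {}
    for demand_id, links, width in demands:
        start = 0
        while any(s in occupied.get(l, set())
                  for l in links for s in range(start, start + width)):
            start += 1                              # first block free on all links
        start_of[demand_id] = start
        for l in links:
            occupied.setdefault(l, set()).update(range(start, start + width))
    return start_of
```

For example, first_fit_spectrum([("d1", ["a-b", "b-c"], 4), ("d2", ["b-c"], 2)]) returns {"d1": 0, "d2": 4}, since the second demand must start above the block already taken on link b-c.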

  4. We study the problem of finding the Löwner–John ellipsoid (i.e., an ellipsoid with minimum volume that contains a given convex set). We reformulate the problem as a generalized copositive program and use that reformulation to derive tractable semidefinite programming approximations for instances where the set is defined by affine and quadratic inequalities. We prove that, when the underlying set is a polytope, our method never provides an ellipsoid of higher volume than the one obtained by scaling the maximum-volume inscribed ellipsoid. We empirically demonstrate that our proposed method generates high-quality solutions and can be solved much faster than solving the problem to optimality. Furthermore, we outperform the existing approximation schemes in terms of solution time and quality. We present applications of our method to obtain piecewise linear decision rule approximations for dynamic distributionally robust problems with random recourse and to generate ellipsoidal approximations for the set of reachable states in a linear dynamical system when the set of allowed controls is a polytope.
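As background for this entry, the classical vertex-based formulation of the minimum-volume enclosing (Löwner–John) ellipsoid of a polytope is a log-determinant maximization; the CVXPY sketch below solves it when the vertex list is available. This is the textbook baseline rather than the copositive reformulation of the paper, which avoids vertex enumeration and also handles sets defined by quadratic inequalities.

```python
import numpy as np
import cvxpy as cp

def lowner_john_from_vertices(V):
    """Minimum-volume enclosing ellipsoid of finitely many points (sketch).

    V is an (m, n) array whose rows are, e.g., the vertices of a polytope.
    The ellipsoid is {x : ||A x + b||_2 <= 1}; its volume is proportional to
    1 / det(A), so maximizing log det(A) minimizes the volume.
    """
    m, n = V.shape
    A = cp.Variable((n, n), PSD=True)          # shape matrix of the ellipsoid
    b = cp.Variable(n)                         # center shift
    constraints = [cp.norm(A @ v + b, 2) <= 1 for v in V]   # contain every vertex
    problem = cp.Problem(cp.Maximize(cp.log_det(A)), constraints)
    problem.solve()
    return A.value, b.value
```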
  5. We study ranked enumeration of join-query results according to very general orders defined by selective dioids. Our main contribution is a framework for ranked enumeration over a class of dynamic programming problems that generalizes seemingly different problems that had been studied in isolation. To this end, we extend classic algorithms that find the k-shortest paths in a weighted graph. For full conjunctive queries, including cyclic ones, our approach is optimal in terms of the time to return the top result and the delay between results. These optimality properties are derived for the widely used notion of data complexity, which treats query size as a constant. By performing a careful cost analysis, we are able to uncover a previously unknown tradeoff between two incomparable enumeration approaches: one has lower complexity when the number of returned results is small, the other when the number is very large. We theoretically and empirically demonstrate the superiority of our techniques over batch algorithms, which produce the full result and then sort it. Our technique is not only faster for returning the first few results, but on some inputs beats the batch algorithm even when all results are produced.
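In the spirit of the heap-driven, k-shortest-path style enumeration described above, the sketch below yields the results of a single binary equi-join in non-decreasing order of a summed weight without materializing and sorting the full output. The data layout and ranking function are illustrative assumptions, much simpler than the selective-dioid framework of the paper.

```python
import heapq
from collections import defaultdict

def ranked_join(R, S):
    """Ranked enumeration of a binary equi-join (illustrative sketch).

    R and S are lists of (join_key, weight, payload); results are yielded in
    non-decreasing order of weight_R + weight_S via lazy heap expansion,
    instead of producing the whole join and then sorting it.
    Assumes join keys are mutually comparable (used only for tie-breaking).
    """
    by_key_r, by_key_s = defaultdict(list), defaultdict(list)
    for t in R:
        by_key_r[t[0]].append(t)
    for t in S:
        by_key_s[t[0]].append(t)
    heap, seen = [], set()
    for key in by_key_r.keys() & by_key_s.keys():
        by_key_r[key].sort(key=lambda t: t[1])      # sort each group once
        by_key_s[key].sort(key=lambda t: t[1])
        heap.append((by_key_r[key][0][1] + by_key_s[key][0][1], key, 0, 0))
    heapq.heapify(heap)
    while heap:
        cost, key, i, j = heapq.heappop(heap)
        r, s = by_key_r[key], by_key_s[key]
        yield cost, r[i], s[j]
        for ni, nj in ((i + 1, j), (i, j + 1)):     # lazily push the two successors
            if ni < len(r) and nj < len(s) and (key, ni, nj) not in seen:
                seen.add((key, ni, nj))
                heapq.heappush(heap, (r[ni][1] + s[nj][1], key, ni, nj))
```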