The offline pickup and delivery problem with time windows (PDPTW) is a classical combinatorial optimization problem in the transportation community, which has proven to be very challenging computationally. Due to the complexity of the problem, practical problem instances can be solved only via heuristics, which trade off solution quality for computational tractability. Among the various heuristics, a common strategy is problem decomposition, that is, the reduction of a large-scale problem into a collection of smaller sub-problems, with spatial and temporal decompositions being two natural approaches. While spatial decomposition has been successful in certain settings, effective temporal decomposition has been challenging due to the difficulty of stitching together the sub-problem solutions across the decomposition boundaries. In this work, we introduce a novel temporal decomposition scheme for solving a class of PDPTWs with narrow time windows, for which it provides both fast and high-quality solutions. We utilize techniques recently popularized in the context of online dial-a-ride problems, along with the general idea of rolling-horizon optimization. To the best of our knowledge, this is the first attempt to solve offline PDPTWs using such an approach. To demonstrate the performance and scalability of our framework, we use the optimization of paratransit services as a motivating example. Due to the lack of benchmark solvers similar to ours (i.e., temporal decomposition with an online solver), we compare our results with an offline heuristic algorithm built on Google OR-Tools. On smaller problem instances (an average of 129 requests per instance), the baseline approach is as competitive as our framework. However, on larger instances (approximately 2,500 requests per instance), our framework is more scalable and provides good solutions to instances of varying degrees of difficulty, while the baseline algorithm often fails to find a feasible solution within comparable compute times.
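The rolling-horizon idea behind the temporal decomposition above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the `Request` type, its field names, and fixed-width time buckets are all assumptions. Each bucket becomes a smaller sub-problem for an online-style solver; stitching at bucket boundaries (carrying vehicle end states forward) is noted but not shown.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Request:
    """A pickup request with a (narrow) time window. Names are illustrative."""
    rid: int
    pickup_time: int  # earliest pickup, in minutes


def rolling_horizon_batches(requests: List[Request], horizon: int) -> List[List[Request]]:
    """Partition requests into consecutive time buckets of length `horizon`.

    Each bucket becomes a smaller sub-problem that an (online-style) solver
    can handle independently; solutions are then stitched at the boundaries
    by carrying over the vehicles' end states (not shown here).
    """
    if not requests:
        return []
    ordered = sorted(requests, key=lambda r: r.pickup_time)
    start = ordered[0].pickup_time
    batches: List[List[Request]] = []
    current: List[Request] = []
    for r in ordered:
        # Advance the window until this request falls inside it;
        # empty buckets are kept so the timeline stays contiguous.
        while r.pickup_time >= start + horizon:
            batches.append(current)
            current = []
            start += horizon
        current.append(r)
    batches.append(current)
    return batches
```

With narrow time windows, each request plausibly belongs to exactly one bucket, which is what makes this kind of decomposition attractive for the problem class described above.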
Generators for Large-Scale Stochastic Simulation-Optimization Experiments
Running stochastic simulation-optimization solvers on a testbed of diverse problem instances allows users to evaluate their empirical performance, but obtaining meaningful results requires executing many replications. When problem instances feature realistic simulators, the associated computational costs are often prohibitively expensive. Cheaper, synthetic problem instances generally fail to preserve essential aspects of simulators, and a solver’s performance on them may not be representative of its performance in practice. We propose a novel class of problem instances designed to imitate important features of simulation test problems and generate representative solver performance data at relatively low computational cost. We augment existing models predominantly used for emulation, namely, Gaussian processes, generalized lambda models, and kernel regression models, with an approximation of a Gaussian copula process. This adaptation facilitates efficient coordinated sampling across solutions (via common random numbers) and across solvers (via the sharing of sample-path functions) while keeping the number of user-specified parameters manageable.
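The coordinated sampling via common random numbers mentioned above can be illustrated with a toy example. This is a sketch under stated assumptions: the `simulate` function and its additive-noise model are hypothetical stand-ins for a real simulator, not part of the paper's framework.

```python
import numpy as np


def simulate(solution_mean: float, rng: np.random.Generator, n: int = 1000) -> float:
    """Hypothetical noisy simulator: sample mean of n noisy outputs
    centered on the solution's true performance."""
    return float(np.mean(solution_mean + rng.normal(0.0, 1.0, size=n)))


def compare_with_crn(mean_a: float, mean_b: float, seed: int = 0) -> float:
    """Estimate the performance gap between two solutions using common
    random numbers: both solutions see the *same* noise stream, so the
    noise is positively correlated across solutions and largely cancels
    in the difference, giving a low-variance gap estimate."""
    perf_a = simulate(mean_a, np.random.default_rng(seed))
    perf_b = simulate(mean_b, np.random.default_rng(seed))
    return perf_a - perf_b
```

With independent seeds, the variance of the gap estimate would be the sum of the two sampling variances; with common random numbers (and this toy additive-noise model) the noise cancels almost exactly.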
- Award ID(s):
- 2410948
- PAR ID:
- 10600993
- Editor(s):
- Harper, A; Luis, M; Monks, T; Mustafee, N
- Publisher / Repository:
- Proceedings of SW25: The OR Society’s 12th Simulation Workshop
- Date Published:
- Page Range / eLocation ID:
- 229–238
- Subject(s) / Keyword(s):
- simulation-optimization experiments; Gaussian processes; generalized lambda models; kernel regression; common random numbers
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract

Background: Statistical geneticists employ simulation to estimate the power of proposed studies, test new analysis tools, and evaluate properties of causal models. Although there are existing trait simulators, there is ample room for modernization. For example, most phenotype simulators are limited to Gaussian traits or traits transformable to normality, while ignoring qualitative traits and realistic, non-normal trait distributions. Also, modern computer languages, such as Julia, that accommodate parallelization and cloud-based computing are now mainstream but rarely used in older applications. To meet the challenges of contemporary big studies, it is important for geneticists to adopt new computational tools.

Results: We present , an open-source Julia package that makes it trivial to quickly simulate phenotypes under a variety of genetic architectures. This package is integrated into our OpenMendel suite for easy downstream analyses. Julia was purpose-built for scientific programming and provides tremendous speed and memory efficiency, easy access to multi-CPU and GPU hardware, and to distributed and cloud-based parallelization. is designed to encourage flexible trait simulation, including via the standard devices of applied statistics, generalized linear models (GLMs) and generalized linear mixed models (GLMMs). also accommodates many study designs: unrelateds, sibships, pedigrees, or a mixture of all three. (Of course, for data with pedigrees or cryptic relationships, the simulation process must include the genetic dependencies among the individuals.) We consider an assortment of trait models and study designs to illustrate integrated simulation and analysis pipelines. Step-by-step instructions for these analyses are available in our electronic Jupyter notebooks on GitHub. These interactive notebooks are ideal for reproducible research.

Conclusion: The package has three main advantages. (1) It leverages the computational efficiency and ease of use of Julia to provide extremely fast, straightforward simulation of even the most complex genetic models, including GLMs and GLMMs. (2) It can be operated entirely within, but is not limited to, the integrated analysis pipeline of OpenMendel. And finally (3), by allowing a wider range of more realistic phenotype models, it brings power calculations and diagnostic tools closer to what investigators might see in real-world analyses.
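The GLM-based trait simulation the abstract describes can be sketched in miniature. This example is in Python rather than Julia and is independent of the package's actual API; the function name, the 0/1/2 genotype encoding, and the logistic-link choice are illustrative assumptions showing how a qualitative (case/control) trait can be drawn from a GLM.

```python
import numpy as np


def simulate_logistic_trait(genotypes: np.ndarray, betas: np.ndarray,
                            intercept: float, rng: np.random.Generator) -> np.ndarray:
    """Simulate a qualitative (case/control) trait under a logistic GLM.

    genotypes: (n_individuals, n_snps) matrix of 0/1/2 allele counts.
    betas:     per-SNP effect sizes on the log-odds scale.
    Returns a 0/1 phenotype vector of length n_individuals.
    """
    # Linear predictor on the log-odds scale, then inverse-logit link.
    logits = intercept + genotypes @ betas
    probs = 1.0 / (1.0 + np.exp(-logits))
    # Draw each individual's case/control status as a Bernoulli trial.
    return (rng.random(len(probs)) < probs).astype(int)
```

Extending this sketch to GLMMs would add a random effect (e.g., drawn from a kinship-structured multivariate normal) to the linear predictor, which is where the pedigree dependencies mentioned above enter.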
-
Model and Simulation Scalability Traits for Interaction (Nexus) Modeling of Water and Energy Systems
Cristina R. Martin, Azam Khan (Eds.)
It is beneficial to combine simulation models via I/O data exchanges. The Knowledge Interchange Broker (KIB) modeling approach can be used to develop interaction models that also account for time, state, operations, and concurrency. A unique advantage of the interaction model is that the composed models can retain their own specifications. The KIB is used to model the nexus of water and energy models of metropolises. The RESTful WEAP, LEAP, and DEVS-Suite tools are used to model and simulate the composition of hybrid water, nexus, and energy models. The performance of the simulations of these integrated simulators is evaluated. The results show that the DEVS and Algorithmic interaction models have nearly identical computational times. These simulation times are contrasted with the use of links that share data between WEAP and LEAP models. This research highlights the interaction model's flexibility and visibility, at roughly twice the computational time cost for data sharing.
-
Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to “modest” problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP—discrete Gaussian Markov random fields—and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study. Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method that is based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov random field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. As illustrations, we solve problems of significantly greater size than any other existing algorithm with similar guarantees can handle.
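The per-iteration bottleneck the abstract refers to is the dense GP conditional (posterior) computation. The following is a minimal sketch of that standard computation; the function, its noiseless-observation setup, and the dense linear algebra are simplifying assumptions and do not reflect rGMIA's sparse GMRF-based implementation, which exists precisely to avoid this O(n^3) cost.

```python
import numpy as np


def gp_conditional(K: np.ndarray, obs_idx: list, y_obs: np.ndarray):
    """Conditional (posterior) mean and covariance of a zero-mean Gaussian
    vector with covariance K, given noiseless observations at obs_idx.

    This dense update costs O(n^3) per refresh, which is the bottleneck
    that GMRF sparsity lets algorithms like rGMIA sidestep between
    infrequent global evaluations.
    """
    n = K.shape[0]
    rest = [i for i in range(n) if i not in obs_idx]
    K_oo = K[np.ix_(obs_idx, obs_idx)]   # observed-observed block
    K_ro = K[np.ix_(rest, obs_idx)]      # rest-observed cross block
    K_rr = K[np.ix_(rest, rest)]         # rest-rest block
    # Standard Gaussian conditioning formulas.
    mean = K_ro @ np.linalg.solve(K_oo, y_obs)
    cov = K_rr - K_ro @ np.linalg.solve(K_oo, K_ro.T)
    return rest, mean, cov
```

For a GMRF, the precision matrix (the inverse of K) is sparse, so the same conditioning can be done with sparse Cholesky factorizations instead of dense solves, which is the structural advantage exploited above.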
-
This paper introduces a library for cross-simulator comparison of reinforcement learning models in traffic signal control tasks. The library implements recent state-of-the-art reinforcement learning models with extensible interfaces and unified cross-simulator evaluation metrics. It supports commonly used simulators in traffic signal control tasks, including Simulation of Urban MObility (SUMO) and CityFlow, and multiple benchmark datasets for fair comparisons. We conducted experiments to validate our implementation of the models and to calibrate the simulators so that results from one simulator can serve as a reference for the other. Based on the validated models and calibrated environments, this paper compares and reports the performance of current state-of-the-art RL algorithms across different datasets and simulators. This is the first time these methods have been compared fairly under the same datasets with different simulators.
