Solving Convex Discrete Optimization via Simulation via Stochastic Localization Algorithms

Many decision-making problems in operations research and management science require the optimization of large-scale, complex stochastic systems. In a number of applications, the objective function is convex in the discrete decision variables, or the problem can be transformed into a convex one. In "Stochastic Localization Methods for Convex Discrete Optimization via Simulation," Zhang, Zheng, and Lavaei propose provably efficient simulation-optimization algorithms for general large-scale convex discrete optimization via simulation problems. By exploiting the convex structure together with ideas from localization and cutting-plane methods, the developed stochastic localization algorithms achieve a simulation cost that grows only polynomially in the dimension and the scale of the decision space. In addition, the simulation cost is upper bounded by a value that is independent of the objective function. The stochastic localization methods also exhibit superior numerical performance compared with existing algorithms.
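To make the localization idea concrete, here is a minimal one-dimensional sketch; the toy objective, the simulator, and the replication counts are our own illustration, not the paper's algorithm, which treats multidimensional decision spaces with formal guarantees. A stochastic bisection estimates the sign of a forward difference by repeated simulation and uses convexity to cut away half of the remaining interval:

```python
import random

def simulate(x, noise=1.0):
    """Noisy evaluation of a convex discrete objective (toy stand-in).
    True objective: f(x) = (x - 37)**2; the simulator adds Gaussian noise."""
    return (x - 37) ** 2 + random.gauss(0.0, noise)

def stochastic_bisection(lo, hi, reps=200):
    """Localize the minimizer of a 1-D convex function on {lo, ..., hi}.

    At the midpoint m, the forward difference f(m+1) - f(m) is estimated
    by averaging `reps` paired replications; its sign is a noisy
    subgradient direction, and convexity justifies discarding half of
    the remaining interval -- the 'cut' of a localization method.
    """
    while hi - lo > 1:
        m = (lo + hi) // 2
        diff = sum(simulate(m + 1) - simulate(m) for _ in range(reps)) / reps
        if diff > 0:        # f increasing at m: a minimizer lies at or below m
            hi = m
        else:               # f decreasing at m: a minimizer lies above m
            lo = m + 1
    est = lambda x: sum(simulate(x) for _ in range(reps)) / reps
    return min((lo, hi), key=est)

print(stochastic_bisection(0, 1_000_000))   # ~37 with high probability
```

The simulation effort here grows with the logarithm of the interval length times the replication count, a toy instance of the benign dependence on the scale of the decision space that the paper establishes in far greater generality.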
Monte Carlo Fictitious Play for Finding Pure Nash Equilibria in Identical Interest Games
This paper addresses the broader topic of using game-theoretic learning mechanisms to efficiently and effectively identify relevant (e.g., optimal and non-mixed) solutions to large-scale optimization problems. The longer-term goal is for the proposed Monte Carlo fictitious play (MCFP) variants to become established methods for finding pure Nash equilibria and global optima of large-scale problems.
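As a rough illustration of the underlying mechanism, the following sketch runs fictitious play on a small identical-interest game, estimating each action's payoff against the opponent's empirical action history by Monte Carlo sampling. The game matrix, sampling scheme, and parameter choices are our own stand-ins, not taken from the paper:

```python
import random

random.seed(0)

# Identical-interest game: both players receive U[i][j].
U = [[3, 0, 0],
     [0, 2, 0],
     [0, 0, 1]]
n = len(U)

def mc_payoffs(history, payoff_of, samples=50):
    """Monte Carlo estimate of each action's expected payoff against an
    opponent who plays a uniform draw from `history`."""
    est = [0.0] * n
    for _ in range(samples):
        opp = random.choice(history)
        for a in range(n):
            est[a] += payoff_of(a, opp) / samples
    return est

def mc_fictitious_play(rounds=500):
    # Seed each history with one play of every action.
    hist1, hist2 = list(range(n)), list(range(n))
    for _ in range(rounds):
        est1 = mc_payoffs(hist2, lambda a, j: U[a][j])   # player 1's view
        est2 = mc_payoffs(hist1, lambda a, i: U[i][a])   # player 2's view
        hist1.append(max(range(n), key=lambda a: est1[a]))
        hist2.append(max(range(n), key=lambda a: est2[a]))
    return hist1[-1], hist2[-1]

# Typically settles on the payoff-dominant pure equilibrium (0, 0);
# like fictitious play generally, it can settle on other pure equilibria.
print(mc_fictitious_play())
```

In identical-interest (potential) games, fictitious play is known to converge, and the pure equilibrium (0, 0) here is also the global optimum of the shared payoff, which is the connection between equilibrium-finding and optimization that such methods build on.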
- Award ID(s): 2046588
- PAR ID: 10528847
- Publisher / Repository: INFORMS
- Date Published:
- Journal Name: INFORMS Journal on Optimization
- ISSN: 2575-1484
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- A grand challenge in solving a large-scale linear inverse problem (LIP) is to retain computational efficiency and accuracy regardless of the growth of the problem size. Despite the plenitude of methods available for solving LIPs, various challenges have emerged in recent times due to the sheer volume of data, inadequate computational resources to handle an oversized problem, security and privacy concerns, and the interest in the associated incremental or decremental problems. Removing these barriers requires a holistic upgrade of the existing methods to be computationally efficient, tractable, and equipped with scalable features. We, therefore, develop the parallel residual projection (PRP), a parallel computational framework involving the decomposition of a large-scale LIP into sub-problems of low complexity and the fusion of the sub-problem solutions to form the solution to the original LIP. We analyze the convergence properties of the PRP and accentuate its benefits through its application to complex problems of network inference and gravimetric survey. We show that any existing algorithm for solving an LIP can be integrated into the PRP framework and used to solve the sub-problems while handling the prevailing challenges. (A decompose-and-fuse sketch appears after this list.)
- A classical problem in city-scale cyber-physical systems (CPS) is resource allocation under uncertainty. Spatial-temporal allocation of resources is optimized to allocate electric scooters across urban areas, place charging stations for vehicles, and design efficient on-demand transit. Typically, such problems are modeled as Markov (or semi-Markov) decision processes. While online, offline, and decentralized methodologies have been used to tackle such problems, none of the approaches scale well for large-scale decision problems. We create a general approach to hierarchical planning that leverages structure in city-level CPS problems to tackle resource allocation under uncertainty. We use emergency response as a case study and show how a large resource allocation problem can be split into smaller problems. We then create a principled framework for solving the smaller problems and tackling the interaction between them. Finally, we use real-world data from a major metropolitan area in the United States to validate our approach. Our experiments show that the proposed approach outperforms state-of-the-art approaches used in the field of emergency response. (A decomposition sketch appears after this list.)
- Partial differential equation (PDE)-constrained inverse problems are some of the most challenging and computationally demanding problems in computational science today. Fine meshes required to accurately compute the PDE solution introduce an enormous number of parameters and require large-scale computing resources such as more processors and more memory to solve such systems in a reasonable time. For inverse problems constrained by time-dependent PDEs, the adjoint method often employed to compute gradients and higher-order derivatives efficiently requires solving a time-reversed, so-called adjoint PDE that depends on the forward PDE solution at each timestep. This necessitates the storage of a high-dimensional forward solution vector at every timestep. Such a procedure quickly exhausts the available memory resources. Several approaches that trade additional computation for reduced memory footprint have been proposed to mitigate the memory bottleneck, including checkpointing and compression strategies. In this work, we propose a close-to-ideal scalable compression approach using autoencoders to eliminate the need for checkpointing and substantial memory storage, thereby reducing the time-to-solution and memory requirements. We compare our approach with checkpointing and an off-the-shelf compression approach on an earth-scale ill-posed seismic inverse problem. The results verify the expected close-to-ideal speedup for the gradient and Hessian-vector product using the proposed autoencoder compression approach. To highlight the usefulness of the proposed approach, we combine the autoencoder compression with the data-informed active subspace (DIAS) prior, showing how the DIAS method can be affordably extended to large-scale problems without the need for checkpointing and large memory. (A compression sketch appears after this list.)
- Nested Benders's decomposition is an efficient means to solve large-scale optimization problems with a natural time sequence of decisions. This paper examines the use of the technique to decompose and efficiently solve capacity-expansion problems for electricity systems with hydroelectric and renewable generators. To this end, we develop an archetypal planning model that captures key features of hydroelectric and renewable generators and apply it to a case study based on the Columbia River system in the northwestern United States of America. We apply standard network and within-year temporal simplifications to reduce the problem's size. Nevertheless, the remaining problem is large-scale, and we demonstrate the use of nested Benders's decomposition to solve it. We explore refinements of the decomposition method which yield further performance improvements. Overall, we show that nested Benders's decomposition yields good computational performance with minimal loss of model fidelity. (A two-stage Benders sketch appears after this list.)
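A minimal decompose-and-fuse sketch for the parallel residual projection item above. We use a Cimmino-style block scheme in which each block solves a small least-squares sub-problem on its own rows and the corrections are averaged into a single update; this illustrates the decomposition and fusion pattern but is not the authors' exact PRP update rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy consistent LIP: recover x_true from b = A @ x_true.
m, n, n_blocks = 200, 100, 8
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

def block_projection_solve(A, b, n_blocks, iters=500):
    """Each block projects the current residual onto its own rows (a
    small least-squares sub-problem touching only m/n_blocks rows, so
    the blocks could run on separate workers), and the corrections are
    averaged -- 'fused' -- into one global update."""
    x = np.zeros(A.shape[1])
    row_sets = np.array_split(np.arange(A.shape[0]), n_blocks)
    for _ in range(iters):
        corrections = []
        for rows in row_sets:                    # embarrassingly parallel
            Ai, bi = A[rows], b[rows]
            di, *_ = np.linalg.lstsq(Ai, bi - Ai @ x, rcond=None)
            corrections.append(di)
        x = x + np.mean(corrections, axis=0)     # fuse the sub-solutions
    return x

x = block_projection_solve(A, b, n_blocks)
print(np.linalg.norm(x - x_true))                # near zero after fusion
```

Any least-squares routine could replace the `lstsq` call inside a block, mirroring the abstract's point that existing LIP algorithms can be plugged into the framework as sub-problem solvers.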
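A toy illustration for the hierarchical-planning item above, showing why splitting helps: if each region is modeled as its own small dispatch MDP, value iteration touches S1 + S2 states rather than the S1 * S2 joint states a monolithic planner would face. The random MDPs and all names are hypothetical, and the paper's principled handling of the interactions between sub-problems is deliberately omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Standard value iteration. P[a] is an (S, S) transition matrix for
    action a, and R is an (S, A) reward matrix."""
    S, A = R.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * np.stack([P[a] @ V for a in range(A)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def random_mdp(S, A):
    P = rng.dirichlet(np.ones(S), size=(A, S))   # (A, S, S), rows sum to 1
    R = rng.random((S, A))
    return P, R

# Hypothetical setup: each region is its own small dispatch MDP. Solving
# them separately costs 20 + 30 states; a monolithic planner over the
# joint system would face 20 * 30 states (and a product action space).
for S in (20, 30):
    P, R = random_mdp(S, A=4)
    V = value_iteration(P, R)
    print(f"region with {S} states solved, V[0] = {V[0]:.3f}")
```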
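A compression sketch for the adjoint-memory item above: storing every forward state is the bottleneck, and compressing each state before storage trades accuracy (and compute) for memory. As a stand-in for the paper's neural autoencoder, this sketch uses a linear encoder/decoder built from a truncated SVD of a few sampled snapshots; the toy diffusion model and all parameters are our own:

```python
import numpy as np

# Toy forward model: diffusion on a 1-D lattice. A time-dependent
# adjoint solve would need the state u at every one of the nt steps.
nx, nt = 256, 500
x = np.linspace(-1.0, 1.0, nx)
u = np.exp(-x**2 / 0.02)
lap = -2 * np.eye(nx) + np.eye(nx, k=1) + np.eye(nx, k=-1)

states = []
for _ in range(nt):
    u = u + 0.1 * (lap @ u)          # explicit step, stable since 0.1 * 4 < 2
    states.append(u.copy())
X = np.array(states)                 # nt x nx: the memory bottleneck

# Stand-in "autoencoder": a linear encoder/decoder from a truncated SVD
# of a few sampled snapshots; stores r numbers per step instead of nx.
r = 10
_, _, Vt = np.linalg.svd(X[:: nt // 50], full_matrices=False)
basis = Vt[:r]                       # (r, nx) encoder/decoder weights

codes = X @ basis.T                  # encode: keep only nt x r in memory
recon = codes @ basis                # decode on demand during the adjoint

rel_err = np.linalg.norm(recon - X) / np.linalg.norm(X)
print(f"compression {nx / r:.0f}x, relative reconstruction error {rel_err:.1e}")
```

A trained nonlinear autoencoder plays the same compress/decompress role as `basis` here, but can reach far higher compression ratios on states a linear basis captures poorly.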
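A sketch for the nested Benders item above: nested Benders applies the classical two-stage cut-generation loop recursively along the time stages. The following shows that two-stage building block on a toy capacity-expansion instance (our own numbers): the master proposes a capacity, scenario subproblems return an expected cost and a subgradient, and the resulting cut tightens the master until the bounds meet:

```python
import numpy as np
from scipy.optimize import linprog

# Toy capacity expansion: build capacity x at cost 10/unit, then serve
# random demand; generation costs 1/unit (capped by x) and unmet demand
# is penalized at 50/unit.
BUILD, GEN, PENALTY = 10.0, 1.0, 50.0
demands = np.array([50.0, 100.0, 150.0])    # equally likely scenarios

def recourse(x):
    """Expected second-stage cost and a subgradient w.r.t. x (solved
    analytically here; an LP dual supplies these in general)."""
    served = np.minimum(x, demands)
    cost = GEN * served + PENALTY * (demands - served)
    grad = np.where(x < demands, GEN - PENALTY, 0.0)
    return cost.mean(), grad.mean()

cuts = []                                    # (a, g): theta >= a + g * x
lb, ub = -np.inf, np.inf
while ub - lb > 1e-6:
    # Master LP over (x, theta): min BUILD*x + theta subject to all cuts.
    A = [[g, -1.0] for _, g in cuts] or None
    b = [-a for a, _ in cuts] or None
    res = linprog([BUILD, 1.0], A_ub=A, b_ub=b,
                  bounds=[(0, demands.max()), (0, None)])
    x_hat, lb = res.x[0], res.fun            # master value is a lower bound
    q, g = recourse(x_hat)
    ub = min(ub, BUILD * x_hat + q)          # true cost is an upper bound
    cuts.append((q - g * x_hat, g))          # theta >= q + g * (x - x_hat)

print(f"optimal capacity {x_hat:.1f}, cost {ub:.1f}")   # 150.0, 1600.0
```

In a real capacity-expansion model each subproblem would be an LP whose duals supply the cut coefficients, and in the nested scheme each stage plays master to its successor along the time sequence of decisions.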