Abstract Buildings consume a large share of the energy used in the United States, and optimally managing and coordinating resources across building and power-distribution networks is important for improving overall efficiency. Optimizing a power grid with discrete variables is challenging for traditional computers and algorithms because the problem is NP-hard. In this study, we developed a quantum-computing-based optimization solution for building-to-grid (BTG) integration. We first applied model predictive control (MPC) to building loads connected to a commercial distribution grid to reduce operating cost. We then used discretization and Benders decomposition to reformulate the problem and separate the continuous and discrete variables. A D-Wave quantum computer solved the dual problems, while conventional algorithms solved the primal problems. We applied the proposed method to an IEEE 9-bus network with 3 commercial buildings and over 300 residential buildings to evaluate its feasibility and effectiveness. Compared with traditional optimization methods, the approach produced similar solutions, with some fluctuations, while reducing computation time from hours to seconds; the quantum-computing time was less than 1% of that required by traditional optimization algorithms and software such as MATLAB. Quantum computing thus shows potential for solving large-scale discrete optimization problems in urban energy systems.
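The step that makes a quantum annealer applicable is the mapping of the discrete decisions into a quadratic unconstrained binary optimization (QUBO) problem. The toy sketch below is not the formulation used in the study: the load data, penalty weight, and the brute-force enumeration standing in for the D-Wave sampler are all illustrative assumptions. It only shows how a simple curtailment constraint can be folded into a QUBO matrix as a quadratic penalty.

```python
import itertools
import numpy as np

# Toy QUBO: choose which of 3 flexible building loads to curtail so that
# roughly "budget" units of demand are shed at minimum discomfort cost.
# The constraint shed @ x ~= budget is folded in as a quadratic penalty,
# which is the usual way discrete constraints reach an annealer.
shed = np.array([2.0, 3.0, 1.0])      # kW shed if load i is curtailed (made up)
cost = np.array([1.0, 4.0, 2.0])      # discomfort cost of curtailing load i (made up)
budget, penalty = 3.0, 10.0

n = len(shed)
Q = np.diag(cost)                     # linear terms sit on the QUBO diagonal
# Expand penalty * (shed @ x - budget)**2 into QUBO form (x_i**2 == x_i for binaries);
# the constant penalty * budget**2 is dropped since it does not affect the argmin.
Q += penalty * np.outer(shed, shed)
Q[np.diag_indices(n)] -= 2.0 * penalty * budget * shed

# A quantum annealer would sample low-energy bit strings of x^T Q x;
# for three variables we simply enumerate them classically.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("curtail loads:", best)
```

On this toy instance the enumeration returns (1, 0, 1), i.e., curtailing the two cheaper loads whose combined shed exactly meets the 3 kW budget.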
Nested Benders’s decomposition of capacity-planning problems for electricity systems with hydroelectric and renewable generation
Abstract Nested Benders’s decomposition is an efficient means to solve large-scale optimization problems with a natural time sequence of decisions. This paper examines the use of the technique to decompose and solve efficiently capacity-expansion problems for electricity systems with hydroelectric and renewable generators. To this end we develop an archetypal planning model that captures key features of hydroelectric and renewable generators and apply it to a case study that is based on the Columbia River system in the northwestern United States of America. We apply standard network and within-year temporal simplifications to reduce the problem’s size. Nevertheless, the remaining problem is large-scale and we demonstrate the use of nested Benders’s decomposition to solve it. We explore refinements of the decomposition method which yield further performance improvements. Overall, we show that nested Benders’s decomposition yields good computational performance with minimal loss of model fidelity.
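Nested Benders's decomposition applies the same master/subproblem cut mechanics recursively along the time sequence of decisions. The sketch below is limited to a single two-stage pair, which is the basic building block: a toy capacity-expansion master (binary build decisions, solved here by enumeration) exchanges optimality cuts with a dispatch LP. All data are invented, the dual-sign handling assumes SciPy's HiGHS interface, and this is an illustration of the cut mechanics rather than the paper's model.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy two-stage capacity expansion (illustrative data only):
# first stage chooses which candidate plants to build (binary y),
# second stage dispatches them plus an expensive fallback to meet demand.
build_cost = np.array([10.0, 14.0])      # fixed cost of building plant 1, 2
cap = np.array([40.0, 60.0])             # capacity unlocked by each build
gen_cost = np.array([1.0, 0.8, 5.0])     # x = [plant-1 gen, plant-2 gen, fallback]
demand = 70.0

# Coupling constraints written as A x >= b - B y:
#   generation limited by built capacity, and total generation covers demand.
A = np.array([[-1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [1.0, 1.0, 1.0]])
B = np.array([[cap[0], 0.0],
              [0.0, cap[1]],
              [0.0, 0.0]])
b = np.array([0.0, 0.0, demand])

def dispatch(y):
    """Second-stage LP for fixed builds y: optimal value and duals of the >= rows."""
    rhs = b - B @ y
    res = linprog(gen_cost, A_ub=-A, b_ub=-rhs, bounds=(0, None), method="highs")
    # HiGHS marginals are d(objective)/d(b_ub); flip the sign to recover pi >= 0.
    return res.fun, -res.ineqlin.marginals

cuts = []                                 # each cut reads: theta >= pi @ (b - B y)

def theta(y):
    return max((pi @ (b - B @ y) for pi in cuts), default=0.0)

best = (np.inf, None)
for _ in range(10):
    # Master problem: enumeration stands in for a MILP solver (or, in the
    # quantum-assisted setting of the first abstract, an annealer).
    y = np.array(min(itertools.product([0, 1], repeat=2),
                     key=lambda yy: build_cost @ np.array(yy) + theta(np.array(yy))))
    lower = build_cost @ y + theta(y)
    sub_val, pi = dispatch(y)
    upper = build_cost @ y + sub_val
    if upper < best[0]:
        best = (upper, tuple(y))
    if upper - lower < 1e-6:              # lower and upper bounds meet: done
        break
    cuts.append(pi)                       # add the Benders optimality cut

print("build plan:", best[1], "total cost:", round(best[0], 2))
```

Because the fallback source keeps the dispatch LP feasible for every build plan, only optimality cuts are needed here; a fuller implementation would also generate feasibility cuts, and the nested variant would pass cuts backwards through every stage of the planning horizon. On this toy instance the loop converges in three iterations to building both plants at a total cost of 82.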
- Award ID(s): 1808169
- PAR ID: 10485714
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Computational Management Science
- Volume: 21
- Issue: 1
- ISSN: 1619-697X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract While the problem of mechanized proof of liveness of reactive programs has been studied for decades, there is currently no method of proving liveness that is conceptually simple to apply in practice to realistic problems, can be scaled to large problems without modular decomposition, and does not fail unpredictably due to the use of fragile heuristics. We introduce a method of liveness proof by relational rankings, implement it, and show that it meets these criteria in a realistic industrial case study involving a model of the memory subsystem in a CPU.
- SUMMARY A fast algorithm for the large-scale joint inversion of gravity and magnetic data is developed. The algorithm uses a non-linear Gramian constraint to impose correlation between the density and susceptibility of the reconstructed models. The global objective function is formulated in the space of the weighted parameters, but the Gramian constraint is implemented in the original space, and the non-linear constraint is imposed using two separate Lagrange parameters, one for each model domain. It is significant that this combined approach, using the two spaces, provides more similarity between the reconstructed models. Moreover, it is shown theoretically that the gradient for the unweighted space is not a scalar multiple of that for the weighted space, and hence cannot be accounted for by adjusting the Lagrange parameters. It is assumed that the measured data are obtained on a uniform grid and that a consistent regular discretization of the volume domain is imposed. Then, the sensitivity matrices exhibit a block-Toeplitz-Toeplitz-block structure for each depth layer of the model domain, and both forward and transpose operations with the matrices can be implemented efficiently using two-dimensional fast Fourier transforms. This makes it feasible to solve large-scale problems with respect to both computational cost and memory demand, and to solve the non-linear problem by applying iterative methods that rely only on matrix-vector multiplications. As such, the use of the regularized reweighted conjugate gradient algorithm, in conjunction with the structure of the sensitivity matrices, leads to a fast methodology for large-scale joint inversion of geophysical data sets. Numerical simulations demonstrate that it is possible to apply a non-linear joint inversion algorithm, with Lp-norm stabilisers, for the reconstruction of large model domains on a standard laptop computer. It is demonstrated that, while the p = 1 choice provides sparse reconstructed solutions with sharp boundaries, it is also possible to use p = 2 in order to provide smooth and blurred models. The methodology is used for inverting gravity and magnetic data obtained over an area in the northwest of the Mesoproterozoic St Francois Terrane, southeast Missouri, USA. (A generic sketch of the FFT-based BTTB matrix-vector product appears after this list.)
- Abstract The generalized singular value decomposition (GSVD) is a valuable tool that has many applications in computational science. However, computing the GSVD for large-scale problems is challenging. Motivated by applications in hyper-differential sensitivity analysis (HDSA), we propose new randomized algorithms for computing the GSVD which use randomized subspace iteration and weighted QR factorization. Detailed error analysis is given which provides insight into the accuracy of the algorithms and the choice of the algorithmic parameters. We demonstrate the performance of our algorithms on test matrices and a large-scale model problem where HDSA is used to study subsurface flow. (A sketch of the randomized subspace-iteration building block appears after this list.)
- Abstract A grand challenge in solving a large-scale linear inverse problem (LIP) is to retain computational efficiency and accuracy regardless of the growth of the problem size. Despite the plenitude of methods available for solving LIPs, various challenges have emerged in recent times due to the sheer volume of data, inadequate computational resources to handle an oversized problem, security and privacy concerns, and interest in the associated incremental or decremental problems. Removing these barriers requires a holistic upgrade of the existing methods so that they are computationally efficient, tractable, and equipped with scalable features. We therefore develop the parallel residual projection (PRP), a parallel computational framework involving the decomposition of a large-scale LIP into sub-problems of low complexity and the fusion of the sub-problem solutions to form the solution to the original LIP. We analyze the convergence properties of the PRP and accentuate its benefits through its application to complex problems of network inference and gravimetric survey. We show that any existing algorithm for solving an LIP can be integrated into the PRP framework and used to solve the sub-problems while handling the prevailing challenges. (A sketch of a generic block-projection decompose-and-fuse solver appears after this list.)
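The block-Toeplitz-Toeplitz-block (BTTB) structure highlighted in the joint-inversion abstract above is what allows forward and transpose products to be computed with 2-D FFTs. The following generic sketch (not the authors' code; the kernel and grid sizes are arbitrary) shows the standard circulant-embedding trick for a single depth layer, together with a dense sanity check.

```python
import numpy as np

def bttb_matvec(kernel, x):
    """Multiply a BTTB matrix by a 2-D model x via circulant embedding and FFTs.

    kernel has shape (2m-1, 2n-1); kernel[m-1+di, n-1+dj] is the matrix entry
    coupling grid points offset by (di, dj). Cost is O(mn log mn) instead of
    the O((mn)^2) of an explicit dense product.
    """
    m, n = x.shape
    c = np.zeros((2 * m - 1, 2 * n - 1))
    c[:m, :n] = kernel[m - 1:, n - 1:]   # non-negative row/column offsets
    c[:m, n:] = kernel[m - 1:, :n - 1]   # negative column offsets, wrapped
    c[m:, :n] = kernel[:m - 1, n - 1:]   # negative row offsets, wrapped
    c[m:, n:] = kernel[:m - 1, :n - 1]
    x_pad = np.zeros_like(c)
    x_pad[:m, :n] = x
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(x_pad)).real
    return y[:m, :n]

# Sanity check against an explicitly assembled dense matrix on a tiny grid.
m, n = 4, 5
rng = np.random.default_rng(0)
kernel = rng.standard_normal((2 * m - 1, 2 * n - 1))
x = rng.standard_normal((m, n))
dense = np.array([[kernel[m - 1 + p - i, n - 1 + q - j]
                   for i in range(m) for j in range(n)]
                  for p in range(m) for q in range(n)])
assert np.allclose(dense @ x.ravel(), bttb_matvec(kernel, x).ravel())
```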
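The randomized GSVD abstract above builds on randomized subspace iteration. The sketch below shows only that generic building block, the standard randomized range finder with power iterations applied to an ordinary SVD; the weighted QR factorization and GSVD-specific steps of the paper are not reproduced, and the parameter choices are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_power=2, oversample=10, seed=0):
    """Approximate leading singular pairs of A via randomized subspace iteration.

    A random sketch captures the dominant column space, power iterations tighten
    it, and an SVD of the small projected matrix recovers the factors.
    """
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(Y)                      # orthonormal basis of the sketch
    for _ in range(n_power):                    # power iterations sharpen the subspace
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    # Factor the small projected matrix and lift the left singular vectors back.
    Uhat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uhat)[:, :rank], s[:rank], Vt[:rank]
```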
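The parallel residual projection abstract above rests on decomposing a large linear inverse problem into low-complexity sub-problems and fusing their solutions. The sketch below illustrates that decompose-and-fuse pattern with a generic block-projection (Cimmino-style) iteration on a consistent toy system; it is not the authors' PRP algorithm, and the block count and iteration budget are arbitrary.

```python
import numpy as np

def block_projection_solve(A, b, n_blocks=4, n_iter=500):
    """Split the rows of A x = b into blocks, project onto each block's solution
    set, and fuse the corrections by averaging.

    Each block update is independent, so in a real setting the loop body could be
    distributed across workers; this is a generic decompose-and-fuse sketch.
    """
    blocks = np.array_split(np.arange(A.shape[0]), n_blocks)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        corrections = [np.linalg.lstsq(A[idx], b[idx] - A[idx] @ x, rcond=None)[0]
                       for idx in blocks]            # embarrassingly parallel
        x += np.mean(corrections, axis=0)            # fusion step
    return x

# Tiny consistent test problem with a unique solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((120, 80))
x_true = rng.standard_normal(80)
x_hat = block_projection_solve(A, A @ x_true, n_blocks=6)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```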