

Title: A Novel Distributed Dynamic Economic Dispatch Based on Dual ADMM and IPM

This paper presents a novel distributed method for dynamic economic dispatch (DED) problems with cubic fuel cost functions, based on the dual alternating direction method of multipliers (D-ADMM) and the interior point method (IPM). This scheme combines the excellent parallelism of ADMM with the superior convergence properties of the IPM: the outer loop uses ADMM to decouple the problem, and the inner loop uses the IPM to solve the resulting cubic polynomial optimization subproblems. When the unit set is divided into multiple partitions, D-ADMM also performs well in both central processing unit (CPU) runtime and number of iterations. In addition, we use a Decoupling-Decomposition-Backtracking parallel improved multiple centrality corrections decoupling interior point method (DDB-PIMCCD) to solve the larger subproblems more efficiently. Finally, numerical results, in both serial and parallel modes, on a set of 22 DED cases ranging from 8 to 1000 units show that the proposed method is very promising for large-scale distributed DED problems: it keeps the unit information of each subset confidential, which suits the market environment, and it obtains high-quality solutions in reasonable time. © 2020 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC.
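To make the outer/inner split concrete, here is a minimal single-period sketch of the decoupling idea (not the paper's D-ADMM or DDB-PIMCCD): sharing-form ADMM relaxes the power-balance constraint so that each unit solves an independent one-dimensional cubic subproblem, with a bounded scalar minimizer standing in for the inner IPM. All coefficients, limits, and the demand value are invented for illustration.

```python
# Sketch of ADMM decoupling for a one-period dispatch with cubic fuel costs.
# The per-unit 1-D solve below stands in for the paper's inner IPM; all
# coefficients, limits, and the demand value are made up for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical cubic costs C(p) = a*p^3 + b*p^2 + c*p + d per unit
coeffs = [(1.0e-5, 0.010, 2.0, 5.0),
          (2.0e-5, 0.020, 1.8, 4.0),
          (1.5e-5, 0.015, 2.2, 6.0)]
pmin = np.array([10.0, 10.0, 20.0])
pmax = np.array([100.0, 120.0, 150.0])
demand, rho = 250.0, 1.0
n = len(coeffs)

p = np.clip(np.full(n, demand / n), pmin, pmax)
zbar, u = demand / n, 0.0        # per-unit share of demand; scaled dual

for k in range(500):
    xbar = p.mean()
    p_new = np.empty(n)
    for i, (a, b, c, d) in enumerate(coeffs):
        # Independent 1-D subproblems: the step the paper solves (in block
        # form, and in parallel) with its interior point method.
        t = p[i] - xbar + zbar - u
        obj = lambda x: a*x**3 + b*x**2 + c*x + d + 0.5*rho*(x - t)**2
        p_new[i] = minimize_scalar(obj, bounds=(pmin[i], pmax[i]),
                                   method="bounded").x
    p = p_new
    u += p.mean() - zbar         # dual update enforcing sum(p) == demand
    if abs(p.sum() - demand) < 1e-4:
        break

print(f"iters={k + 1}, dispatch={np.round(p, 2)}, total={p.sum():.3f}")
```

In the paper, the units are grouped into partitions and each multi-unit block subproblem is handled by the inner IPM (DDB-PIMCCD); in this sketch every "partition" is a single unit for brevity.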

 
NSF-PAR ID: 10200115
Author(s) / Creator(s):
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name: IEEJ Transactions on Electrical and Electronic Engineering
Volume: 16
Issue: 1
ISSN: 1931-4973
Page Range / eLocation ID: p. 44-56
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Statistical relational learning models are powerful tools that combine ideas from first-order logic with probabilistic graphical models to represent complex dependencies. Despite their success in encoding large problems with a compact set of weighted rules, performing inference over these models is often challenging. In this paper, we show how to effectively combine two powerful ideas for scaling inference for large graphical models. The first idea, lifted inference, is a well-studied approach to speeding up inference in graphical models by exploiting symmetries in the underlying problem. The second idea is to frame maximum a posteriori (MAP) inference as a convex optimization problem and use the alternating direction method of multipliers (ADMM) to solve it in parallel. A well-studied relaxation of the combinatorial optimization problem defined for logical Markov random fields gives rise to a hinge-loss Markov random field (HL-MRF), for which MAP inference is a convex optimization problem. We show how the formalism introduced for coloring weighted bipartite graphs using a color refinement algorithm can be integrated with the ADMM optimization technique to take advantage of the sparse dependency structures of HL-MRFs. Our proposed approach, lifted hinge-loss Markov random fields (LHL-MRFs), preserves the structure of the original problem after lifting and solves lifted inference as distributed convex optimization with ADMM. In our empirical evaluation on real-world problems, we observe up to a three-fold speedup in inference over HL-MRFs.
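As a rough illustration of the ADMM half of this approach (lifting and color refinement are not shown), the sketch below runs consensus ADMM on a few toy hinge-loss potentials, the convex objective underlying HL-MRF MAP inference. The rule coefficients and weights are made up and do not come from any HL-MRF grounding.

```python
# Consensus-ADMM sketch for MAP over hinge-loss potentials
# f_j(x) = w_j * max(0, a_j . x + b_j) with x constrained to [0, 1]^n.
# Toy coefficients only; a real HL-MRF grounds these from weighted rules.
import numpy as np

A = np.array([[ 1.0, -1.0],
              [-1.0,  0.0],
              [ 0.0,  1.0]])
b = np.array([0.0, 0.5, -0.8])
w = np.array([2.0, 1.0, 1.0])
m, n, rho = 3, 2, 1.0

X = np.zeros((m, n))   # one local copy of x per potential
U = np.zeros((m, n))   # scaled dual variables
z = np.zeros(n)        # consensus variable

def prox_hinge(a, b, w, v, rho):
    """Closed-form argmin_x w*max(0, a.x + b) + (rho/2)*||x - v||^2."""
    t = a @ v + b
    if t <= 0:
        return v                          # hinge inactive
    if t >= (w / rho) * (a @ a):
        return v - (w / rho) * a          # fully in the linear region
    return v - (t / (a @ a)) * a          # lands exactly on the hinge

for _ in range(100):
    for j in range(m):                    # local updates (parallelizable)
        X[j] = prox_hinge(A[j], b[j], w[j], z - U[j], rho)
    z = np.clip((X + U).mean(axis=0), 0.0, 1.0)  # consensus + box projection
    U += X - z

print("MAP assignment ~", np.round(z, 3))
```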
  2. Distributed optimization algorithms are proposed to, potentially, reduce the computational time of large-scale optimization problems, such as security-constrained economic dispatch (SCED). While various geographical decomposition strategies have been presented in the literature, we propose a temporal decomposition strategy that divides the SCED problem over the considered scheduling horizon. The proposed algorithm breaks SCED across scheduling time and takes advantage of parallel computing on multi-core machines. In this paper, we investigate how to partition the overall time horizon. We study the effect of the number of partitions (i.e., SCED subproblems) on the overall performance of the distributed coordination algorithm, and the effect of the partitioning time interval on the optimal solution. In addition, the impact of the system loading condition and the ramp limits of the generating units on the number of iterations and the solution time is analyzed. The results show that increasing the number of subproblems reduces the computational burden of each subproblem, but more shared variables and constraints need to be modeled between the subproblems, which can increase the total number of iterations and consequently the solution time. Moreover, since the load behavior affects the active ramping between the subproblems, the breaking hour determines the difference between shared variables. Hence, the optimal number of subproblems is problem-dependent. A 3-bus system and the IEEE 118-bus system are selected to analyze the effect of the number of partitions.
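A minimal sketch of the temporal-decomposition idea, with invented costs, demands, and ramp limit: a four-hour dispatch is split into two two-hour subproblems, and the boundary-hour dispatch (which the ramp constraint couples across the split) is coordinated through ADMM-style consensus.

```python
# Temporal decomposition sketch: hours 0-1 and hours 2-3 are solved as
# separate subproblems; the shared boundary (hour-1) dispatch is agreed on
# via consensus. Costs, demands, ramp limit, and rho are made up.
import numpy as np
from scipy.optimize import minimize

alpha = np.array([0.02, 0.05])               # quadratic cost per generator
d = np.array([120.0, 150.0, 180.0, 140.0])   # hourly demand
pmax, R, rho = 200.0, 10.0, 0.2              # capacity, ramp limit, penalty
z = np.full(2, d[1] / 2)                     # consensus boundary dispatch
uA, uB = np.zeros(2), np.zeros(2)            # scaled duals

def solve_A(z, u):
    # x = [p(g0,h0), p(g1,h0), p(g0,h1), p(g1,h1)]; hour 1 tracks z
    obj = lambda x: (alpha @ x[:2]**2 + alpha @ x[2:]**2
                     + 0.5 * rho * np.sum((x[2:] - z + u)**2))
    cons = [{'type': 'eq', 'fun': lambda x, t=t: x[2*t] + x[2*t+1] - d[t]}
            for t in range(2)]
    r = minimize(obj, np.full(4, 60.0), method='SLSQP',
                 bounds=[(0, pmax)]*4, constraints=cons)
    return r.x[2:]                           # block A's boundary view

def solve_B(z, u):
    # x = [p(g0,h2), p(g1,h2), p(g0,h3), p(g1,h3), s0, s1]; s copies hour 1
    obj = lambda x: (alpha @ x[:2]**2 + alpha @ x[2:4]**2
                     + 0.5 * rho * np.sum((x[4:] - z + u)**2))
    cons = ([{'type': 'eq', 'fun': lambda x, t=t: x[2*t] + x[2*t+1] - d[2+t]}
             for t in range(2)]
            # ramp between the boundary copy s and hour 2, as +/- inequalities
            + [{'type': 'ineq', 'fun': lambda x, g=g, s=s: R - s*(x[g] - x[4+g])}
               for g in range(2) for s in (1.0, -1.0)])
    r = minimize(obj, np.full(6, 60.0), method='SLSQP',
                 bounds=[(0, pmax)]*6, constraints=cons)
    return r.x[4:]                           # block B's boundary view

for k in range(50):                          # the two solves can run in parallel
    pA, sB = solve_A(z, uA), solve_B(z, uB)
    z = 0.5 * (pA + uA + sB + uB)            # consensus on the shared hour
    uA += pA - z
    uB += sB - z
    if np.max(np.abs(pA - sB)) < 1e-3:
        break

print(f"iters={k + 1}, boundary dispatch ~ {np.round(z, 2)}")
```

More partitions mean more such shared boundary variables and consensus constraints, which is exactly the trade-off the abstract describes.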
  3. Andrei Ciortea; Mehdi Dastani; Jieting Luo (Eds.)
    Multi-Agent Path Finding (MAPF) is the problem of finding plans for a set of agents to reach their desired locations without colliding. The Distributed Multi-Agent Path Finder (DMAPF) solves a MAPF instance by decomposing it into smaller subproblems and solving them in parallel. DMAPF works in rounds. Between two consecutive rounds, agents may migrate between two adjacent subproblems following their pre-computed abstract plans, until all of them reach the areas that contain their desired locations. Previous works on DMAPF compute an abstract plan for each agent without knowledge of the other agents' abstract plans, resulting in high congestion in some areas, especially those that act as corridors. The congestion negatively impacts the runtime of DMAPF and prevents it from solving dense MAPF problems. In this paper, we (i) investigate the use of Uniform-Cost Search to mitigate the congestion. Additionally, we explore the use of several other techniques, including (ii) using timeout estimation to preemptively stop solving and relax a subproblem when it is likely to get stuck; (iii) allowing a solving process to manage multiple subproblems, aimed at increasing concurrency; and (iv) integrating with MAPF solvers from the Conflict-Based Search family. Experimental results show that our new system is several times faster than the previous ones; can solve larger and denser problems that were unsolvable before; and has better runtime than PBS and EECBS, which are state-of-the-art centralized suboptimal MAPF solvers, on problems with a large number of agents.
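The sketch below illustrates only idea (i), in a much-simplified form: abstract paths are planned one agent at a time with Uniform-Cost Search, and a cell's step cost grows with the number of earlier agents routed through it, steering later agents away from emerging corridors. The grid, agents, and penalty weight are invented.

```python
# Congestion-aware Uniform-Cost Search sketch: later agents pay extra for
# cells already used by earlier agents' plans. All data here are invented.
import heapq

W, H = 8, 8
agents = [((0, 0), (7, 7)), ((0, 7), (7, 0)), ((0, 3), (7, 4))]
usage = {}                                   # cell -> number of planned visits

def ucs(start, goal):
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < W and 0 <= ny < H and (nx, ny) not in seen:
                # Step cost 1 plus a penalty per agent already using the
                # cell, which spreads traffic away from corridors.
                step = 1.0 + 0.5 * usage.get((nx, ny), 0)
                heapq.heappush(frontier,
                               (cost + step, (nx, ny), path + [(nx, ny)]))
    return None

for start, goal in agents:                   # plan sequentially, updating usage
    path = ucs(start, goal)
    for cell in path:
        usage[cell] = usage.get(cell, 0) + 1
    print(start, "->", goal, ":", len(path) - 1, "steps")
```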
  4. In this paper, we study the application of quasi-Newton methods to solving empirical risk minimization (ERM) problems defined over a large dataset. Traditional deterministic and stochastic quasi-Newton methods can be executed to solve such problems; however, it is known that their global convergence rate may not be better than that of first-order methods, and their local superlinear convergence only appears towards the end of the learning process. In this paper, we use an adaptive sample size scheme that exploits the superlinear convergence of quasi-Newton methods globally and throughout the entire learning process. The main idea of the proposed adaptive sample size algorithms is to start with a small subset of data points, solve the corresponding ERM problem within its statistical accuracy, and then enlarge the sample size geometrically, using the optimal solution of the problem corresponding to the smaller set as an initial point for solving the subsequent ERM problem with more samples. We show that if the initial sample size is sufficiently large and we use quasi-Newton methods to solve each subproblem, the subproblems can be solved superlinearly fast (after at most three iterations), as we guarantee that the iterates always stay within a neighborhood in which quasi-Newton methods converge superlinearly. Numerical experiments on various datasets confirm our theoretical results and demonstrate the computational advantages of our method.
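Here is a small sketch of the adaptive sample-size loop on synthetic logistic-regression data, with SciPy's L-BFGS standing in as the quasi-Newton solver (the paper's statistical-accuracy stopping rule is not reproduced): solve on a small sample, double it, and warm-start from the previous solution.

```python
# Adaptive sample-size sketch: solve ERM on a growing sample, warm-starting
# each solve from the previous solution. Data and sizes are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, dim = 16384, 20
X = rng.normal(size=(N, dim))
w_true = rng.normal(size=dim)
y = (X @ w_true + 0.5 * rng.normal(size=N) > 0) * 2.0 - 1.0  # labels in {-1,+1}

def logistic_loss(w, Xs, ys, lam=1e-3):
    z = ys * (Xs @ w)
    loss = np.mean(np.logaddexp(0.0, -z)) + 0.5 * lam * w @ w
    grad = Xs.T @ (-ys / (1 + np.exp(z))) / len(ys) + lam * w
    return loss, grad

w = np.zeros(dim)
m = 256                                  # initial sample size
while m <= N:
    res = minimize(logistic_loss, w, args=(X[:m], y[:m]), jac=True,
                   method='L-BFGS-B')   # quasi-Newton solve on the subsample
    w = res.x                            # warm start for the next, larger ERM
    print(f"n={m:6d}  iters={res.nit:3d}  loss={res.fun:.4f}")
    m *= 2                               # geometric growth of the sample
```

With the warm start, the later (larger) solves typically need only a handful of iterations, which is the behavior the abstract's superlinear-convergence argument predicts.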
  5. Memristors have recently received significant attention as device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high density, and excellent scalability. The ability to control and modify biasing voltages at memristor terminals makes them promising candidates for efficiently performing matrix-vector multiplications and solving systems of linear equations. In this article, we discuss how networks of memristors arranged in crossbar arrays can be used to efficiently solve optimization and machine learning problems. We introduce a new memristor-based optimization framework that combines the computational merits of memristor crossbars with the advantages of an operator splitting method, the alternating direction method of multipliers (ADMM). Here, ADMM helps in splitting a complex optimization problem into subproblems that involve the solution of systems of linear equations. The strength of this framework is shown by applying it to linear programming, quadratic programming, and sparse optimization. In addition to ADMM, the implementation of a customized power iteration method for eigenvalue/eigenvector computation using memristor crossbars is discussed. The memristor-based power iteration method can further be applied to principal component analysis. The use of memristor crossbars yields a significant speed-up in computation and thus, we believe, has the potential to advance optimization and machine learning research in artificial intelligence.
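To make the ADMM connection concrete, the sketch below solves a box-constrained quadratic program in which the x-update is a fixed linear-system solve, the operation a memristor crossbar would carry out in analog, and ends with a short power iteration, another natural crossbar workload. The problem data are random and purely illustrative.

```python
# ADMM for min 0.5*x'Px + q'x subject to lo <= x <= hi. The x-update solves
# a linear system with the fixed matrix K = P + rho*I, i.e. the step a
# memristor crossbar (programmed once with K) would perform in analog.
import numpy as np

rng = np.random.default_rng(1)
n, rho = 6, 1.0
M = rng.normal(size=(n, n))
P = M @ M.T + np.eye(n)                 # positive-definite quadratic term
q = rng.normal(size=n)
lo, hi = -1.0, 1.0

x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
K = P + rho * np.eye(n)                 # fixed: one crossbar programming

for _ in range(200):
    x = np.linalg.solve(K, rho * (z - u) - q)   # the "crossbar" solve
    z = np.clip(x + u, lo, hi)                  # projection onto the box
    u += x - z                                  # dual update

print("QP solution ~", np.round(z, 4))

# Power iteration, also mentioned above: repeated matrix-vector products,
# again a natural crossbar workload.
v = rng.normal(size=n)
for _ in range(100):
    v = P @ v
    v /= np.linalg.norm(v)
print("dominant eigenvalue ~", v @ P @ v)
```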