
Title: Decentralized optimization of energy-water nexus based on a mixed-integer boundary compatible algorithm
The electric power distribution system (PDS) and the water distribution system (WDS) are coupled through electricity-driven water facilities (EdWFs), such as pumps, water desalination plants, and wastewater treatment facilities. However, they are generally owned and operated by different utilities, and no single operator possesses full information about both systems. As a result, centralized methods are not applicable for coordinating the operation of the two systems. This paper proposes a decentralized framework in which the PDS and WDS operators solve their own operation problems independently, sharing only limited information. However, the boundary variables (i.e., the variables shared between the two systems) are discontinuous because they depend on the on/off nature of the EdWFs, and mature decentralized/distributed optimization algorithms such as the alternating direction method of multipliers (ADMM) cannot guarantee convergence and optimality in such cases. This paper therefore develops a novel algorithm that guarantees convergence and optimality for the decentralized optimization of the PDS and WDS, building on a recently developed algorithm called the SD-GS-AL method. The SD-GS-AL method combines the simplicial decomposition (SD), Gauss–Seidel (GS), and augmented Lagrangian (AL) methods, and guarantees convergence and optimality for mixed-integer programs (MIPs) with continuous boundary variables. Nonetheless, the original SD-GS-AL algorithm does not work for the PDS-WDS coordination problem, where the boundary variables are discontinuous. This paper modifies and improves the original SD-GS-AL algorithm by introducing update rules for the discontinuous boundary variables (the Auxiliary Variables Update step).
The proposed mixed-integer boundary compatible (MIBC) SD-GS-AL algorithm has two main benefits: (1) it handles cases whose boundary variables are discontinuous, with convergence and optimality guaranteed under mild assumptions, and (2) it requires only limited information exchange between the PDS and WDS operators, which helps preserve the privacy of the two utilities and reduces the investment needed for additional communication channels. Simulations on two coupled PDS-WDS test cases (Case 1: IEEE 13-node PDS with an 11-node WDS; Case 2: IEEE 37-node PDS with a 36-node WDS) show that the proposed MIBC algorithm converges to the optimal solutions, whereas the original SD-GS-AL does not converge for either test case. The ADMM does not converge for the first test case, and for the second test case it converges to a sub-optimal solution whose cost is 63% higher than the optimum.
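To make the decentralized setting concrete, the following is a minimal sketch of standard consensus ADMM for two operators sharing a single continuous boundary variable. This is not the paper's MIBC SD-GS-AL algorithm, and the quadratic local costs are hypothetical illustrations; it only shows the structure of the framework — each operator minimizes a private cost and exchanges only the boundary variable and a dual price. With smooth costs like these, ADMM converges; the paper's point is that when the boundary variable is discontinuous (tied to on/off decisions of EdWFs), this convergence guarantee breaks down.

```python
# Minimal consensus-ADMM sketch for two coupled operators (hypothetical
# costs; NOT the paper's MIBC SD-GS-AL algorithm). Operator 1 has private
# cost (x - a)^2, operator 2 has (x - b)^2, and both must agree on the
# shared boundary variable z. Only z and local copies are exchanged.

def consensus_admm(a=1.0, b=3.0, rho=1.0, iters=100):
    """Solve min (x-a)^2 + (x-b)^2 via two agents with consensus ADMM."""
    z = 0.0            # shared boundary variable (the only coordinated value)
    x1 = x2 = 0.0      # local copies held by each operator
    u1 = u2 = 0.0      # scaled dual variables (disagreement prices)
    for _ in range(iters):
        # each operator minimizes its own cost plus a quadratic coupling term
        x1 = (2 * a + rho * (z - u1)) / (2 + rho)
        x2 = (2 * b + rho * (z - u2)) / (2 + rho)
        # limited information exchange: average the shifted local copies
        z = 0.5 * ((x1 + u1) + (x2 + u2))
        # dual updates penalize disagreement with the consensus value
        u1 += x1 - z
        u2 += x2 - z
    return z  # converges to the minimizer (a + b) / 2 for these smooth costs
```

For a=1 and b=3, the iterates settle at z = 2, the centralized optimum, even though neither operator ever sees the other's cost function.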
Award ID(s): 2124849
PAR ID: 10562827
Author(s) / Creator(s): ;
Publisher / Repository: Elsevier
Date Published:
Journal Name: Applied Energy
Volume: 359
Issue: C
ISSN: 0306-2619
Page Range / eLocation ID: 122588
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. The electric power distribution network (PDN) and the transportation network (TN) are generally operated and coordinated by different entities, yet they are coupled through electric vehicle charging stations (EVCSs). This paper proposes to coordinate the operation of the two systems via a fully decentralized framework in which the PDN and TN operators solve their own operation problems independently, with only limited information exchange. Nevertheless, the operation problems of both systems are generally mixed-integer programs (MIPs), for which mature algorithms such as the alternating direction method of multipliers (ADMM) may not guarantee convergence. This paper applies a novel distributed optimization algorithm called the SD-GS-AL method, a combination of the simplicial decomposition, Gauss–Seidel, and augmented Lagrangian methods that can guarantee convergence and optimality for MIPs. However, the original SD-GS-AL may be computationally inefficient for a complex engineering problem like the PDN-TN coordinated optimization investigated in this paper. To improve computational efficiency, an enhanced SD-GS-AL method is proposed that redesigns the inner loop of the algorithm so that the number of inner-loop iterations is determined automatically and intelligently. Simulations on the test cases show the efficiency and efficacy of the proposed framework and algorithm.
  2. We propose a decentralized, sequential, and adaptive hypothesis test in sensor networks that extends Chernoff's test to a decentralized setting. We show that the proposed test achieves the same asymptotic optimality as the original one, minimizing the expected cost required to reach a decision plus the expected cost of making a wrong decision, as the observation cost per unit time tends to zero. We also show that the proposed test is parsimonious in terms of communications: in the regime of vanishing observation cost per unit time, the expected number of channel uses required by each sensor to complete the test converges to four.
    more » « less
  3. In this paper, we study communication-efficient decentralized training of large-scale machine learning models over a network. We propose and analyze SQuARM-SGD, a decentralized training algorithm employing momentum and compressed communication between nodes, regulated by a locally computable triggering rule. In SQuARM-SGD, each node performs a fixed number of local SGD (stochastic gradient descent) steps using Nesterov's momentum and then sends sparsified and quantized updates to its neighbors only when there is a significant change in its model parameters since the last communication. We provide convergence guarantees of our algorithm for strongly convex and non-convex smooth objectives. We believe that ours is the first theoretical analysis of compressed decentralized SGD with momentum updates. We show that SQuARM-SGD converges at rate O(1/nT) for strongly convex objectives, while for non-convex objectives it converges at rate O(1/√nT), matching the convergence rate of vanilla distributed SGD in both settings. We corroborate our theoretical understanding with experiments and compare the performance of our algorithm with the state of the art, showing that, without sacrificing much accuracy, SQuARM-SGD converges at a similar rate while saving significantly in total communicated bits.
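The communication rule described in the SQuARM-SGD abstract — send a sparsified update only when parameters have drifted enough since the last transmission — can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function names, the Euclidean-norm trigger, and the top-k sparsifier are assumptions standing in for the paper's specific compression and triggering choices.

```python
# Hypothetical sketch of an event-triggered, compressed communication rule
# in the spirit of SQuARM-SGD (NOT the authors' implementation).

def topk_sparsify(vec, k):
    """Keep the k largest-magnitude entries of vec, zero the rest."""
    idx = set(sorted(range(len(vec)), key=lambda i: abs(vec[i]))[-k:])
    return [vec[i] if i in idx else 0.0 for i in range(len(vec))]

def maybe_communicate(x, x_last_sent, threshold, k):
    """Locally computable trigger: transmit top-k of the parameter drift,
    or stay silent when the drift since the last send is small."""
    drift = [a - b for a, b in zip(x, x_last_sent)]
    norm = sum(d * d for d in drift) ** 0.5
    if norm > threshold:
        return topk_sparsify(drift, k)   # only k coordinates leave the node
    return None                           # no communication this round
```

A quantization step would normally follow the sparsification before transmission; it is omitted here to keep the trigger logic visible.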
  4. Abstract: An adaptive modified weak Galerkin method (AmWG) for an elliptic problem is studied in this article, together with its convergence and optimality. The modified weak Galerkin bilinear form is simplified without the need for the skeletal variable, and the approximation space is chosen as the discontinuous polynomial space, as in the discontinuous Galerkin method. Based on a reliable residual-based a posteriori error estimator, an adaptive algorithm is proposed, and its convergence and quasi-optimality are proved for the lowest-order case. The primary tool is to bridge the connection between the modified weak Galerkin method and the Crouzeix–Raviart nonconforming finite element. Unlike the traditional convergence analysis for methods with a discontinuous polynomial approximation space, the convergence of AmWG is free of the penalty parameter. Numerical results are presented to support the theoretical results.
  5. For obtaining optimal first-order convergence guarantees for stochastic optimization, it is necessary to use a recurrent data sampling algorithm that samples every data point with sufficient frequency. Most commonly used data sampling algorithms (e.g., i.i.d., MCMC, random reshuffling) are indeed recurrent under mild assumptions. In this work, we show that for a particular class of stochastic optimization algorithms, we do not need any property (e.g., independence, exponential mixing, or reshuffling) beyond recurrence in data sampling to guarantee an optimal rate of first-order convergence. Namely, using regularized versions of Minimization by Incremental Surrogate Optimization (MISO), we show that for non-convex and possibly non-smooth objective functions with constraints, the expected optimality gap converges at an optimal rate $O(n^{-1/2})$ under general recurrent sampling schemes. Furthermore, the implied constant depends explicitly on the 'speed of recurrence', measured by the expected amount of time to visit a data point, either averaged ('target time') or supremized ('hitting time') over the starting locations. We discuss applications of our general framework to decentralized optimization and distributed non-negative matrix factorization.
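The notion of recurrence in the last abstract — every data point is revisited with bounded expected gaps — is easy to illustrate with random reshuffling, one of the schemes the abstract names. The sketch below is a hypothetical illustration, not the paper's code: for a reshuffled stream over n points, the gap between consecutive visits to any single point is at most 2n - 1 (worst position in one epoch followed by worst position in the next), which is the kind of 'hitting time' bound the abstract refers to.

```python
import random

# Hypothetical illustration (not the paper's code): random reshuffling is a
# recurrent sampling scheme. Every point is visited once per epoch, so the
# gap between consecutive visits to any point is at most 2n - 1 steps.

def reshuffled_stream(n, epochs, seed=0):
    """Yield data indices epoch by epoch, reshuffling each pass."""
    rng = random.Random(seed)
    for _ in range(epochs):
        order = list(range(n))
        rng.shuffle(order)
        yield from order

def worst_hitting_time(n, epochs, seed=0):
    """Largest gap between consecutive visits to any single data point."""
    last_seen = {i: -1 for i in range(n)}   # treat t = -1 as "before start"
    worst = 0
    for t, i in enumerate(reshuffled_stream(n, epochs, seed)):
        worst = max(worst, t - last_seen[i])
        last_seen[i] = t
    return worst
```

An i.i.d. sampler, by contrast, has unbounded worst-case gaps but finite expected hitting times, which is why only the expected revisit time (not independence itself) enters the bound.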