Title: Decentralized computation of effective resistances and acceleration of consensus algorithms
The effective resistance between a pair of nodes in a weighted undirected graph is defined as the potential difference induced between them when a unit current is injected at the first node and extracted at the second, treating edge weights as the conductances of the edges. Effective resistance is a key quantity of interest in many applications and fields, including the solution of linear systems, Markov chains, and continuous-time averaging networks. We develop an efficient, linearly convergent, distributed algorithm for computing effective resistances and demonstrate its performance through numerical studies. We also apply our algorithm to the consensus problem, where the aim is to compute the average of node values in a distributed manner. We show that the distributed algorithm we developed for effective resistances can considerably accelerate the convergence of the classical consensus iterations, by a factor that depends on the network structure.
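For reference, the effective resistance defined above can be computed centrally from the Moore-Penrose pseudoinverse of the graph Laplacian: R(u, v) = (e_u − e_v)ᵀ L⁺ (e_u − e_v). The paper's contribution is a decentralized algorithm; the NumPy sketch below only illustrates the quantity itself, on an assumed example graph.

```python
import numpy as np

def effective_resistance(W, u, v):
    """Effective resistance between nodes u and v of a weighted undirected
    graph with symmetric conductance matrix W, via the Moore-Penrose
    pseudoinverse of the graph Laplacian."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W     # graph Laplacian L = D - W
    L_pinv = np.linalg.pinv(L)
    e = np.zeros(n)
    e[u], e[v] = 1.0, -1.0             # unit current injected at u, extracted at v
    return e @ L_pinv @ e              # induced potential difference

# Illustrative weighted 4-cycle (edge weights = conductances).
W = np.array([[0, 1, 0, 2],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [2, 0, 1, 0]], dtype=float)
print(effective_resistance(W, 0, 2))   # series/parallel check: 6/7 ≈ 0.857
```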
Award ID(s):
1723085 1635106 1400217
NSF-PAR ID:
10055946
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Global Conference on Signal and Information Processing (GlobalSIP)
Page Range / eLocation ID:
538 to 542
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time, where the Õ(·) notation hides polylog(n) factors. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm, due to Colbourn et al., which runs in matrix multiplication time, O(n^ω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^ω, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. '96, Kelner-Madry '09, Madry et al. '15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew the determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{−2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)ε^{−2}, m + nε^{−4} + |S|ε^{−2}}) time. We combine this approximation procedure with an error-correction procedure for handling edges where our estimate is not sufficiently accurate.
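The key fact this abstract relies on, that effective resistance is preserved under vertex elimination via the Schur complement, can be checked numerically. The sketch below is an illustration on an assumed random graph, not the paper's algorithm.

```python
import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def eff_res(L, u, v):
    e = np.zeros(L.shape[0])
    e[u], e[v] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e

# Assumed random weighted graph on 5 vertices.
rng = np.random.default_rng(0)
W = np.triu(rng.random((5, 5)), 1)
W = W + W.T
L = laplacian(W)

# Eliminate C = {3, 4} via the Schur complement onto F = {0, 1, 2}.
F, C = [0, 1, 2], [3, 4]
L_schur = (L[np.ix_(F, F)]
           - L[np.ix_(F, C)] @ np.linalg.inv(L[np.ix_(C, C)]) @ L[np.ix_(C, F)])

# Effective resistance between vertices 0 and 2 is unchanged by elimination.
print(eff_res(L, 0, 2), eff_res(L_schur, 0, 2))   # the two values agree
```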
  2. Initiated chemical vapor deposition (iCVD) was used to coat two porous substrates (i.e., hydrophilic cellulose acetate (CA) and hydrophobic polytetrafluoroethylene (PTFE)) with a crosslinked fluoropolymer to improve membrane wetting resistance. The coated CA membrane was superhydrophobic and symmetric. The coated PTFE membrane was hydrophobic and asymmetric, with smaller pore size and lower porosity on the top surface than on the bottom surface. Membrane performance was tested in membrane distillation (MD) experiments with (1) a high-salinity feed solution and (2) a surfactant-containing feed solution. In both cases, the coated membranes had higher wetting resistance than the uncoated membranes. Notably, wetting resistance was better predicted by liquid entry pressure (LEP) distributions than by minimum LEP values. When LEP distributions were skewed towards high LEP values (i.e., when small pores with high LEP were greater in number), significant (measurable) salt passage did not occur. For the high-salinity feed solution, the coated PTFE membrane had greater wetting resistance than the coated CA membrane; thus, reduced surface pore size/porosity (which may reduce intrapore scaling) was more effective than increased surface hydrophobicity (which may reduce surface nucleation) in preventing scaling-induced wetting. Reduced pore size/porosity was equally as effective as increased hydrophobicity in resisting surfactant-induced wetting. However, reduced porosity can negatively impact water flux; this represents a permeability/wetting resistance tradeoff in membrane distillation, especially for high-salinity applications. Membrane and/or membrane coating properties must be optimized to overcome this tradeoff and make MD viable for the treatment of challenging streams; increasing hydrophobicity alone may not be sufficient to impart high wetting resistance to porous membranes. These results are important for future membrane design, especially as manufacturers seek to replace perfluorinated materials with environmentally friendly alternatives.
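For context, LEP in membrane distillation is commonly modeled with the Young-Laplace equation (a standard form from the MD literature, not taken from this paper), which makes explicit why smaller maximum pore radius and larger contact angle both raise LEP:

```latex
% Young-Laplace model of liquid entry pressure (LEP):
%   B       - pore geometry factor (B = 1 for ideal cylindrical pores)
%   gamma_L - liquid surface tension
%   theta   - liquid/membrane contact angle (theta > 90 degrees when hydrophobic)
%   r_max   - maximum pore radius
\mathrm{LEP} = \frac{-2 B \gamma_L \cos\theta}{r_{\max}}
```

Since cos θ < 0 for hydrophobic surfaces, LEP is positive, and it grows as r_max shrinks, consistent with the small-pore, high-LEP tail of the distributions discussed above.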
  3. This work studies our recently developed decentralized algorithm, the decentralized alternating projected gradient descent algorithm Dec-AltProjGDmin, for solving the following low-rank (LR) matrix recovery problem: recover an LR matrix from independent column-wise linear projections (LR column-wise compressive sensing). In recent work, we presented constructive convergence guarantees for Dec-AltProjGDmin under simple assumptions. By "constructive", we mean that an explicit bound on the convergence time needed to achieve any error level ε is provided. However, our guarantee was stated for the equal-neighbor consensus algorithm (at each iteration, each node computes the average of the data of all its neighbors), whereas most existing results do not assume the use of a specific consensus algorithm but instead state guarantees in terms of the eigenvalues of the consensus weight matrix. In order to compare with these results, we first modify our result to be in this form. Our second and main contribution is a theoretical and experimental comparison of our new result with the best existing one from the decentralized GD literature that also provides a convergence time bound for values of ε that are large enough. The existing guarantee is for a different problem setting and holds under different assumptions than ours, so the comparison is not entirely clear-cut. However, we are not aware of any other provably correct algorithms for decentralized LR matrix recovery in any other setting either.
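As a point of reference, here is a minimal sketch of the equal-neighbor consensus step described above (the graph, the node values, and the self-loop convention are illustrative assumptions; this is not Dec-AltProjGDmin itself). Writing the iteration as x ← Wx also makes clear why guarantees are naturally stated in terms of the eigenvalues of the weight matrix W.

```python
import numpy as np

def equal_neighbor_weights(adj):
    """Row-stochastic weight matrix W: each node averages its own value
    with its neighbors' (a self-loop is included here so the iteration
    cannot oscillate on bipartite graphs)."""
    A = adj + np.eye(adj.shape[0])
    return A / A.sum(axis=1, keepdims=True)

# Illustrative 4-node path graph.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
W = equal_neighbor_weights(adj)

x = np.array([4.0, 0.0, 2.0, 6.0])   # initial node values
for _ in range(100):
    x = W @ x                        # one consensus iteration
print(x)  # all entries approach a common value; the convergence rate is
          # governed by the second-largest eigenvalue magnitude of W
```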
  4. One of the major challenges in parallelization is the difficulty of improving application scalability with conventional techniques. HPX provides efficient, scalable parallelism by significantly reducing node starvation and effective latencies while controlling the overheads. In this paper, we present a new highly scalable parallel distributed N-body application using a future-based algorithm, implemented with HPX. The main difference between this algorithm and prior art is that a future-based request buffer is used between different nodes and along each spatial direction to send/receive data to/from the remote nodes, which helps remove synchronization barriers. HPX provides an asynchronous programming model, which improves the parallel performance. The results of using HPX to parallelize octree construction on one node and the force computation on the distributed nodes show a scalability improvement of about 45% on average compared to an equivalent OpenMP implementation and 28% compared to a hybrid (MPI+OpenMP) implementation [1], respectively, for one billion particles running on up to 128 nodes with 20 cores each.
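HPX is a C++ runtime, but the future-based pattern itself can be illustrated language-agnostically. The sketch below is an analogy using Python's concurrent.futures, not the paper's implementation: requests for remote boundary data are posted up front, local work proceeds, and each result blocks only at its point of use rather than at a global barrier. All names and the neighbor layout are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_halo(neighbor, direction):
    """Stand-in for receiving boundary particle data from a remote node."""
    time.sleep(0.1)                       # simulated network latency
    return f"halo from node {neighbor} ({direction})"

def local_force_computation():
    """Stand-in for force computation on purely local particles."""
    time.sleep(0.2)

with ThreadPoolExecutor() as pool:
    # Post one request per neighbor and spatial direction up front
    # (the "future-based request buffer").
    requests = {d: pool.submit(fetch_halo, n, d)
                for d, n in [("x+", 1), ("x-", 2), ("y+", 3)]}

    local_force_computation()             # overlap communication with local work

    for direction, fut in requests.items():
        halo = fut.result()               # block only when this datum is needed
        print("applying", halo)           # stand-in for the boundary force update
```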
  5. Despite the recent success of Graph Neural Networks (GNNs), training GNNs on large graphs remains challenging. The limited resource capacities of existing servers, the dependency between nodes in a graph, and the privacy concerns due to centralized storage and model learning have spurred the need for an effective distributed algorithm for GNN training. However, existing distributed GNN training methods impose either excessive communication costs or large memory overheads that hinder their scalability. To overcome these issues, we propose a communication-efficient distributed GNN training technique named Learn Locally, Correct Globally (LLCG). To reduce the communication and memory overhead, each local machine in LLCG first trains a GNN on its local data, ignoring the dependency between nodes on different machines, and then sends the locally trained model to the server for periodic model averaging. However, ignoring node dependency can result in significant performance degradation. To address this degradation, we propose to apply global corrections on the server to refine the locally learned models. We rigorously analyze the convergence of distributed methods with periodic model averaging for training GNNs and show that naively applying periodic model averaging while ignoring the dependency between nodes suffers from an irreducible residual error. However, this residual error can be eliminated by utilizing the proposed global corrections, yielding a fast convergence rate. Extensive experiments on real-world datasets show that LLCG significantly improves efficiency without hurting performance. One-sentence summary: We propose LLCG, a communication-efficient distributed algorithm for training GNNs.
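A minimal sketch of the local-training-plus-periodic-averaging pattern described above, with a linear model standing in for a GNN and random data standing in for local subgraphs (all sizes and the learning rate are illustrative assumptions; the server-side global correction is only indicated by a comment):

```python
import numpy as np

rng = np.random.default_rng(0)
n_machines, dim, local_steps, rounds, lr = 4, 8, 5, 20, 0.1

# Each machine holds its own data (stand-in for a local subgraph partition).
data = [rng.normal(size=(32, dim)) for _ in range(n_machines)]
targets = [X @ np.ones(dim) for X in data]
models = [np.zeros(dim) for _ in range(n_machines)]

for _ in range(rounds):
    # Local phase: each machine trains on its own data only,
    # ignoring cross-machine node dependencies.
    for m in range(n_machines):
        for _ in range(local_steps):
            grad = data[m].T @ (data[m] @ models[m] - targets[m]) / len(data[m])
            models[m] -= lr * grad
    # Server phase: periodic model averaging.
    avg = np.mean(models, axis=0)
    # (LLCG would additionally refine `avg` here with a global correction
    #  step on the server; omitted in this sketch.)
    models = [avg.copy() for _ in range(n_machines)]

print(models[0][:4])   # averaged model parameters after training
```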