Title: A Scalable Distributed Dynamical Systems Approach to Compute the Strongly Connected Components and Diameter of Networks
Finding the strongly connected components (SCCs) and the diameter of a directed network plays a key role in a variety of discrete optimization problems and, subsequently, in machine learning and control theory problems. On the one hand, SCCs are used in solving the 2-satisfiability problem, which has applications in clustering, scheduling, and visualization. On the other hand, the diameter has applications in network learning and discovery problems, enabling efficient internet routing and searches, as well as identifying faults in the power grid. In this paper, we leverage consensus-based principles to find the SCCs in a scalable and distributed fashion with a computational complexity of O(D · d_max^{in-degree}), where D is the (finite) diameter of the network and d_max^{in-degree} is the maximum in-degree of the network. Additionally, we prove that our algorithm terminates in D + 1 iterations, which allows us to retrieve the diameter of the network. We illustrate the performance of our algorithm on several random networks, including Erdős-Rényi, Barabási-Albert, and Watts-Strogatz networks.
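The consensus-style procedure described above can be illustrated with a small sketch: every node repeatedly merges the reachability sets of its out- and in-neighbors, the intersection of the two sets is its SCC, and the number of synchronous rounds needed to reach a fixpoint reveals the diameter. This is a plain-Python illustration of the general idea under my own naming and data-structure choices, not the authors' implementation.

```python
# Hedged sketch: synchronous set propagation (a "max-consensus" over the
# subset lattice) that recovers SCC membership and, when the graph is
# strongly connected, its diameter.  Not the paper's exact update rule.

def scc_and_diameter(nodes, edges):
    out_nbrs = {v: [] for v in nodes}
    in_nbrs = {v: [] for v in nodes}
    for u, v in edges:                      # directed edge u -> v
        out_nbrs[u].append(v)
        in_nbrs[v].append(u)

    desc = {v: {v} for v in nodes}          # nodes reachable from v so far
    anc = {v: {v} for v in nodes}           # nodes that can reach v so far

    rounds = 0
    changed = True
    while changed:                          # one extra round detects the fixpoint
        rounds += 1
        changed = False
        new_desc = {v: desc[v].union(*(desc[u] for u in out_nbrs[v])) for v in nodes}
        new_anc = {v: anc[v].union(*(anc[u] for u in in_nbrs[v])) for v in nodes}
        if new_desc != desc or new_anc != anc:
            changed = True
        desc, anc = new_desc, new_anc

    scc = {v: desc[v] & anc[v] for v in nodes}
    return scc, rounds - 1                  # rounds - 1 equals the diameter if strongly connected


# Example: a directed 4-cycle (strongly connected, diameter 3).
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
sccs, diameter = scc_and_diameter(nodes, edges)
print(sccs[0], diameter)                    # {0, 1, 2, 3} and 3, after D + 1 = 4 rounds
```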
Award ID(s):
1932620 1936578 1936624
PAR ID:
10380954
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Automatic Control
ISSN:
0018-9286
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The recent deep learning revolution has created enormous opportunities for accelerating compute capabilities in the context of physics-based simulations. In this article, we propose EikoNet, a deep learning approach to solving the Eikonal equation, which characterizes the first-arrival-time field in heterogeneous 3-D velocity structures. Our grid-free approach allows for rapid determination of the travel time between any two points within a continuous 3-D domain. These travel-time solutions are allowed to violate the differential equation, which casts the problem as one of optimization, with the goal of finding network parameters that minimize the degree to which the equation is violated. In doing so, the method exploits the differentiability of neural networks to calculate the spatial gradients analytically, meaning that the network can be trained on its own without ever needing solutions from a finite-difference algorithm. EikoNet is rigorously tested on several velocity models and sampling methods to demonstrate robustness and versatility. Training and inference are highly parallelized, making the approach well suited for GPUs. EikoNet has low memory overhead and further avoids the need for travel-time lookup tables. The developed approach has important applications to earthquake hypocenter inversion, ray multipathing, and tomographic modeling, as well as to other fields beyond seismology where ray tracing is essential.
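    A minimal sketch of the physics-informed idea, assuming a one-hidden-layer tanh network in place of EikoNet: the spatial gradient of the predicted travel time is written out analytically, and the loss penalizes violations of the Eikonal equation |∇T(x)| = 1/v(x) plus the source condition. The architecture, velocity model, loss weights, and optimizer below are illustrative assumptions, and parameter gradients are taken numerically for brevity, whereas the paper relies on a deep learning framework's automatic differentiation.

```python
# Hedged sketch of a physics-informed loss for the Eikonal equation
# |grad T(x)| = 1 / v(x).  A one-hidden-layer network stands in for EikoNet;
# the spatial gradient of T is computed analytically, mirroring the
# "no finite-difference solver needed" point of the abstract.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
H, dim = 16, 3                               # hidden width (arbitrary), 3-D domain

def unpack(theta):
    i = 0
    W1 = theta[i:i + H * dim].reshape(H, dim); i += H * dim
    b1 = theta[i:i + H]; i += H
    w2 = theta[i:i + H]
    return W1, b1, w2

def T_and_grad(theta, X):
    """Travel time T(x) and its analytic spatial gradient for a tanh MLP."""
    W1, b1, w2 = unpack(theta)
    Z = np.tanh(X @ W1.T + b1)               # (N, H)
    T = Z @ w2                               # (N,)
    gradT = ((1.0 - Z**2) * w2) @ W1         # dT/dx, shape (N, dim)
    return T, gradT

def velocity(X):
    return 2.0 + 0.5 * X[:, 0]               # toy heterogeneous velocity model

x_src = np.zeros((1, dim))                   # point source, T(x_src) = 0
X = rng.uniform(-1.0, 1.0, size=(256, dim))  # collocation points

def loss(theta):
    T, gradT = T_and_grad(theta, X)
    residual = np.linalg.norm(gradT, axis=1) * velocity(X) - 1.0
    T_src, _ = T_and_grad(theta, x_src)
    return np.mean(residual**2) + 10.0 * T_src[0]**2   # PDE misfit + source condition

theta0 = 0.1 * rng.standard_normal(H * dim + H + H)
result = minimize(loss, theta0, method="L-BFGS-B",     # numerical parameter grads for brevity
                  options={"maxiter": 300})
print("final Eikonal misfit:", result.fun)
```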
  2. Bonomi, Silvia; Galletta, Letterio; Rivière, Etienne; Schiavoni, Valerio (Ed.)
    It has been shown that one can design distributed algorithms that are (nearly) singularly optimal, meaning they simultaneously achieve optimal time and message complexity (within polylogarithmic factors), for several fundamental global problems such as broadcast, leader election, and spanning tree construction, under the KT₀ assumption. With this assumption, nodes have initial knowledge only of themselves, not their neighbors. In this case the time and message lower bounds are Ω(D) and Ω(m), respectively, where D is the diameter of the network and m is the number of edges, and there exist (even) deterministic algorithms that simultaneously match these bounds. On the other hand, under the KT₁ assumption, whereby each node has initial knowledge of itself and the identifiers of its neighbors, the situation is not clear. For the KT₁ CONGEST model (where messages are of small size), King, Kutten, and Thorup (KKT) showed that one can solve several fundamental global problems (with the notable exception of BFS tree construction) such as broadcast, leader election, and spanning tree construction with Õ(n) message complexity (n is the network size), which can be significantly smaller than m. Randomization is crucial in obtaining this result. While the message complexity of the KKT result is near-optimal, its time complexity is Õ(n) rounds, which is far from the standard lower bound of Ω(D). An important open question is whether one can achieve singular optimality for the above problems in the KT₁ CONGEST model, i.e., whether there exists an algorithm running in Õ(D) rounds and Õ(n) messages. Another important and related question is whether the fundamental BFS tree construction can be solved with Õ(n) messages (regardless of the number of rounds as long as it is polynomial in n) in KT₁. In this paper, we show that in the KT₁ LOCAL model (where message sizes are not restricted), singular optimality is achievable. Our main result is that all global problems, including BFS tree construction, can be solved in Õ(D) rounds and Õ(n) messages, where both bounds are optimal up to polylogarithmic factors. Moreover, we show that this can be achieved deterministically. 
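    For intuition about the round and message accounting behind these bounds, the toy simulation below runs a textbook synchronous flooding broadcast, which completes in roughly D rounds but sends Θ(m) messages; it is a KT₀-style baseline for illustration only, not any of the KT₁ algorithms discussed above, and the graph and function names are invented.

```python
# Hedged illustration of round/message bookkeeping: synchronous flooding
# broadcast takes about D rounds and sends a message across every edge in
# both directions, i.e. Theta(m) messages.

def flood_broadcast(adj, source):
    """adj: undirected adjacency dict {node: set(neighbours)}."""
    informed = {source}
    frontier = {source}
    rounds, messages = 0, 0
    while frontier:
        rounds += 1
        next_frontier = set()
        for u in frontier:
            for v in adj[u]:
                messages += 1                 # u forwards the broadcast to v
                if v not in informed:
                    informed.add(v)
                    next_frontier.add(v)
        frontier = next_frontier
    return rounds, messages


# Example: a 4x4 grid graph (n = 16, m = 24, diameter 6).
n_side = 4
adj = {(i, j): set() for i in range(n_side) for j in range(n_side)}
for i in range(n_side):
    for j in range(n_side):
        for di, dj in ((1, 0), (0, 1)):
            if i + di < n_side and j + dj < n_side:
                adj[(i, j)].add((i + di, j + dj))
                adj[(i + di, j + dj)].add((i, j))

print(flood_broadcast(adj, (0, 0)))           # roughly D rounds, Theta(m) messages
```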
  3. This paper focuses on showing time-message trade-offs in distributed algorithms for fundamental problems such as leader election, broadcast, spanning tree (ST), minimum spanning tree (MST), minimum cut, and many graph verification problems. We consider the synchronous CONGEST distributed computing model and assume that each node has initial knowledge of itself and the identifiers of its neighbors - the so-called KT₁ model - a well-studied model that also naturally arises in many applications. Recently, it has been established that one can obtain (almost) singularly optimal algorithms, i.e., algorithms that have simultaneously optimal time and message complexity (up to polylogarithmic factors), for many fundamental problems in the standard KT₀ model (where nodes have only local knowledge of themselves and not their neighbors). The situation is less clear in the KT₁ model. In this paper, we present several new distributed algorithms in the KT₁ model that trade off between time and message complexity. Our distributed algorithms are based on a uniform and general approach which involves constructing a sparsified spanning subgraph of the original graph - called a danner - that trades off the number of edges with the diameter of the sparsifier. In particular, a key ingredient of our approach is a distributed randomized algorithm that, given a graph G and any δ ∈ [0,1], with high probability constructs a danner that has diameter Õ(D + n^{1-δ}) and Õ(min{m, n^{1+δ}}) edges in Õ(n^{1-δ}) rounds while using Õ(min{m, n^{1+δ}}) messages, where n, m, and D are the number of nodes, edges, and the diameter of G, respectively. Using our danner construction, we present a family of distributed randomized algorithms for various fundamental problems that exhibit a trade-off between message and time complexity and that improve over previous results. Specifically, we show the following results (all hold with high probability) in the KT₁ model, which subsume and improve over prior bounds in the KT₁ model (King et al., PODC 2014 and Awerbuch et al., JACM 1990) and the KT₀ model (Kutten et al., JACM 2015, Pandurangan et al., STOC 2017 and Elkin, PODC 2017): 1) Leader Election, Broadcast, and ST. These problems can be solved in Õ(D + n^{1-δ}) rounds using Õ(min{m, n^{1+δ}}) messages for any δ ∈ [0,1]. 2) MST and Connectivity. These problems can be solved in Õ(D + n^{1-δ}) rounds using Õ(min{m, n^{1+δ}}) messages for any δ ∈ [0, 0.5]. In particular, for δ = 0.5 we obtain a distributed MST algorithm that runs in optimal Õ(D + √n) rounds and uses Õ(min{m, n^{3/2}}) messages. We note that this improves over the singularly optimal algorithm in the KT₀ model that uses Õ(D + √n) rounds and Õ(m) messages. 3) Minimum Cut. O(log n)-approximate minimum cut can be solved in Õ(D + n^{1-δ}) rounds using Õ(min{m, n^{1+δ}}) messages for any δ ∈ [0, 0.5]. 4) Graph Verification Problems such as Bipartiteness, Spanning Subgraph etc. These can be solved in Õ(D + n^{1-δ}) rounds using Õ(min{m, n^{1+δ}}) messages for any δ ∈ [0, 0.5].
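    Ignoring the polylogarithmic factors hidden in the Õ notation, the danner trade-off can be tabulated for a made-up graph to see how the edge bound grows while the diameter and round bounds shrink as δ increases; the numbers below are purely illustrative.

```python
# Hedged numerical illustration of the danner trade-off (polylog factors dropped):
# larger delta -> denser sparsifier but smaller diameter/round overhead.
n, m, D = 10_000, 1_000_000, 20               # example graph parameters (made up)

for delta in (0.0, 0.25, 0.5, 0.75, 1.0):
    diameter_bound = D + n ** (1 - delta)     # ~O(D + n^{1-delta})
    edge_bound = min(m, n ** (1 + delta))     # ~O(min{m, n^{1+delta}})
    rounds_bound = n ** (1 - delta)           # ~O(n^{1-delta}) construction rounds
    print(f"delta={delta:4.2f}  diameter~{diameter_bound:10.0f}  "
          f"edges~{edge_bound:12.0f}  construction rounds~{rounds_bound:10.0f}")
```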
  4. Kumar, Amit; Ron-Zewi, Noga (Ed.)
    High dimensional expanders (HDXs) are a hypergraph generalization of expander graphs. They are extensively studied in the math and TCS communities due to their many applications. Like expander graphs, HDXs are especially interesting for applications when they have bounded degree, namely, when the number of edges adjacent to every vertex is bounded. However, only a handful of constructions are known to have this property, all of which rely on algebraic techniques. In particular, no random or combinatorial construction of bounded degree high dimensional expanders is known. As a result, our understanding of these objects is limited. The degree of an i-face in an HDX is the number of (i+1)-faces that contain it. In this work we construct complexes whose higher dimensional faces have bounded degree. This is done by giving an elementary and deterministic algorithm that takes as input a regular k-dimensional HDX X and outputs another regular k-dimensional HDX X̂ with twice as many vertices. While the degree of vertices in X̂ grows, the degree of the (k-1)-faces in X̂ stays the same. As a result, we obtain a new "algebra-free" construction of HDXs whose (k-1)-face degree is bounded. Our construction algorithm is based on a simple and natural generalization of the expander graph construction by Bilu and Linial [Yehonatan Bilu and Nathan Linial, 2006], which builds expander graphs using lifts coming from edge signings. Our construction is based on local lifts of high dimensional expanders, where a local lift is a new complex whose top-level links are lifts of the links of the original complex. We demonstrate that a local lift of an HDX is also an HDX in many cases. In addition, combining local lifts with existing bounded degree constructions creates new families of bounded degree HDXs with significantly different links than before. For every large enough D, we use this technique to construct families of bounded degree HDXs with links that have diameter ≥ D.
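    The classical ingredient referenced here, a 2-lift of a graph defined by an edge signing in the style of Bilu and Linial, is easy to sketch; the paper's local lifts operate on the links of a high dimensional complex rather than on a plain graph, so the snippet below only shows the graph case and uses invented names.

```python
# Hedged sketch of a 2-lift from an edge signing: each vertex v becomes two
# copies (v, 0) and (v, 1); a (+1)-signed edge connects matching copies, a
# (-1)-signed edge connects opposite copies.  Vertex count doubles while the
# degree of every vertex is preserved.

def two_lift(edges, sign):
    """edges: list of undirected edges (u, v); sign: dict {(u, v): +1 or -1}."""
    lifted = []
    for u, v in edges:
        if sign[(u, v)] == +1:
            lifted.append(((u, 0), (v, 0)))
            lifted.append(((u, 1), (v, 1)))
        else:
            lifted.append(((u, 0), (v, 1)))
            lifted.append(((u, 1), (v, 0)))
    return lifted


# Example: a triangle with exactly one (-1)-signed edge lifts to a 6-cycle;
# the all-(+1) signing would instead give two disjoint triangles.
edges = [(0, 1), (1, 2), (2, 0)]
sign = {(0, 1): +1, (1, 2): +1, (2, 0): -1}
print(two_lift(edges, sign))
```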
  5. While deep neural networks (DNNs) have achieved state-of-the-art results in many fields, they are typically over-parameterized. Parameter redundancy, in turn, leads to inefficiency. Sparse signal recovery (SSR) techniques, on the other hand, find compact solutions to over-complete linear problems. Therefore, a logical step is to draw the connection between SSR and DNNs. In this paper, we explore the application of iterative reweighting methods popular in SSR to learning efficient DNNs. By efficient, we mean sparse networks that require less computation and storage than the original, dense network. We propose a reweighting framework to learn sparse connections within a given architecture without biasing the optimization process, by utilizing the affine scaling transformation strategy. The resulting algorithm, referred to as Sparsity-promoting Stochastic Gradient Descent (SSGD), has simple gradient-based updates which can be easily implemented in existing deep learning libraries. We demonstrate the sparsification ability of SSGD on image classification tasks and show that it outperforms existing methods on the MNIST and CIFAR-10 datasets. 
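    The reweighting idea can be made concrete on a problem far smaller than a DNN. The sketch below applies a reweighted sparsity penalty with a proximal stochastic gradient step to a toy sparse linear regression; it is a generic sparse-signal-recovery-style illustration rather than the paper's SSGD update (which uses an affine scaling transformation on network weights), and all names and hyperparameters are invented.

```python
# Hedged sketch of iterative reweighting for sparsity, trained with a
# proximal variant of SGD on a toy sparse linear-regression problem.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 200, 50
w_true = np.zeros(n_features)
w_true[:5] = [1.5, -2.0, 1.0, -1.2, 0.8]        # only 5 active coefficients
X = rng.normal(size=(n_samples, n_features))
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

w = np.zeros(n_features)
lr, lam, eps = 1e-2, 0.05, 0.1
for epoch in range(100):
    weights = 1.0 / (np.abs(w) + eps)           # reweighting from the current iterate
    for i in rng.permutation(n_samples):
        grad = (X[i] @ w - y[i]) * X[i]         # stochastic gradient of the squared loss
        w = w - lr * grad
        thresh = lr * lam * weights             # per-coordinate soft-threshold level
        w = np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)

print("nonzero weights:", np.count_nonzero(w), "(true model has 5)")
```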