Studying the robustness and centrality of interdependent networks is essential for building reliable interdependent systems. Here, we consider a nonlinear load-capacity cascading failure model on interdependent networks in which the initial load distribution is not random, as is usually assumed, but determined by the influence of each node in the interdependent network. Node influence is measured by an automated entropy-weighted multi-attribute algorithm that accounts for both the different centrality measures of nodes and the interdependence of node pairs, averaging the result not only over the node itself but also over its nearest and next-nearest neighbors. We thoroughly investigate the resilience of interdependent networks under this more practical and accurate setting for various network parameters, as well as for different ways of coupling nodes across layers and different coupling strengths. The results can thereby help in better monitoring interdependent systems.
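As a rough illustration of the entropy-weighted multi-attribute idea, the sketch below combines three standard centrality measures into a single influence score and smooths it over each node's first and second neighborhoods. The attribute set, the uniform neighborhood averaging, and all parameter values are illustrative assumptions, not the paper's exact algorithm (which additionally incorporates the interdependence of coupled node pairs).

```python
# Minimal sketch of entropy-weighted multi-attribute node influence.
# Attribute choices and the neighbor-averaging scheme are assumptions.
import numpy as np
import networkx as nx

def entropy_weights(X):
    """Entropy-weight method: rows = nodes, columns = attributes."""
    P = X / X.sum(axis=0, keepdims=True)              # column-normalize
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(X.shape[0])  # entropy per attribute
    d = (1.0 - e) + 1e-12                             # degree of diversification
    return d / d.sum()                                # attribute weights

def influence_scores(G):
    nodes = list(G)
    cents = [nx.degree_centrality(G), nx.closeness_centrality(G),
             nx.betweenness_centrality(G)]            # assumed attribute set
    X = np.array([[c[v] for c in cents] for v in nodes]) + 1e-12
    base = dict(zip(nodes, X @ entropy_weights(X)))
    # Average over the node itself, its nearest neighbors, and
    # its next-nearest neighbors.
    scores = {}
    for v in nodes:
        ball = {v} | set(G[v]) | {u for nb in G[v] for u in G[nb]}
        scores[v] = sum(base[u] for u in ball) / len(ball)
    return scores

G = nx.erdos_renyi_graph(60, 0.08, seed=1)
s = influence_scores(G)
print(sorted(s, key=s.get, reverse=True)[:5])  # five most influential nodes
```

In a cascading-failure experiment, these scores would replace the random initial load assignment before applying the load-capacity dynamics.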
- Award ID(s):
- 1918656
- PAR ID:
- 10319137
- Date Published:
- Journal Name:
- Applied Network Science
- Volume:
- 6
- Issue:
- 1
- ISSN:
- 2364-8228
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
An influential node identification method considering multi-attribute decision fusion and dependency
Althouse, Benjamin Muir (Ed.) Disease epidemic outbreaks on human metapopulation networks are often driven by a small number of superspreader nodes, which are primarily responsible for spreading the disease throughout the network. Superspreader nodes typically are characterized either by their locations within the network, by their degree of connectivity and centrality, or by their habitat suitability for the disease, described by their reproduction number (R). Here we introduce a model that considers simultaneously the effects of network properties and R on superspreaders, as opposed to previous research, which considered each factor separately. This type of model is applicable to diseases for which habitat suitability varies by climate or land cover, and to directly transmitted diseases for which population density and mitigation practices influence R. We present analytical models that quantify the superspreader capacity of a population node by two measures: probability-dependent superspreader capacity, the expected number of neighboring nodes to which the node in consideration will randomly spread the disease per epidemic generation, and time-dependent superspreader capacity, the rate at which the node spreads the disease to each of its neighbors. We validate our analytical models with a Monte Carlo analysis of repeated stochastic Susceptible-Infected-Recovered (SIR) simulations on randomly generated human population networks, and we use a random forest statistical model to relate superspreader risk to connectivity, R, centrality, clustering, and diffusion. We demonstrate that either a degree of connectivity or an R above a certain threshold is sufficient for a node to have a moderate superspreader risk factor, but both are necessary for a node to have a high risk factor. The statistical model presented in this article can be used to predict the location of superspreader events in future epidemics, and to predict the effectiveness of mitigation strategies that seek to reduce the value of R, alter host movements, or both.
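The probability-dependent capacity lends itself to a simple Monte Carlo estimate. The sketch below is a minimal stand-in rather than the article's analytical model: it counts how many distinct neighbors a seed node directly infects over its infectious period, with a per-edge transmission probability beta and recovery probability gamma as hypothetical parameters standing in for the R-based habitat suitability.

```python
# Monte Carlo sketch of a node's superspreader capacity: the expected
# number of distinct neighbors it directly infects before recovering.
# beta and gamma are illustrative stand-ins for the paper's R-based terms.
import random
import networkx as nx

def capacity_mc(G, v, beta, gamma, trials=5000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        hit = set()                       # neighbors infected by v so far
        infectious = True
        while infectious:
            for u in G[v]:                # one contact attempt per neighbor
                if u not in hit and rng.random() < beta:
                    hit.add(u)
            infectious = rng.random() >= gamma   # v recovers w.p. gamma
        total += len(hit)
    return total / trials

G = nx.barabasi_albert_graph(200, 3, seed=1)
hub = max(G, key=G.degree)                # highest-degree node
print(capacity_mc(G, hub, beta=0.1, gamma=0.3))
```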
-
This paper considers the Byzantine consensus problem for nodes with binary inputs. The nodes are interconnected by a network represented as an undirected graph, and the system is assumed to be synchronous. Under the classical point-to-point communication model, it is well known that the following two conditions are both necessary and sufficient to achieve Byzantine consensus among n nodes in the presence of up to ƒ Byzantine faulty nodes: n ≥ 3ƒ + 1 and vertex connectivity at least 2ƒ + 1. In the classical point-to-point communication model, it is possible for a faulty node to equivocate, i.e., transmit conflicting information to different neighbors. Such equivocation is possible because messages sent by a node to one of its neighbors are not overheard by other neighbors. This paper considers the local broadcast model. In contrast to the point-to-point communication model, in the local broadcast model, messages sent by a node are received identically by all of its neighbors. Thus, under the local broadcast model, attempts by a node to send conflicting information can be detected by its neighbors. Under this model, we show that the following two conditions are both necessary and sufficient for Byzantine consensus: vertex connectivity at least ⌊3ƒ/2⌋ + 1 and minimum node degree at least 2ƒ. Observe that the local broadcast model results in a lower requirement for connectivity and the number of nodes n, as compared to the point-to-point communication model. We extend the above results to a hybrid model that allows some of the Byzantine faulty nodes to equivocate. The hybrid model bridges the gap between the point-to-point and local broadcast models, and helps to precisely characterize the trade-off between equivocation and network requirements.
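The two sets of conditions are easy to check mechanically on a given graph. The following sketch (not from the paper) tests both the point-to-point and local broadcast conditions using networkx's vertex-connectivity routine.

```python
# Check the necessary-and-sufficient consensus conditions on a graph,
# for both communication models discussed in the abstract.
import networkx as nx

def supports_consensus(G, f, model="local_broadcast"):
    kappa = nx.node_connectivity(G)       # vertex connectivity
    if model == "point_to_point":
        return G.number_of_nodes() >= 3 * f + 1 and kappa >= 2 * f + 1
    # Local broadcast model: connectivity >= floor(3f/2) + 1, min degree >= 2f.
    min_deg = min(d for _, d in G.degree())
    return kappa >= (3 * f) // 2 + 1 and min_deg >= 2 * f

G = nx.complete_graph(7)
print(supports_consensus(G, f=2))                          # True
print(supports_consensus(G, f=2, model="point_to_point"))  # True (n = 7 >= 7)
```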
-
Filter banks on graphs are shown to be useful for analyzing data defined over networks, as they decompose a graph signal into components with low variation and high variation. Building on a recent node-asynchronous implementation of graph filters, this study proposes an asynchronous implementation of filter banks on graphs. In the proposed algorithm, nodes follow a randomized collect-compute-broadcast scheme: while in the passive stage, a node collects the data sent by its incoming neighbors and stores only the most recent data; when it enters the active stage at a random time instance, it performs the necessary filtering computations locally and broadcasts a state vector to its outgoing neighbors. When the underlying filters of the filter bank are rational functions with the same denominator, the proposed filter bank implementation does not require additional communication between neighboring nodes; however, the computation done by a node increases linearly with the number of filters in the bank. It is also proven that the proposed asynchronous implementation converges to the desired output of the filter bank in the mean-squared sense under mild stability conditions. The convergence is verified with numerical experiments as well.
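The collect-compute-broadcast scheduling can be sketched in a few lines. Below, as a stand-in for the paper's rational filter banks, randomly activated nodes asynchronously apply the single-pole graph filter H(A) = (I - aA)^(-1) to an input signal u, each node using only the most recently stored neighbor states; the graph, the filter, and the activation model are all illustrative assumptions.

```python
# Schematic of randomized collect-compute-broadcast node updates.
# Each node asynchronously solves its row of (I - a A) x = u, a simple
# rational graph filter; convergence needs |a| * rho(A) < 1.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(30, 0.2, seed=1)
A = nx.to_numpy_array(G)
a = 0.9 / np.linalg.norm(A, 2)          # ensure stability of the recursion
u = rng.normal(size=len(G))             # input graph signal

x = np.zeros(len(G))                    # each node's local state
mailbox = {v: {n: 0.0 for n in G[v]} for v in G}  # most recent received data

for _ in range(20_000):
    v = int(rng.integers(len(G)))       # node v wakes up at a random time
    # Compute: local filtering step using the stored (possibly stale) states.
    x[v] = u[v] + a * sum(mailbox[v].values())
    # Broadcast: every neighbor overwrites its stored copy of x[v].
    for n in G[v]:
        mailbox[n][v] = x[v]

exact = np.linalg.solve(np.eye(len(G)) - a * A, u)
print(np.max(np.abs(x - exact)))        # residual shrinks toward zero
```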
-
Measuring the importance of a node in a network is a major goal in the analysis of social networks, biological systems, transportation networks, and so forth. Different centrality measures have been proposed to capture the notion of node importance. For example, the center of a graph is a node that minimizes the maximum distance to any other node (the latter distance is the radius of the graph). The median of a graph is a node that minimizes the sum of the distances to all other nodes. Informally, the betweenness centrality of a node w measures the fraction of shortest paths that have w as an intermediate node. Finally, the reach centrality of a node w is the smallest distance r such that any s-t shortest path passing through w has either s or t in the ball of radius r around w.
The fastest known algorithms to compute the center and the median of a graph, and to compute the betweenness or reach centrality even of a single node, take roughly cubic time in the number n of nodes in the input graph. It is open whether these problems admit truly subcubic algorithms, i.e., algorithms with running time Õ(n^{3-δ}) for some constant δ > 0. We relate the complexity of the mentioned centrality problems to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter. We show that Radius, Median, and Betweenness Centrality are equivalent under subcubic reductions to APSP, i.e., that a truly subcubic algorithm for any of these problems implies a truly subcubic algorithm for all of them. We then show that Reach Centrality is equivalent to Diameter under subcubic reductions. The same holds for the problem of approximating Betweenness Centrality within any finite factor. Thus, the latter two centrality problems could potentially be solved in truly subcubic time, even if APSP required essentially cubic time.
On the positive side, our reductions for Reach Centrality imply an improved Õ(Mn^ω)-time algorithm for this problem in the case of non-negative integer weights upper bounded by M, where ω is the exponent of fast matrix multiplication.
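For concreteness, the cubic-time baseline that these reductions are measured against is simply "compute all pairwise distances, then scan." The sketch below (an illustration of the definitions, not the paper's reductions) recovers the center/radius, the median, and the diameter from a Floyd-Warshall distance matrix in O(n^3) time.

```python
# Cubic-time baseline: derive center/radius, median, and diameter
# directly from the all-pairs shortest-path distance matrix.
import networkx as nx

G = nx.connected_watts_strogatz_graph(40, 4, 0.3, seed=1)
D = nx.floyd_warshall_numpy(G)        # all-pairs distances, Theta(n^3)

ecc = D.max(axis=1)                   # eccentricity of every node
center = int(ecc.argmin())            # node minimizing the max distance
radius = ecc.min()                    # that minimized max distance
median = int(D.sum(axis=1).argmin())  # node minimizing the total distance
diameter = ecc.max()                  # largest eccentricity
print(f"center={center} radius={radius} median={median} diameter={diameter}")
```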