Estrada, Ernesto (Ed.) Abstract: The friendship paradox is the observation that the degrees of the neighbours of a node in any network will, on average, be greater than the degree of the node itself. In common parlance, your friends have more friends than you do. In this article, we develop the mathematical theory of the friendship paradox, both in general and for specific model networks, focusing not only on average behaviour but also on variation about the average, and using generating function methods to calculate full distributions of quantities of interest. We compare the predictions of our theory with measurements on a large number of real-world network datasets and find remarkably good agreement. We also develop equivalent theory for the generalized friendship paradox, which compares characteristics of nodes other than degree to those of their neighbours.
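To make the basic observation concrete: averaging over neighbours weights each node in proportion to its degree, so the neighbour average comes out higher than the plain average whenever degrees vary at all. The minimal Python sketch below compares the two averages on a small made-up adjacency list; it illustrates the effect only and does not reproduce the generating-function calculations of the paper.

```python
# Minimal sketch of the friendship paradox on a toy graph (values are made up).
from statistics import mean

# Hypothetical undirected network: node -> list of neighbours.
adjacency = {
    0: [1, 2, 3],
    1: [0, 2],
    2: [0, 1, 3, 4],
    3: [0, 2],
    4: [2],
}

degree = {v: len(nbrs) for v, nbrs in adjacency.items()}

# Average degree of a node.
mean_degree = mean(degree.values())

# Average degree of a neighbour: for every edge endpoint, record the degree of
# the node at the other end, then average. High-degree nodes appear in many
# neighbour lists, which is the source of the paradox.
neighbour_degrees = [degree[u] for v, nbrs in adjacency.items() for u in nbrs]
mean_neighbour_degree = mean(neighbour_degrees)

print(f"mean degree of a node      = {mean_degree:.3f}")
print(f"mean degree of a neighbour = {mean_neighbour_degree:.3f}")
```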
Belief propagation is a widely used message passing method for the solution of probabilistic models on networks such as epidemic models, spin models, and Bayesian graphical models, but it suffers from the serious shortcoming that it works poorly in the common case of networks that contain short loops. Here, we provide a solution to this long-standing problem, deriving a belief propagation method that allows for fast calculation of probability distributions in systems with short loops, potentially with high density, as well as giving expressions for the entropy and partition function, which are notoriously difficult quantities to compute. Using the Ising model as an example, we show that our approach gives excellent results on both real and synthetic networks, improving substantially on standard message passing methods. We also discuss potential applications of our method to a variety of other problems.
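For readers unfamiliar with the baseline method, the sketch below implements standard belief propagation for a small zero-loop (tree) Ising model, where plain BP is exact. The graph, coupling, and field values are made up, and the loop-corrected messages that are the paper's actual contribution are not implemented here.

```python
# Standard belief propagation for an Ising model on a small tree (a sketch only;
# the paper's extension to networks with short loops is NOT implemented here).
import math

beta, J, h = 0.5, 1.0, 0.2          # made-up inverse temperature, coupling, field

# Hypothetical tree: node -> neighbours.
adjacency = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
spins = (+1, -1)

# messages[(i, j)][s]: message from node i to neighbour j, evaluated at spin s of j.
messages = {(i, j): {+1: 0.5, -1: 0.5} for i in adjacency for j in adjacency[i]}

for _ in range(50):                  # fixed-point iteration; converges quickly on a tree
    new = {}
    for (i, j) in messages:
        msg = {}
        for sj in spins:
            total = 0.0
            for si in spins:
                term = math.exp(beta * (J * si * sj + h * si))
                for k in adjacency[i]:
                    if k != j:
                        term *= messages[(k, i)][si]
                total += term
            msg[sj] = total
        norm = msg[+1] + msg[-1]
        new[(i, j)] = {s: msg[s] / norm for s in spins}
    messages = new

# Single-spin marginals from the incoming messages and the local field.
for i in adjacency:
    b = {s: math.exp(beta * h * s) * math.prod(messages[(k, i)][s] for k in adjacency[i])
         for s in spins}
    norm = b[+1] + b[-1]
    print(f"node {i}: P(s_i = +1) = {b[+1] / norm:.3f}")
```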
Peixoto, Tiago P (Ed.) Abstract: Most empirical studies of complex networks do not return direct, error-free measurements of network structure. Instead, they typically rely on indirect measurements that are often error prone and unreliable. A fundamental problem in empirical network science is how to make the best possible estimates of network structure given such unreliable data. In this article, we describe a fully Bayesian method for reconstructing networks from observational data in any format, even when the data contain substantial measurement error and when the nature and magnitude of that error are unknown. The method is introduced through pedagogical case studies using real-world example networks, and is specifically tailored to allow straightforward, computationally efficient implementation with a minimum of technical input. Computer code implementing the method is publicly available.
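As a rough illustration of the underlying idea, and not the paper's method itself, the following sketch uses expectation-maximization to infer edge probabilities and error rates from repeated noisy measurements of node pairs. The data-generating model and all parameter values are invented for the example; the paper's fully Bayesian treatment of arbitrary data formats goes well beyond this special case.

```python
# Toy network reconstruction from repeated noisy pair measurements via EM.
# Everything below (network size, error rates, priors) is made up for illustration.
import random

random.seed(1)
n, N = 30, 5                                  # nodes, measurements per node pair
true_p, alpha_true, beta_true = 0.1, 0.9, 0.05  # true density, TP rate, FP rate

pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
truth = {pair: random.random() < true_p for pair in pairs}
# E[pair]: number of times the pair was reported as connected.
E = {pair: sum(random.random() < (alpha_true if truth[pair] else beta_true)
               for _ in range(N)) for pair in pairs}

# EM for the edge prior rho, true-positive rate alpha, false-positive rate beta.
rho, alpha, beta = 0.5, 0.7, 0.2              # rough initial guesses
for _ in range(200):
    Q = {}                                    # posterior probability each pair is an edge
    for pair in pairs:
        e = E[pair]
        yes = rho * alpha**e * (1 - alpha)**(N - e)
        no = (1 - rho) * beta**e * (1 - beta)**(N - e)
        Q[pair] = yes / (yes + no)
    rho = sum(Q.values()) / len(pairs)
    alpha = sum(Q[p] * E[p] for p in pairs) / (N * sum(Q.values()))
    beta = sum((1 - Q[p]) * E[p] for p in pairs) / (N * sum(1 - Q[p] for p in pairs))

print(f"estimated rho = {rho:.3f}, alpha = {alpha:.3f}, beta = {beta:.3f}")
accuracy = sum((Q[p] > 0.5) == truth[p] for p in pairs) / len(pairs)
print(f"fraction of pairs classified correctly: {accuracy:.3f}")
```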
Message passing is a fundamental technique for performing calculations on networks and graphs with applications in physics, computer science, statistics, and machine learning, including Bayesian inference, spin models, satisfiability, graph partitioning, network epidemiology, and the calculation of matrix eigenvalues. Despite its wide use, however, it has long been recognized that the method has a fundamental flaw: it works poorly on networks that contain short loops. Loops introduce correlations that can cause the method to give inaccurate answers or to fail completely in the worst cases. Unfortunately, most real-world networks contain many short loops, which limits the usefulness of the message-passing approach. In this paper we demonstrate how to rectify this shortcoming and create message-passing methods that work on any network. We give two example applications, one to the percolation properties of networks and the other to the calculation of the spectra of sparse matrices.
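To show the kind of calculation being generalized, here is a minimal sketch of the standard tree-based message-passing equations for bond percolation. On the small made-up graph below, which contains short loops, the answers are only approximate; that limitation is exactly what the paper addresses, and its corrected method is not reproduced here.

```python
# Standard (tree-based) message passing for bond percolation on a toy graph.
# The graph and occupation probability are made up for illustration.

p = 0.6                                        # bond occupation probability

# Hypothetical small network: node -> neighbours.
adjacency = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [3]}

# u[(i, j)]: probability that node j is NOT connected to the giant cluster
# through its edge to node i.
u = {(i, j): 0.5 for i in adjacency for j in adjacency[i]}

for _ in range(200):                           # iterate the message equations
    new = {}
    for (i, j) in u:
        prod = 1.0
        for k in adjacency[i]:
            if k != j:
                prod *= u[(k, i)]
        new[(i, j)] = 1 - p + p * prod         # edge absent, or present but leads nowhere
    u = new

# Probability each node belongs to the giant cluster, and the expected fraction S.
in_giant = {}
for j in adjacency:
    prod = 1.0
    for i in adjacency[j]:
        prod *= u[(i, j)]
    in_giant[j] = 1 - prod

S = sum(in_giant.values()) / len(adjacency)
print({j: round(v, 3) for j, v in in_giant.items()})
print(f"expected giant-cluster fraction S = {S:.3f}")
```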