Title: A dynamic process reference model for sparse networks with reciprocity
Many social and other networks exhibit stable size-scaling relationships: features such as mean degree or reciprocation rate change slowly, or are approximately constant, as the number of vertices increases. Statistical network models built on simple Bernoulli baseline (or reference) measures often behave unrealistically in this respect, which has motivated the development of sparse reference models that preserve features such as mean-degree scaling. In this paper, we generalize recent work on the micro-foundations of such reference models to the case of sparse directed graphs with non-vanishing reciprocity, providing a dynamic-process interpretation of the emergence of stable macroscopic behavior.
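As a back-of-the-envelope illustration of the scaling issue the abstract describes (the function names and the choice c = 5 are ours, not the paper's): under a dense Bernoulli reference with fixed tie probability p, expected mean degree grows linearly in n, while a sparse reference with tie probability c/n holds it approximately constant.

```python
def expected_mean_degree_dense(n, p):
    """Dense Bernoulli baseline: each of the (n - 1) possible ties per
    vertex occurs independently with fixed probability p, so the
    expected mean degree grows linearly with network size."""
    return (n - 1) * p

def expected_mean_degree_sparse(n, c):
    """Sparse reference measure: tie probability scales as c / n, so
    the expected mean degree stays approximately constant at c."""
    return (n - 1) * (c / n)

for n in (100, 1000, 10000):
    print(n,
          expected_mean_degree_dense(n, p=0.05),              # grows with n
          round(expected_mean_degree_sparse(n, c=5.0), 3))    # stays near 5
```

The paper's contribution concerns directed graphs with non-vanishing reciprocity, where the reference measure must also hold the reciprocation rate stable; the sketch above shows only the simpler mean-degree case.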
Award ID(s):
1826589 1361425 1939237
PAR ID:
10184450
Author(s) / Creator(s):
Date Published:
Journal Name:
The Journal of Mathematical Sociology
ISSN:
0022-250X
Page Range / eLocation ID:
1 to 27
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The raindrop size distribution (DSD) is vital for applications such as quantitative precipitation estimation, understanding microphysical processes, and validating and improving two-moment bulk microphysical schemes. We trace the history of DSD representation and its linkage to polarimetric radar observables, from functional forms (exponential, gamma, and generalized gamma models) to normalization (un-normalized, and single- or double-moment scaling normalized). The four-parameter generalized gamma model is a good candidate for the optimal representation of DSD variability. Radar-based disdrometer observations from Montreal, Canada were found to exhibit five archetypical shapes, consisting of drizzle, the larger precipitation drops, and the 'S'-shaped curvature that frequently occurs between the drizzle and the larger-sized precipitation. Similar 'S'-shaped DSDs were reproduced by combining disdrometric measurements of small-sized drops from an optical array probe with large-sized drops from a 2DVD. A unified theory based on double-moment scaling normalization is described. The theory assumes multiple power-law relations among moments; DSDs are scaling-normalized by two characteristic parameters, each expressed as a combination of any two moments. The normalized DSDs are remarkably stable, so the mean underlying shape is fitted to the generalized gamma model, from which the two 'optimized' shape parameters are obtained. The remaining moments of the distribution follow as products of power laws of the reference moments M3 and M6 together with the two shape parameters. These reference moments can come from dual-polarimetric measurements: M6 from attenuation-corrected reflectivity, and M3 from attenuation-corrected differential reflectivity and the specific differential propagation phase. Thus all the moments of the distribution can be calculated, and the microphysical evolution of the DSD can be inferred; this is one of the major findings of this article.
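For readers unfamiliar with DSD moments, the sketch below computes moments of the simpler three-parameter gamma DSD in closed form and checks them numerically. It is illustrative only: the parameter values are arbitrary, and the article itself advocates the four-parameter generalized gamma model.

```python
import math

def gamma_dsd(D, N0, mu, lam):
    """Three-parameter gamma DSD: N(D) = N0 * D**mu * exp(-lam * D)."""
    return N0 * D ** mu * math.exp(-lam * D)

def moment_analytic(k, N0, mu, lam):
    """Closed-form k-th moment: M_k = N0 * Gamma(k + mu + 1) / lam**(k + mu + 1)."""
    return N0 * math.gamma(k + mu + 1) / lam ** (k + mu + 1)

def moment_numeric(k, N0, mu, lam, Dmax=20.0, steps=100000):
    """Riemann-sum check of M_k = integral of D**k * N(D) dD over [0, Dmax]."""
    h = Dmax / steps
    return sum((i * h) ** k * gamma_dsd(i * h, N0, mu, lam)
               for i in range(1, steps)) * h

# illustrative parameters; in the unified theory, the reference moments
# M3 and M6 (plus two shape parameters) fix all remaining moments via
# power laws, as the abstract notes
M3 = moment_analytic(3, N0=8000.0, mu=2.0, lam=2.5)
M6 = moment_analytic(6, N0=8000.0, mu=2.0, lam=2.5)
```

The same closed-form/numeric agreement holds for any moment order, which is what makes moment-based retrieval from two radar-derived reference moments practical.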
  2.
    Biological neural networks face a formidable task: performing reliable computations in the face of intrinsic stochasticity in individual neurons, imprecisely specified synaptic connectivity, and non-negligible delays in synaptic transmission. A common approach to combating such biological heterogeneity involves averaging over large redundant networks of N neurons, yielding coding errors that decrease classically as 1/√N. Recent work demonstrated a novel mechanism whereby recurrent spiking networks can efficiently encode dynamic stimuli, achieving a superclassical scaling in which coding errors decrease as 1/N. This mechanism involves two key ideas: predictive coding, and a tight balance, or cancellation, between strong feedforward inputs and strong recurrent feedback. However, the theoretical principles governing the efficacy of balanced predictive coding and its robustness to noise, synaptic weight heterogeneity, and communication delays remain poorly understood. To discover such principles, we introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated, unlike in previous balanced network models, and we develop a mean-field theory of coding accuracy. Overall, our work provides and solves a general theoretical framework for dissecting the differential contributions of neural noise, synaptic disorder, chaos, synaptic delays, and balance to the fidelity of predictive neural codes; reveals the fundamental role that balance plays in achieving superclassical scaling; and unifies previously disparate models in theoretical neuroscience.
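The classical 1/√N averaging baseline that this abstract contrasts with superclassical 1/N coding can be seen in a toy Monte Carlo estimate. This is a sketch under our own assumptions (independent Gaussian neuron noise, a static stimulus), not the paper's spiking-network model:

```python
import math
import random

def coding_error(N, trials=2000, sigma=1.0, seed=0):
    """RMS error of estimating a stimulus s = 0 by averaging the
    responses of N independent neurons, each corrupted by Gaussian
    noise of scale sigma (classical redundancy, no predictive coding)."""
    rng = random.Random(seed)
    sq_err = 0.0
    for _ in range(trials):
        estimate = sum(rng.gauss(0.0, sigma) for _ in range(N)) / N
        sq_err += estimate * estimate
    return math.sqrt(sq_err / trials)

# error shrinks roughly as 1/sqrt(N); the balanced predictive coding
# mechanism studied in the paper achieves the faster 1/N rate
print(coding_error(10), coding_error(1000))
```

Going from N = 10 to N = 1000 neurons (a 100-fold increase) cuts this classical error only about 10-fold, which is exactly the inefficiency that superclassical coding overcomes.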
  3.
    We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Such problems arise widely in the theory of random graphs, theoretical computer science, and statistical physics. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising p-spin glass model, and (b) finding a large independent set in a sparse Erdős–Rényi graph. Two families of algorithms are considered: (a) low-degree polynomials of the input, a general framework that captures methods such as approximate message passing and local algorithms on sparse graphs, among others; and (b) the Langevin dynamics algorithm, a canonical Monte Carlo analogue of the gradient descent algorithm (applicable only to the spherical p-spin glass Hamiltonian). We show that neither family of algorithms can produce nearly optimal solutions with high probability. Our proof uses the fact that both models are known to exhibit a variant of the overlap gap property (OGP) for near-optimal solutions: in both models, every two solutions whose objective values are above a certain threshold are either close to or far from each other. The crux of our proof is the stability of both algorithms: a small perturbation of the input induces a small perturbation of the output. By an interpolation argument, such a stable algorithm cannot overcome the OGP barrier. The stability of the Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials is established using concepts from Gaussian and Boolean Fourier analysis, including noise sensitivity, hypercontractivity, and total influence.
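A minimal sketch of the two objects named in the abstract, the Ising p-spin Hamiltonian and the solution overlap underlying the OGP. This is a brute-force, illustrative implementation following the standard textbook definitions, not code from the paper:

```python
import itertools
import random

def pspin_hamiltonian(sigma, J, p):
    """Ising p-spin Hamiltonian:
    H(sigma) = n**(-(p - 1) / 2) * sum over (i1, ..., ip) of
               J[i1, ..., ip] * sigma_i1 * ... * sigma_ip,
    with sigma_i in {-1, +1} and Gaussian couplings J."""
    n = len(sigma)
    total = 0.0
    for idx in itertools.product(range(n), repeat=p):
        term = J[idx]
        for i in idx:
            term *= sigma[i]
        total += term
    return n ** (-(p - 1) / 2) * total

def overlap(sigma, tau):
    """Normalized overlap R = (1/n) * sum_i sigma_i * tau_i.  The OGP
    says every pair of above-threshold solutions has overlap either
    near 1 or below a gap -- never in the intermediate band."""
    return sum(s * t for s, t in zip(sigma, tau)) / len(sigma)

# brute-force evaluation on a tiny instance with Gaussian couplings
rng = random.Random(0)
n, p = 6, 3
J = {idx: rng.gauss(0.0, 1.0) for idx in itertools.product(range(n), repeat=p)}
print(pspin_hamiltonian([1] * n, J, p), overlap([1] * n, [-1] * n))
```

The brute-force sum has n**p terms, so this is only feasible for toy sizes; the paper's results are asymptotic statements about large n.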
  4. Immersive virtual tours based on 360-degree cameras, showing famous outdoor scenery, are becoming more and more desirable due to travel costs, pandemics, and other constraints. To feel immersive, a user must receive the view accurately corresponding to her position and orientation in the virtual space as she moves inside it, and this requires the cameras' orientations to be known. Outdoor tour contexts have numerous, ultra-sparse cameras deployed across a wide area, making camera pose estimation challenging. As a result, pose estimation techniques like SLAM, which require mobile or dense cameras, are not applicable. In this paper we present a novel strategy called 360ViewPET, which automatically estimates the relative poses of two stationary, ultra-sparse (15 meters apart) 360-degree cameras using one equirectangular image taken by each camera. Our experiments show that it achieves accurate pose estimation, with a mean error as low as 0.9 degrees.
  5. The molecular features that dictate interactions between functionalized nanoparticles and biomolecules are not well understood. This is in part because, for highly charged nanoparticles in solution, establishing a clear connection between the molecular features of surface ligands and common experimental observables such as the ζ potential requires going beyond classical continuum and mean-field models. Motivated by these considerations, molecular dynamics simulations are used to probe the electrostatic properties of functionalized gold nanoparticles and their interaction with a charged peptide in salt solutions. Counterions are observed to screen the bare ligand charge to a significant degree even at the moderate salt concentration of 50 mM. As a result, the apparent charge density and ζ potential are largely insensitive to the bare ligand charge densities, which fall in the range of ligand densities typically measured experimentally for gold nanoparticles. While this screening effect is predicted by classical models such as Manning condensation theory, the magnitudes of the apparent surface charge from microscopic simulations and from mean-field models differ significantly. Moreover, our simulations found that the chemical features of the surface ligand (e.g., primary vs. quaternary amines, heterogeneous ligand lengths) modulate the interfacial ion and water distributions and therefore the interfacial potential. The importance of interfacial water is further highlighted by the observation that introducing a fraction of hydrophobic ligands enhances the strength of electrostatic binding of the charged peptide. Finally, the simulations highlight that the electric double layer is perturbed upon binding; as a result, it is the bare charge density, rather than the apparent charge density or ζ potential, that better correlates with the binding affinity of the nanoparticle for a charged peptide. Overall, our study highlights the importance of molecular features of the nanoparticle/water interface and suggests a set of design rules for modulating electrostatically driven interactions at nano/bio interfaces.
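As context for the counterion screening discussed above, the textbook mean-field (Poisson–Boltzmann) Debye length at the abstract's 50 mM salt concentration can be computed directly; this is exactly the kind of continuum estimate the simulations go beyond. The constants and water parameters below are standard values we supply, not numbers from the paper:

```python
import math

# standard physical constants (SI), rounded CODATA values
EPS0 = 8.854e-12   # vacuum permittivity, F/m
K_B = 1.381e-23    # Boltzmann constant, J/K
E_CHG = 1.602e-19  # elementary charge, C
N_A = 6.022e23     # Avogadro constant, 1/mol

def debye_length_nm(conc_molar, eps_r=78.5, temp_k=298.15):
    """Mean-field (Poisson-Boltzmann) Debye screening length for a
    1:1 electrolyte, in nanometers.  eps_r is the relative
    permittivity of water near room temperature."""
    ionic = conc_molar * 1000.0  # mol/L -> mol/m^3
    lam = math.sqrt(eps_r * EPS0 * K_B * temp_k /
                    (2 * N_A * E_CHG ** 2 * ionic))
    return lam * 1e9

# at the abstract's 50 mM monovalent salt, screening acts over ~1.4 nm
print(debye_length_nm(0.050))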