Title: A dynamic process reference model for sparse networks with reciprocity
Many social and other networks exhibit stable size scaling relationships: features such as mean degree or reciprocation rates change slowly or are approximately constant as the number of vertices increases. Statistical network models built on simple Bernoulli baseline (or reference) measures often behave unrealistically in this respect, motivating the development of sparse reference models that preserve properties such as mean degree scaling. In this paper, we generalize recent work on the micro-foundations of such reference models to the case of sparse directed graphs with non-vanishing reciprocity, providing a dynamic process interpretation of the emergence of stable macroscopic behavior.
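As an illustration of the kind of reference measure the abstract describes, the following sketch samples a dyad-independent directed graph in which both the single-arc and mutual-dyad probabilities scale as 1/n. With this scaling, mean degree and the reciprocation rate stay roughly constant as the network grows, unlike a fixed-probability Bernoulli baseline. The parameters c and r are illustrative knobs, not quantities taken from the paper.

```python
import random

def sample_dyad_graph(n, c=2.0, r=1.0, seed=0):
    """Sample a sparse directed graph from a dyad-independent measure.
    For each unordered pair {i, j}:
      - mutual dyad (both arcs) with probability r / n
      - a single arc in each direction with probability c / n each
      - otherwise no arcs.
    Both arc types scale as 1/n, so mean out-degree (about r + c) and
    the reciprocation rate (about r / (r + c)) are stable in n."""
    rng = random.Random(seed)
    arcs = set()
    for i in range(n):
        for j in range(i + 1, n):
            u = rng.random()
            if u < r / n:
                arcs.add((i, j)); arcs.add((j, i))
            elif u < (r + c) / n:
                arcs.add((i, j))
            elif u < (r + 2 * c) / n:
                arcs.add((j, i))
    return arcs

def stats(arcs, n):
    """Mean out-degree and fraction of arcs that are reciprocated."""
    mutual = sum(1 for (i, j) in arcs if (j, i) in arcs)
    mean_out_degree = len(arcs) / n
    reciprocity = mutual / len(arcs) if arcs else 0.0
    return mean_out_degree, reciprocity

# Both statistics hover near (3, 1/3) as n grows
for n in (200, 800, 1600):
    print(n, stats(sample_dyad_graph(n), n))
```

Under a fixed-probability Bernoulli reference measure, by contrast, mean degree would grow linearly in n, which is the unrealistic behavior the sparse reference models are designed to avoid.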
Award ID(s): 1826589, 1361425, 1939237
PAR ID: 10184450
Author(s) / Creator(s):
Date Published:
Journal Name: The Journal of Mathematical Sociology
ISSN: 0022-250X
Page Range / eLocation ID: 1 to 27
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. The raindrop size distribution (DSD) is vital for applications such as quantitative precipitation estimation, understanding microphysical processes, and the validation and improvement of two-moment bulk microphysical schemes. We trace the history of DSD representation and its linkage to polarimetric radar observables, from functional forms (exponential, gamma, and generalized gamma models) to normalization (un-normalized, single-moment, and double-moment scaling normalized). The four-parameter generalized gamma model is a good candidate for an optimal representation of DSD variability. Disdrometer measurements from Montreal, Canada were found to exhibit five archetypical shapes, consisting of drizzle, the larger precipitation drops, and the 'S'-shaped curvature that frequently occurs between the drizzle and the larger-sized precipitation. Similar 'S'-shaped DSDs were reproduced by combining disdrometric measurements of small-sized drops from an optical array probe with large-sized drops from a two-dimensional video disdrometer (2DVD). A unified theory based on double-moment scaling normalization is described. The theory assumes multiple power-law relations among the moments; DSDs are scaling-normalized by two characteristic parameters, each expressed as a combination of any two moments. The normalized DSDs are remarkably stable, so the mean underlying shape is fitted to the generalized gamma model, from which the 'optimized' two shape parameters are obtained. The other moments of the distribution are then obtained as products of power laws of the reference moments M3 and M6 together with the two shape parameters. These reference moments can come from dual-polarimetric measurements: M6 from the attenuation-corrected reflectivity, and M3 from the attenuation-corrected differential reflectivity and the specific differential propagation phase. Thus, all moments of the distribution can be calculated, and the microphysical evolution of the DSD can be inferred. This is one of the major findings of this article.
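The double-moment step can be illustrated numerically. A minimal sketch, using a plain three-parameter gamma DSD rather than the paper's generalized gamma model: given the two reference moments M3 and M6 and a known shape parameter, the distribution parameters are recovered and every other moment follows. The numeric values are arbitrary, not taken from the article.

```python
import math

def gamma_moment(n0, mu, lam, k):
    """k-th moment of a gamma DSD N(D) = n0 * D**mu * exp(-lam * D),
    integrated over D in (0, inf): n0 * Gamma(mu+k+1) / lam**(mu+k+1)."""
    return n0 * math.gamma(mu + k + 1) / lam ** (mu + k + 1)

def moment_from_references(m3, m6, mu, k):
    """Recover M_k from the reference moments M3 and M6, assuming the
    shape mu is known (analogous to the 'optimized shape' step above).
    M6 / M3 = [Gamma(mu+7) / Gamma(mu+4)] / lam**3 gives lam, then n0."""
    lam = (math.gamma(mu + 7) / math.gamma(mu + 4) * m3 / m6) ** (1.0 / 3.0)
    n0 = m3 * lam ** (mu + 4) / math.gamma(mu + 4)
    return gamma_moment(n0, mu, lam, k)

# Consistency check with arbitrary illustrative parameters
n0, mu, lam = 8000.0, 2.0, 3.0
m3 = gamma_moment(n0, mu, lam, 3)
m6 = gamma_moment(n0, mu, lam, 6)
m4_direct = gamma_moment(n0, mu, lam, 4)
m4_from_refs = moment_from_references(m3, m6, mu, 4)
print(m4_direct, m4_from_refs)   # the two values agree
```

In the radar application described above, M6 and M3 would come from attenuation-corrected polarimetric observables rather than being computed from known DSD parameters.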
  2. Understanding subsurface heterogeneity is critical for predicting groundwater flow, pollutant transport, and managing water resources. While traditional methods often rely on sparse borehole or geophysical data, this study explores a spectral analysis approach to infer aquifer structure from groundwater level fluctuations. We use a coupled surface–subsurface flow model to simulate hydraulic head time series in synthetic aquifers with bimodal hydraulic conductivity distributions. The frequency characteristics of these head fluctuations are analyzed to compute the scaling exponent (defined as the slope of the log-power spectral density of head fluctuations versus log-frequency) and its spatial gradient magnitude. Results show that areas with significant heterogeneity, such as transitions between high- and low-permeability zones, exhibit strong spatial gradients in the scaling exponent. These features can be used to delineate unsaturated zones, groundwater flow systems, and aquifer heterogeneity. By testing four scenarios with different hydraulic conductivity contrasts, we demonstrate that this method is sensitive to aquifer configuration. Our findings suggest that the gradient magnitude of the scaling exponent may serve as a diagnostic tool for characterizing heterogeneity in groundwater models and has the potential for future applications in estimating permeability distributions from monitored groundwater level data. 
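The scaling exponent defined above (the slope of log power spectral density versus log frequency) can be sketched as follows. This is a bare periodogram fit on a synthetic random-walk series, not the coupled surface-subsurface model used in the study; the low-frequency cutoff and lack of windowing are simplifying assumptions.

```python
import numpy as np

def scaling_exponent(series, dt=1.0):
    """Slope of log-PSD versus log-frequency for a head time series,
    via a simple periodogram.  Only the lowest quarter of frequency
    bins is fitted, where the power-law behavior dominates."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]          # drop DC bin
    psd = np.abs(np.fft.rfft(x))[1:] ** 2
    keep = len(freqs) // 4                              # low-frequency fit
    slope, _ = np.polyfit(np.log(freqs[:keep]), np.log(psd[:keep]), 1)
    return slope

# Synthetic check: a random walk has PSD ~ f**(-2),
# so the scaling exponent should come out near -2.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(4096))
print(scaling_exponent(walk))
```

In the study's setting, this exponent is computed at many monitoring locations and its spatial gradient magnitude is the diagnostic for heterogeneity; the sketch covers only the single-series step.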
  3. Biological neural networks face a formidable task: performing reliable computations in the face of intrinsic stochasticity in individual neurons, imprecisely specified synaptic connectivity, and nonnegligible delays in synaptic transmission. A common approach to combating such biological heterogeneity involves averaging over large redundant networks of N neurons, resulting in coding errors that decrease classically as 1/sqrt(N). Recent work demonstrated a novel mechanism whereby recurrent spiking networks could efficiently encode dynamic stimuli, achieving a superclassical scaling in which coding errors decrease as 1/N. This mechanism involved two key ideas: predictive coding, and a tight balance, or cancellation, between strong feedforward inputs and strong recurrent feedback. However, the theoretical principles governing the efficacy of balanced predictive coding and its robustness to noise, synaptic weight heterogeneity, and communication delays remain poorly understood. To discover such principles, we introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated, unlike in previous balanced network models, and we develop a mean-field theory of coding accuracy. Overall, our work provides and solves a general theoretical framework for dissecting the differential contributions of neural noise, synaptic disorder, chaos, synaptic delays, and balance to the fidelity of predictive neural codes, reveals the fundamental role that balance plays in achieving superclassical scaling, and unifies previously disparate models in theoretical neuroscience.
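The classical baseline mentioned above, coding error shrinking as 1/sqrt(N) under naive averaging, can be checked with a small simulation. This illustrates only the classical scaling, not the balanced predictive-coding mechanism that achieves 1/N; the stimulus value and noise level are arbitrary.

```python
import numpy as np

def coding_error(n_neurons, n_trials=2000, sigma=1.0, seed=0):
    """RMS error of the naive population estimate: average the readouts
    of n_neurons independent noisy neurons encoding a scalar stimulus.
    Independent noise averages out, so the error scales as
    sigma / sqrt(n_neurons)."""
    rng = np.random.default_rng(seed)
    stimulus = 0.7                                   # arbitrary scalar stimulus
    readouts = stimulus + sigma * rng.standard_normal((n_trials, n_neurons))
    estimates = readouts.mean(axis=1)
    return np.sqrt(np.mean((estimates - stimulus) ** 2))

# Quadrupling N halves the error: the classical 1/sqrt(N) law
for n in (16, 64, 256):
    print(n, coding_error(n))
```

The superclassical 1/N scaling of balanced predictive coding requires correlated, recurrently cancelled errors rather than the independent noise assumed here, which is exactly the regime the mean-field theory in the abstract analyzes.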
  4. We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Such problems arise widely in the theory of random graphs, theoretical computer science, and statistical physics. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising p-spin glass model, and (b) finding a large independent set in a sparse Erdős-Rényi graph. Two families of algorithms are considered: (a) low-degree polynomials of the input, a general framework that captures methods such as approximate message passing and local algorithms on sparse graphs, among others; and (b) the Langevin dynamics algorithm, a canonical Monte Carlo analogue of the gradient descent algorithm (applicable only to the spherical p-spin glass Hamiltonian). We show that neither family of algorithms can produce nearly optimal solutions with high probability. Our proof uses the fact that both models are known to exhibit a variant of the overlap gap property (OGP) of near-optimal solutions. Specifically, for both models, every two solutions whose objective values are above a certain threshold are either close to or far from each other. The crux of our proof is the stability of both algorithms: a small perturbation of the input induces a small perturbation of the output. By an interpolation argument, such a stable algorithm cannot overcome the OGP barrier. The stability of the Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials is established using concepts from Gaussian and Boolean Fourier analysis, including noise sensitivity, hypercontractivity, and total influence.
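For intuition about the Langevin dynamics algorithm mentioned above, here is a toy projected Langevin ascent on a spherical p = 2 Hamiltonian. This is only a sketch of the algorithm's mechanics: the p = 2 case is easy (its optimum is an eigenvalue problem), whereas the hardness results in the abstract concern p >= 3 mixed p-spin models where the OGP obstructs this dynamics. Step size, temperature, and dimensions are illustrative.

```python
import numpy as np

def langevin_pspin(J, steps=3000, dt=0.05, beta=100.0, seed=0):
    """Projected Langevin ascent on H(x) = x @ A @ x with A = (J + J.T)/2,
    constrained to the sphere |x|**2 = n.  Each step follows the gradient
    plus Gaussian noise of variance 2*dt/beta, then projects back to the
    sphere.  Returns the achieved energy per spin, x @ A @ x / n."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    A = (J + J.T) / 2.0
    x = rng.standard_normal(n)
    x *= np.sqrt(n) / np.linalg.norm(x)               # start on the sphere
    for _ in range(steps):
        x = x + dt * 2.0 * A @ x + np.sqrt(2.0 * dt / beta) * rng.standard_normal(n)
        x *= np.sqrt(n) / np.linalg.norm(x)           # project back
    return (x @ A @ x) / n

# Gaussian disorder; for p = 2 the dynamics climbs toward the spectral
# edge of A (about sqrt(2) per spin for this normalization)
rng = np.random.default_rng(1)
n = 200
J = rng.standard_normal((n, n)) / np.sqrt(n)
print(langevin_pspin(J))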
  5. Immersive virtual tours based on 360-degree cameras, showing famous outdoor scenery, are becoming increasingly desirable due to travel costs, pandemics, and other constraints. For a tour to feel immersive, a user must receive the view that accurately corresponds to her position and orientation in the virtual space as she moves, which requires the cameras' orientations to be known. Outdoor tour contexts have numerous, ultra-sparse cameras deployed across a wide area, making camera pose estimation challenging. As a result, pose estimation techniques like SLAM, which require mobile or dense cameras, are not applicable. In this paper we present a novel strategy called 360ViewPET, which automatically estimates the relative poses of two stationary, ultra-sparse (15 meters apart) 360-degree cameras using one equirectangular image taken by each camera. Our experiments show that it achieves accurate pose estimation, with a mean error as low as 0.9 degrees.
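A basic building block for any pose estimation pipeline over equirectangular images is the mapping from a pixel to a unit viewing-ray direction on the sphere. A minimal sketch under a standard convention (longitude across the width, latitude down the height); 360ViewPET's actual axis conventions and method may differ.

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray direction.
    Longitude spans [-pi, pi] across the width; latitude spans
    [pi/2, -pi/2] from the top row to the bottom row."""
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)   # right
    y = math.sin(lat)                   # up
    z = math.cos(lat) * math.cos(lon)   # forward
    return (x, y, z)

# The image center looks straight ahead (+z); the top row looks up (+y)
print(pixel_to_ray(2048, 1024, 4096, 2048))
print(pixel_to_ray(0, 0, 4096, 2048))
```

Given such rays in two cameras and pixel correspondences between the two images, the relative rotation and (up-to-scale) translation can be estimated with standard epipolar geometry; the sketch covers only the projection step.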