Title: Maxwell’s Demon: Controlling Entropy via Discrete Ricci Flow over Networks
In this work, we propose to utilize discrete graph Ricci flow to alter network entropy through feedback control. Since such feedback input can “reverse” entropic changes, we adopt the moniker of Maxwell’s Demon to motivate our approach. In particular, it has recently been shown that Ricci curvature from geometry is intrinsically connected to Boltzmann entropy as well as to the functional robustness of networks, i.e., their ability to maintain functionality in the presence of random fluctuations. From this, the discrete Ricci flow provides a natural avenue to “rewire” a particular network’s underlying geometry so as to improve throughput and resilience. In real-world settings, one may wish to impose nonlinear constraints among particular agents in order to understand the network’s dynamic evolution, and controlling the discrete Ricci flow may then be necessary (e.g., we may seek to understand the entropic dynamics and curvature “flow” between two networks, as opposed to curvature shrinkage alone). In turn, this can be formulated as a natural control problem: we apply feedback control to the discrete Ricci flow and show that, under a particular discretization, namely Ollivier-Ricci curvature, stability can be established via Lyapunov analysis. We conclude with preliminary results and remarks on potential applications that will be the subject of future work.
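To make these quantities concrete, below is a minimal illustrative sketch of computing Ollivier-Ricci edge curvature on a weighted graph and taking one feedback-controlled discrete Ricci flow step on the edge weights. The lazy random-walk measures, the proportional feedback toward a target curvature `kappa_target`, and the multiplicative weight update are assumptions made for illustration; they are not the specific control law or discretization analyzed in the paper.

```python
# Illustrative sketch (not the paper's exact formulation): Ollivier-Ricci
# edge curvature via a small optimal-transport LP, plus one feedback-
# controlled discrete Ricci flow step on the edge weights.
import networkx as nx
import numpy as np
from scipy.optimize import linprog


def neighbor_measure(G, x, alpha=0.5):
    """Lazy random-walk measure: mass alpha stays at x, the rest is spread
    uniformly over the neighbors of x (an assumption; other choices exist)."""
    nbrs = list(G.neighbors(x))
    support = [x] + nbrs
    probs = np.array([alpha] + [(1.0 - alpha) / len(nbrs)] * len(nbrs))
    return support, probs


def ollivier_ricci(G, x, y, alpha=0.5, weight="weight"):
    """kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y), with W1 from a transport LP."""
    sx, mu_x = neighbor_measure(G, x, alpha)
    sy, mu_y = neighbor_measure(G, y, alpha)
    # Shortest-path costs between the two supports.
    C = np.array([[nx.dijkstra_path_length(G, u, v, weight=weight) for v in sy]
                  for u in sx])
    n, m = C.shape
    # Transport-plan marginals: rows sum to mu_x, columns sum to mu_y.
    A_eq, b_eq = [], []
    for i in range(n):
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
        b_eq.append(mu_x[i])
    for j in range(m):
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
        b_eq.append(mu_y[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    w1 = res.fun
    return 1.0 - w1 / nx.dijkstra_path_length(G, x, y, weight=weight)


def ricci_flow_step(G, step=0.1, kappa_target=0.0, weight="weight"):
    """One discrete flow step with a proportional feedback term: edges whose
    curvature exceeds the target are shrunk, edges below it are stretched."""
    kappas = {(u, v): ollivier_ricci(G, u, v, weight=weight) for u, v in G.edges()}
    for (u, v), k in kappas.items():
        # Multiplicative update; clamp to keep edge weights positive.
        G[u][v][weight] = max(G[u][v][weight] * (1.0 - step * (k - kappa_target)), 1e-6)


# Toy usage on a small graph with unit initial weights.
G = nx.karate_club_graph()
nx.set_edge_attributes(G, 1.0, "weight")
for _ in range(3):
    ricci_flow_step(G, step=0.1, kappa_target=0.0)
```

Setting `kappa_target` to zero recovers an uncontrolled flow that shrinks positively curved edges and stretches negatively curved ones; a nonzero target acts as a simple set point for the feedback.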
Award ID(s):
1749937
NSF-PAR ID:
10132942
Author(s) / Creator(s):
Date Published:
Journal Name:
International Conference on Network Science
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Deciphering the non-trivial interactions and mechanisms driving the evolution of time-varying complex networks (TVCNs) plays a crucial role in designing optimal control strategies for such networks or enhancing their causal predictive capabilities. In this paper, we advance the science of TVCNs by providing a mathematical framework through which we can gauge how local changes within a complex weighted network affect its global properties. More precisely, we focus on unraveling unknown geometric properties of a network and determining their implications for detecting phase transitions within the dynamics of a TVCN. In this vein, we aim to elaborate a novel and unified approach that depicts the relationship between local interactions in a complex network and its global kinetics. We propose a geometry-inspired framework to characterize the network’s state and detect phase transitions between different states, in order to infer the TVCN’s dynamics. A phase of a TVCN is determined by its Forman–Ricci curvature. Numerical experiments show the usefulness of the proposed curvature formalism for detecting the transition between phases within artificially generated networks. Furthermore, we demonstrate the effectiveness of the proposed framework in identifying the phase transition phenomena governing the training and learning processes of artificial neural networks. Moreover, we exploit this approach to investigate phase transition phenomena in cellular reprogramming by interpreting the dynamics of Hi-C matrices as TVCNs and observing singularity trends in the curvature network entropy. Finally, we demonstrate that this curvature formalism can detect political change: applied to US Senate data, the framework identifies the shift in the United States after the 1994 election, as discussed by political scientists. (A minimal sketch of the underlying Forman-curvature computation is given after this list.)

     
  2. Abstract

    Although brain functionality is often remarkably robust to lesions and other insults, it may be fragile when these take place in specific locations. Previous attempts to quantify robustness and fragility sought to understand how the functional connectivity of brain networks is affected by structural changes, using either model-based predictions or empirical studies of the effects of lesions. We advance a geometric viewpoint relying on a notion of network curvature, the so-called Ollivier-Ricci curvature. This approach has been proposed to assess financial market robustness and to differentiate biological networks of cancer cells from healthy ones. Here, we apply curvature-based measures to brain structural networks to identify robust and fragile brain regions in healthy subjects. We show that curvature can also be used to track changes in brain connectivity related to age and autism spectrum disorder (ASD), and we obtain results that are in agreement with previous MRI studies.

     
  3. As hyperscalers such as Google, Microsoft, and Amazon play an increasingly important role in today's Internet, they are also capable of manipulating probe packets that traverse their privately owned and operated backbones. As a result, standard traceroute-based measurement techniques are no longer a reliable means for assessing network connectivity in these global-scale cloud provider infrastructures. In response to these developments, we present a new empirical approach for elucidating connectivity in these private backbone networks. Our approach relies on using only lightweight (i.e., simple, easily interpretable, and readily available) measurements, but requires applying heavyweight mathematical techniques for analyzing these measurements. In particular, we describe a new method that uses network latency measurements and relies on concepts from Riemannian geometry (i.e., Ricci curvature) to assess the characteristics of the connectivity fabric of a given network infrastructure. We complement this method with a visualization tool that generates a novel manifold view of a network's delay space. We demonstrate our approach by utilizing latency measurements from available vantage points and virtual machines running in datacenters of three large cloud providers to study different aspects of connectivity in their private backbones and show how our generated manifold views enable us to expose and visualize critical aspects of this connectivity.

     
  4. The main premise of this work is that since large cloud providers can and do manipulate probe packets that traverse their privately owned and operated backbones, standard traceroute-based measurement techniques are no longer a reliable means for assessing network connectivity in large cloud provider infrastructures. In response to these developments, we present a new empirical approach for elucidating private connectivity in today's Internet. Our approach relies on using only "light-weight" (i.e., simple, easily interpretable, and readily available) measurements, but requires applying a "heavy-weight" or advanced mathematical analysis. In particular, we describe a new method for assessing the characteristics of network path connectivity that is based on concepts from Riemannian geometry (i.e., Ricci curvature) and also relies on an array of carefully crafted visualizations (e.g., a novel manifold view of a network's delay space). We demonstrate our method by utilizing latency measurements from RIPE Atlas anchors and virtual machines running in data centers of three large cloud providers to (i) study different aspects of connectivity in their private backbones and (ii) show how our manifold-based view enables us to expose and visualize critical aspects of this connectivity over different geographic scales. (A minimal sketch of a simple delay-space embedding is given after this list.)
  5. While cross entropy (CE) is the most commonly used loss function for training deep neural networks on classification tasks, many alternative losses have been developed to obtain better empirical performance. Which of these is best to use is still unclear, because multiple factors seem to affect the answer, such as properties of the dataset, the choice of network architecture, and so on. This paper studies the choice of loss function by examining the last-layer features of deep networks, drawing inspiration from a recent line of work showing that the global optimal solution of the CE and mean squared error (MSE) losses exhibits a Neural Collapse phenomenon. That is, for sufficiently large networks trained until convergence, (i) all features of the same class collapse to the corresponding class mean and (ii) the means associated with different classes are in a configuration where their pairwise distances are all equal and maximized. We extend such results and show through global solution and landscape analyses that a broad family of loss functions, including the commonly used label smoothing (LS) and focal loss (FL), exhibits Neural Collapse. Hence, all relevant losses (i.e., CE, LS, FL, MSE) produce equivalent features on training data. In particular, based on the unconstrained feature model assumption, we provide a global landscape analysis for the LS loss and a local landscape analysis for the FL loss, and show that the (only!) global minimizers are Neural Collapse solutions, while all other critical points are strict saddles whose Hessians exhibit negative-curvature directions, either globally for the LS loss or locally near the optimal solution for the FL loss. The experiments further show that Neural Collapse features obtained from all relevant losses (i.e., CE, LS, FL, MSE) lead to largely identical performance on test data as well, provided that the network is sufficiently large and trained until convergence. (A minimal sketch of numerical Neural Collapse diagnostics is given after this list.)
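For entry 1 above, the following is a minimal sketch of the weighted Forman-Ricci edge curvature (with unit node weights and no triangle augmentation) together with a per-snapshot summary one could track over a time-varying network. The curvature variant, entropy functional, and phase-detection criterion used in that work may differ, and the snapshot generator below is a synthetic stand-in.

```python
# Sketch for entry 1: weighted Forman-Ricci edge curvature (unit node weights,
# no triangle augmentation) and a per-snapshot summary that one could track
# over time to look for abrupt changes. Illustrative only.
import networkx as nx
import numpy as np


def forman_curvature(G, u, v, weight="weight"):
    """F(u, v) = w_e * (2/w_e - sum over the other edges at u and at v of
    1/sqrt(w_e * w')), which reduces to 4 - deg(u) - deg(v) for unit weights."""
    w_e = G[u][v][weight]
    s_u = sum(1.0 / np.sqrt(w_e * G[u][n][weight]) for n in G.neighbors(u) if n != v)
    s_v = sum(1.0 / np.sqrt(w_e * G[v][n][weight]) for n in G.neighbors(v) if n != u)
    return w_e * (2.0 / w_e - s_u - s_v)


def snapshot_summary(G, weight="weight"):
    """Mean and spread of the edge-curvature distribution for one snapshot."""
    vals = np.array([forman_curvature(G, u, v, weight) for u, v in G.edges()])
    return float(vals.mean()), float(vals.std())


# Toy time-varying network: a sudden jump in density shows up as a shift
# in the curvature distribution between snapshots.
densities = [0.05, 0.05, 0.05, 0.25]
for t, p in enumerate(densities):
    G = nx.gnp_random_graph(60, p, seed=t)
    nx.set_edge_attributes(G, 1.0, "weight")
    print(t, snapshot_summary(G))
```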
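For entries 3 and 4 above, one simple way to obtain a low-dimensional view of a network's delay space is classical multidimensional scaling (MDS) of the pairwise latency matrix, sketched below with a synthetic round-trip-time matrix. This only illustrates the idea of embedding a delay space; the authors' manifold view is built with Ricci-curvature-based analysis and purpose-built visualizations, which this sketch does not reproduce.

```python
# Sketch for entries 3-4: classical MDS embedding of a pairwise latency
# (delay) matrix into 2-D, a simple stand-in for a "manifold view" of a
# network's delay space. The latency matrix here is synthetic.
import numpy as np


def classical_mds(D, dim=2):
    """Embed a symmetric distance matrix D into `dim` dimensions via
    double-centering and an eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dim]      # largest eigenvalues first
    scale = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * scale             # n x dim coordinates


# Synthetic round-trip-time matrix between 5 vantage points (milliseconds).
rtt = np.array([[0, 12, 80, 85, 90],
                [12, 0, 75, 82, 88],
                [80, 75, 0, 10, 15],
                [85, 82, 10, 0, 9],
                [90, 88, 15, 9, 0]], dtype=float)
coords = classical_mds(rtt, dim=2)
print(coords)   # nearby vantage points land close together in the 2-D view
```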
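For entry 5 above, Neural Collapse can be checked numerically from last-layer features. Below is a minimal sketch of two common diagnostics: a within-class versus between-class variability ratio and the pairwise cosines of centered class means (which approach -1/(K-1) under collapse). The feature matrix here is random placeholder data; in practice one would use features extracted from a trained network, and the exact metrics used in that paper may differ.

```python
# Sketch for entry 5: simple numerical diagnostics for Neural Collapse,
# computed from last-layer features H (one row per sample) and labels y.
# NC1: within-class variability relative to between-class variability.
# NC2: pairwise cosines of centered class means (target -1/(K-1) at collapse).
import numpy as np


def neural_collapse_metrics(H, y):
    classes = np.unique(y)
    K, d = len(classes), H.shape[1]
    global_mean = H.mean(axis=0)
    means = np.stack([H[y == c].mean(axis=0) for c in classes])   # K x d
    centered = means - global_mean

    # NC1: tr(Sigma_W @ pinv(Sigma_B)) / K.
    Sigma_W = np.zeros((d, d))
    for c, mu in zip(classes, means):
        X = H[y == c] - mu
        Sigma_W += X.T @ X / len(H)
    Sigma_B = centered.T @ centered / K
    nc1 = np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / K

    # NC2: mean and spread of the off-diagonal cosines between class means.
    normed = centered / np.linalg.norm(centered, axis=1, keepdims=True)
    cos = normed @ normed.T
    off_diag = cos[~np.eye(K, dtype=bool)]
    return nc1, float(off_diag.mean()), float(off_diag.std())


# Placeholder features: 4 classes, 64-dim features, 50 samples per class.
rng = np.random.default_rng(0)
y = np.repeat(np.arange(4), 50)
H = rng.normal(size=(200, 64)) + np.repeat(rng.normal(size=(4, 64)), 50, axis=0)
print(neural_collapse_metrics(H, y))
```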