

Title: Limits of multifunctionality in tunable networks

Nature is rife with networks that are functionally optimized to propagate inputs to perform specific tasks. Whether via genetic evolution or dynamic adaptation, many networks create functionality by locally tuning interactions between nodes. Here we explore this behavior in two contexts: strain propagation in mechanical networks and pressure redistribution in flow networks. By adding and removing links, we are able to optimize both types of networks to perform specific functions. We define a single function as a tuned response of a single “target” link when another, predetermined part of the network is activated. Using network structures generated via such optimization, we investigate how many simultaneous functions such networks can be programmed to fulfill. We find that both flow and mechanical networks display qualitatively similar phase transitions in the number of targets that can be tuned, along with the same robust finite-size scaling behavior. We discuss how these properties can be understood in the context of constraint-satisfaction problems.
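As a rough illustration of the kind of tuning the abstract describes (this is not the authors' algorithm), the sketch below greedily removes links in a linear, resistor-style flow network until the pressure drop across a chosen "target" edge, driven by a unit source, approaches a desired value. All function names, parameters, and the greedy strategy are illustrative assumptions.

```python
import numpy as np

def solve_pressures(n_nodes, edges, conductances, source_edge, q=1.0):
    """Node pressures for a unit flow q injected across source_edge (linear flow network)."""
    K = np.zeros((n_nodes, n_nodes))
    for (i, j), g in zip(edges, conductances):
        K[i, i] += g; K[j, j] += g
        K[i, j] -= g; K[j, i] -= g
    f = np.zeros(n_nodes)
    f[source_edge[0]] += q
    f[source_edge[1]] -= q
    p = np.zeros(n_nodes)
    # Ground node 0 to remove the null space of the graph Laplacian.
    p[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return p

def tune_by_link_removal(n_nodes, edges, conductances, source_edge, target_edge,
                         desired_drop, n_removals=10):
    """Greedily remove the link whose deletion brings the target pressure drop
    closest to desired_drop; repeat n_removals times."""
    g = np.array(conductances, dtype=float)
    for _ in range(n_removals):
        best_k, best_err = None, None
        for k, e in enumerate(edges):
            if g[k] == 0.0 or e in (source_edge, target_edge):
                continue
            trial = g.copy()
            trial[k] = 0.0                      # tentatively remove link k
            try:
                p = solve_pressures(n_nodes, edges, trial, source_edge)
            except np.linalg.LinAlgError:
                continue                        # removal disconnected the network
            err = abs((p[target_edge[0]] - p[target_edge[1]]) - desired_drop)
            if best_err is None or err < best_err:
                best_k, best_err = k, err
        if best_k is None:
            break
        g[best_k] = 0.0                         # commit the most helpful removal
    return g
```

The same greedy loop applies to a mechanical spring network if the Laplacian solve is replaced by a linear-elasticity solve; multiple targets can be handled by summing the errors over all target edges.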

 
NSF-PAR ID: 10083923
Publisher / Repository: Proceedings of the National Academy of Sciences
Journal Name: Proceedings of the National Academy of Sciences
Volume: 116
Issue: 7
ISSN: 0027-8424
Page Range / eLocation ID: p. 2506-2511
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    The field of basal cognition seeks to understand how adaptive, context-specific behavior occurs in non-neural biological systems. Embryogenesis and regeneration require plasticity in many tissue types to achieve structural and functional goals in diverse circumstances. Thus, advances in both evolutionary cell biology and regenerative medicine require an understanding of how non-neural tissues could process information. Neurons evolved from ancient cell types that used bioelectric signaling to perform computation. However, it has not been shown whether or how non-neural bioelectric cell networks can support computation. We generalize connectionist methods to non-neural tissue architectures, showing that a minimal non-neural Bio-Electric Network (BEN) model that utilizes the general principles of bioelectricity (electrodiffusion and gating) can compute. We characterize BEN behaviors ranging from elementary logic gates to pattern detectors, using both fixed and transient inputs to recapitulate various biological scenarios. We characterize the mechanisms of such networks using dynamical-systems and information-theory tools, demonstrating that logic can manifest in bidirectional, continuous, and relatively slow bioelectrical systems, complementing conventional neural-centric architectures. Our results reveal a variety of non-neural decision-making processes as manifestations of general cellular biophysical mechanisms and suggest novel bioengineering approaches to construct functional tissues for regenerative medicine and synthetic biology as well as new machine learning architectures.
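    As a loose, hand-built illustration of the idea that voltage-coupled, non-spiking cells can implement logic (this is not the published BEN model; it omits electrodiffusion, and every parameter value below is an arbitrary assumption), consider an "output" cell coupled to two clamped "input" cells through gap junctions whose conductance is gated by the input cell's voltage:

```python
import numpy as np

V_REST, V_HIGH, V_LOW = -70.0, 0.0, -70.0   # membrane potentials in mV (illustrative)

def gap_g(v_pre, g_max=1.0, v_half=-35.0, slope=5.0):
    """Gap-junction conductance, opened sigmoidally by depolarization of the input cell."""
    return g_max / (1.0 + np.exp(-(v_pre - v_half) / slope))

def simulate_output(v_in1, v_in2, g_leak=1.0, c_m=10.0, dt=0.1, t_max=200.0):
    """Euler-integrate the output cell's voltage with the two input cells clamped."""
    v_out = V_REST
    for _ in range(int(t_max / dt)):
        i_leak = g_leak * (V_REST - v_out)
        i_gap = gap_g(v_in1) * (v_in1 - v_out) + gap_g(v_in2) * (v_in2 - v_out)
        v_out += dt * (i_leak + i_gap) / c_m
    return v_out

def bioelectric_and(a, b, threshold=-30.0):
    """Read out the output cell's steady-state voltage against a fixed threshold."""
    v = simulate_output(V_HIGH if a else V_LOW, V_HIGH if b else V_LOW)
    return v > threshold

for a in (0, 1):
    for b in (0, 1):
        print(a, b, bioelectric_and(a, b))
```

    With these values the output cell settles near −23 mV when both inputs are clamped high and at or below −35 mV otherwise, so a −30 mV readout threshold yields AND-like behavior in a bidirectional, continuous, slow bioelectric circuit.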

     
  2. Recent progress on stretchable, tough dual-dynamic polymer single networks (SN) and interpenetrated networks (IPN) has broadened the potential applications of dynamic polymers. However, the impact of macromolecular structure on the material mechanics remains poorly understood. Here, rapidly exchanging hydrogen bonds and thermoresponsive Diels–Alder bonds were incorporated into molecularly engineered interpenetrated network materials. RAFT polymerization was used to make well-defined polymers with control over macromolecular architecture. The IPN materials were assessed by gel permeation chromatography, differential scanning calorimetry, tensile testing and rheology. The mechanical properties of these IPN materials can be tuned by varying the crosslinker content and chain length. All materials are elastic and have dynamic behavior at both ambient temperature and elevated temperature (90 °C), owing to the presence of the dual dynamic noncovalent and covalent bonds. 100% self-healing recovery was achieved and a maximum stress level of up to 6 MPa was obtained. The data suggested that the material's healing properties are inversely proportional to the content of the crosslinker or the degree of polymerization at both room and elevated temperature. The thermoresponsive crosslinker restricted deformation to some extent in an ambient environment but gave excellent malleability upon heating. The underlying mechanism was explored with computational simulations. Furthermore, a single network material with the same crosslinker content and degree of polymerization as the IPN was made. The SN was substantially weaker than the comparable IPN material.
  3. Neuronal activity propagates through the network during seizures, engaging brain dynamics at multiple scales. Such propagating events can be described through the avalanches framework, which can relate spatiotemporal activity at the microscale with global network properties. Interestingly, propagating avalanches in healthy networks are indicative of critical dynamics, where the network is organized near a phase transition, which optimizes certain computational properties. Some have hypothesized that the pathologic brain dynamics of epileptic seizures are an emergent property of microscale neuronal networks collectively driving the brain away from criticality. Demonstrating this would provide a unifying mechanism linking microscale spatiotemporal activity with emergent brain dysfunction during seizures. Here, we investigated the effect of drug-induced seizures on critical avalanche dynamics, using in vivo whole-brain two-photon imaging of GCaMP6s larval zebrafish (males and females) at single-neuron resolution. We demonstrate that single-neuron activity across the whole brain exhibits a loss of critical statistics during seizures, suggesting that microscale activity collectively drives macroscale dynamics away from criticality. We also construct spiking network models at the scale of the larval zebrafish brain to demonstrate that only densely connected networks can drive brain-wide seizure dynamics away from criticality. Importantly, such dense networks also disrupt the optimal computational capacities of critical networks, leading to chaotic dynamics, impaired network response properties and sticky states, thus helping to explain functional impairments during seizures. This study bridges the gap between microscale neuronal activity and emergent macroscale dynamics and cognitive dysfunction during seizures.

    SIGNIFICANCE STATEMENT: Epileptic seizures are debilitating and impair normal brain function. It is unclear how the coordinated behavior of neurons collectively impairs brain function during seizures. To investigate this, we perform fluorescence microscopy in larval zebrafish, which allows for the recording of whole-brain activity at single-neuron resolution. Using techniques from physics, we show that neuronal activity during seizures drives the brain away from criticality, a regime that enables both high and low activity states, into an inflexible regime that drives high activity states. Importantly, this change is caused by more connections in the network, which we show disrupts the ability of the brain to respond appropriately to its environment. Therefore, we identify key neuronal network mechanisms driving seizures and concurrent cognitive dysfunction.
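    The avalanche analysis referred to above can be illustrated with a short, hedged sketch (this is not the authors' pipeline; the binarization, the run-based avalanche definition, and the continuous maximum-likelihood estimator are illustrative assumptions): avalanches are read off a binarized raster as runs of consecutive active time bins, and an exponent is fitted to their size distribution.

```python
import numpy as np

def avalanche_sizes(raster):
    """raster: (n_neurons, n_timebins) binary array. An avalanche is a run of
    consecutive time bins with at least one active neuron; its size is the
    total number of events in that run."""
    activity = raster.sum(axis=0)
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

def powerlaw_exponent(sizes, s_min=1):
    """Continuous maximum-likelihood estimate of tau in P(s) ~ s^(-tau)."""
    s = sizes[sizes >= s_min].astype(float)
    return 1.0 + len(s) / np.sum(np.log(s / s_min))
```

    Near criticality the size distribution is expected to be heavy-tailed with an exponent close to 3/2; a departure from critical statistics during seizures would appear as a change in the fitted exponent and in the overall shape of the distribution.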

     
  4. There are now many examples of single-molecule rotors, motors, and switches in the literature that, when driven by photons, electrons, or chemical reactions, exhibit well-defined motions. As a step toward using these single-molecule devices to perform useful functions, one must understand how they interact with their environment and quantify their ability to perform work on it. Using a single-molecule rotary switch, we examine the transfer of electrical energy, delivered via electron tunneling, to mechanical motion and measure the forces the switch experiences with a noncontact q-plus atomic force microscope. Action spectra reveal that the molecular switch has two stable states and can be excited resonantly between them at a bias of 100 mV via a one-electron inelastic tunneling process, which corresponds to an energy input of 16 zJ. While the electrically induced switching events are stochastic and no net work is done on the cantilever, by measuring the forces between the molecular switch and the AFM cantilever we can derive the maximum hypothetical work the switch could perform during a single switching event: ∼55 meV (8.9 zJ), which translates to a hypothetical efficiency of ∼55% per individual inelastic tunneling electron-induced switching event. When considering the total electrical energy input, this drops to 1 × 10⁻⁷% due to elastic tunneling events that dominate the tunneling current. However, this approach constitutes a general method for quantifying and comparing the energy input and output of molecular-mechanical devices.
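    The quoted figures are mutually consistent, as this back-of-the-envelope check (illustrative only, not code from the paper) shows:

```python
# One electron crossing a 100 mV bias delivers e * 0.100 V of energy; the quoted
# maximum work is ~55 meV per event. Values are rounded, illustrative checks.
E_CHARGE = 1.602e-19  # elementary charge in coulombs

energy_in = E_CHARGE * 0.100      # joules per inelastic tunneling electron
work_out = E_CHARGE * 0.055       # joules of maximum hypothetical work per switch

print(f"energy in:  {energy_in * 1e21:.1f} zJ")   # ~16.0 zJ
print(f"work out:   {work_out * 1e21:.1f} zJ")    # ~8.8 zJ
print(f"efficiency: {work_out / energy_in:.0%}")  # ~55%
```

    The small difference from the quoted 8.9 zJ comes from rounding the ∼55 meV figure.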
  5. Obeid, I.; Selesnik, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process.

    Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and a GPU should produce identical results; and (3) a job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system.

    Equally important, since many results in deep learning research depend on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms, since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.
The overall impact of all of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since it requires multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, this adds an element of randomness to the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but these algorithms are non-deterministic [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a hint of how our model is performing per experiment and whether the changes we make are effective.

In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment, such as training on GPUs, allowing multi-threading on CPUs, using certain layers, etc.

To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise, even though using lower precision technically increases the amount of numerical noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
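A minimal sketch of the seeding, determinism, and precision controls described above, written against current TensorFlow 2.x APIs (the experiments in the poster used TensorFlow 1.x, where the equivalent calls differ; whether these settings remove all nondeterminism depends on the TensorFlow and cuDNN versions in use):

```python
import os
import random

import numpy as np
import tensorflow as tf

SEED = 1337  # arbitrary fixed seed

# Seed every RNG that can influence initialization, shuffling, and dropout.
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)

# Request deterministic op implementations where cuDNN offers them
# (available in recent TF releases; older versions used the
# TF_DETERMINISTIC_OPS environment variable instead).
tf.config.experimental.enable_op_determinism()

# Keep computations in 32-bit floats, as discussed above.
tf.keras.backend.set_floatx("float32")

# Fix the data order: shuffle once with a seeded RNG and reuse that order
# across experiments instead of reshuffling every run.
def fixed_order(num_examples, seed=SEED):
    order = np.arange(num_examples)
    np.random.default_rng(seed).shuffle(order)
    return order
```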