Title: Analyzing Emergence in Biological Neural Networks Using Graph Signal Processing
Biological neural networks offer some of the most striking and complex examples of emergence ever observed in natural or man-made systems. Individually, the behavior of a single neuron is rather simple, yet these basic building blocks are connected through synapses into networks capable of sophisticated functions such as pattern recognition and navigation. The lower-level functionality provided by a given network is combined with that of other networks to produce more sophisticated capabilities. These capabilities manifest emergently at two vastly different, yet interconnected, time scales. At the time scale of neural dynamics, neural networks are responsible for turning noisy external stimuli and internal signals into signals capable of supporting complex computations. A key component in this process is the structure of the network, which itself forms emergently over much longer time scales based on the outputs of its constituent neurons, a process called learning. The analysis and interpretation of the behavior of these interconnected dynamical systems of neurons should therefore account for both the network structure and the collective behavior of the network. The field of graph signal processing (GSP) combines signal processing with network science to study signals defined on irregular network structures. Here, we show that GSP can be a valuable tool in the analysis of emergence in biological neural networks. Beyond purely scientific pursuits, understanding emergence in biological neural networks directly informs the design of more effective artificial neural networks for machine learning and artificial intelligence tasks across domains, and motivates additional design motifs for novel emergent systems of systems.
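To make the GSP toolkit concrete, below is a minimal sketch of a graph Fourier analysis of neural activity, assuming a toy symmetric synaptic-weight matrix; the network, the signal values, and all variable names are illustrative rather than taken from the chapter.

```python
import numpy as np

# Toy symmetric synaptic-weight matrix for a 5-neuron network (illustrative).
W = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

# Combinatorial graph Laplacian: L = D - W.
L = np.diag(W.sum(axis=1)) - W

# Its eigenvectors form the graph Fourier basis; eigenvalues act as frequencies.
eigvals, eigvecs = np.linalg.eigh(L)

# A graph signal: one value per neuron, e.g., instantaneous firing rates.
x = np.array([0.2, 0.9, 0.4, 0.8, 0.1])

# Graph Fourier transform: project the signal onto the Laplacian eigenbasis.
x_hat = eigvecs.T @ x

# Laplacian quadratic form x^T L x measures how much the signal varies
# across strong synaptic connections (small = smooth on the network).
smoothness = x @ L @ x
print("graph frequencies:", eigvals.round(3))
print("GFT coefficients:", x_hat.round(3))
print("smoothness x^T L x:", round(float(smoothness), 3))
```

Quantities like these let the network structure itself, rather than a regular time or space grid, define the notion of frequency against which neural signals are analyzed.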
Award ID(s):
1835279
NSF-PAR ID:
10348181
Author(s) / Creator(s):
Editor(s):
Rainey, Larry B.; Holland, O. Thomas
Date Published:
Journal Name:
Emergent Behavior in System of Systems Engineering
Page Range / eLocation ID:
171 - 192
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Neurons in real brains are complex computational units, capable of input-specific damping, inter-trial memory, and context-dependent signal processing. Artificial neurons, on the other hand, are usually implemented as simple weighted sums. Here we explore whether increasing the computational power of individual neurons can yield more powerful neural networks. Specifically, we introduce Deep Artificial Neurons (DANs): small neural networks with shared, learnable parameters embedded within a larger network. DANs act as filters between nodes in the network; namely, they receive vectorized inputs from multiple neurons in the previous layer, condense these signals into a single output, then send this processed signal to the neurons in the subsequent layer. We demonstrate that it is possible to meta-learn shared parameters for the various DANs in the network in order to facilitate continual and transfer learning during deployment. Specifically, we present experimental results on (1) incremental non-linear regression tasks and (2) unsupervised class-incremental image reconstruction, which show that DANs allow a single network to update its synapses (i.e., regular weights) over time with minimal forgetting. Notably, our approach uses standard backpropagation, does not require experience replay, and does not need separate wake/sleep phases.
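As a rough sketch of the DAN idea, the snippet below embeds one small shared network between layers of a larger one, assuming a PyTorch-style implementation; the layer widths, the per-node input vector size, and all names are illustrative, since the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class DeepArtificialNeuron(nn.Module):
    """Small MLP replacing the usual scalar activation: it condenses a
    vector of incoming signals into a single output value."""
    def __init__(self, in_dim: int, hidden: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):            # x: (batch, ..., in_dim)
        return self.net(x)           # (batch, ..., 1)

class DANLayer(nn.Module):
    """Layer whose nodes all share one DAN's parameters. Regular synaptic
    weights feed each node a vector; the shared DAN condenses it."""
    def __init__(self, in_dim: int, out_dim: int, vec: int = 4):
        super().__init__()
        self.synapses = nn.Linear(in_dim, out_dim * vec)  # plastic weights
        self.dan = DeepArtificialNeuron(vec)              # shared, meta-learned
        self.out_dim, self.vec = out_dim, vec

    def forward(self, x):
        z = self.synapses(x).view(-1, self.out_dim, self.vec)
        return self.dan(z).squeeze(-1)                    # (batch, out_dim)

layer = DANLayer(in_dim=10, out_dim=5)
y = layer(torch.randn(32, 10))       # -> torch.Size([32, 5])
```

In the meta-learning setup described above, the DAN parameters would be frozen (or slowly adapted) at deployment while the ordinary synapses keep learning, the intent being to update weights over time with minimal forgetting.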
  2. The notion that a neuron transmits the same set of neurotransmitters at all of its post-synaptic connections, typically known as Dale's law, is well supported throughout the majority of the brain and is assumed in almost all theoretical studies investigating the mechanisms for computation in neuronal networks. Dale's law has numerous functional implications in fundamental sensory processing and decision-making tasks, and it plays a key role in the current understanding of the structure-function relationship in the brain. However, since exceptions to Dale's law have been discovered for certain neurons, and because other biological systems with complex network structure incorporate individual units that send both positive and negative feedback signals, we investigate the functional implications of network model dynamics that violate Dale's law by allowing each neuron to send out both excitatory and inhibitory signals to its neighbors. We show how balanced network dynamics, in which large excitatory and inhibitory inputs are dynamically adjusted such that input fluctuations produce irregular firing events, are theoretically preserved for a single population of neurons violating Dale's law. We further leverage this single-population network model in the context of two competing pools of neurons to demonstrate that effective decision-making dynamics are also produced, agreeing with experimental observations of honeybees selecting a food source and with artificial neural networks trained in optimal selection. Through direct comparison with the classical two-population balanced neuronal network, we argue that the one-population network demonstrates more robust balanced activity for systems with fewer computational units, such as honeybee colonies, whereas the two-population network exhibits a more rapid response to temporal variations in network inputs, as required by the brain. We expect this study will shed light on the role of the neurons violating Dale's law found in experiments, as well as on shared design principles across biological systems that perform complex computations.
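A minimal rate-model sketch of the single-population idea follows: every neuron sends zero-mean, mixed-sign weights, so each unit is simultaneously excitatory and inhibitory. This is not the paper's actual (spiking) model; the dynamics, scaling, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                        # toy network size
J = 1.0 / np.sqrt(N)           # synaptic scaling typical of balanced networks

# Dale's-law-violating connectivity: each neuron's outgoing weights are
# zero-mean and mixed-sign, so excitation and inhibition cancel on average.
W = rng.normal(0.0, J, size=(N, N))
np.fill_diagonal(W, 0.0)

dt, T = 0.1, 2000
x = np.zeros(N)
rates = np.empty((T, N))
for t in range(T):
    drive = W @ np.tanh(x) + rng.normal(0.0, 0.5, N)  # recurrence + noise
    x += dt * (-x + drive)                            # leaky rate dynamics
    rates[t] = np.tanh(x)

# Balance diagnostic: the mean recurrent input stays small relative to its
# fluctuations, so activity is fluctuation-driven and irregular.
inputs = rates[-500:] @ W.T
print("mean input:", round(float(inputs.mean()), 3),
      "| input std:", round(float(inputs.std()), 3))
```

A Dale-compliant version would instead constrain each column of W (one neuron's outgoing weights) to a single sign, which is the comparison the two-population analysis formalizes.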
  3. A worm called Caenorhabditis elegans has a nervous system made up of only 302 neurons, far fewer than the billions of cells that make up our own brains. And yet these few hundred neurons are enough for these worms to detect and respond to their surroundings. C. elegans is thus a popular choice for studying how nervous systems process sensory information and use it to control behavior. Yet most experiments to date have used only simple stimuli, such as taps or pokes, and studied a handful of behaviors, such as whether or not a worm stops moving or backs up. This has limited the conclusions that could be drawn. Liu et al. therefore set out to determine how the worm's nervous system responds to more complex stimuli. These included physical stimuli, such as taps on the side of the dish containing the worms, as well as simulated stimuli. To generate the latter, Liu et al. used a technique called optogenetics to directly activate the neurons in the worm's body that would normally detect information from the senses, simply by shining a light on the worms. Doing so gives the worm the sensation of a physical stimulus, even though none was present. Liu et al. then used mathematics to examine the relationships between the stimuli and the worms' responses. The results confirmed that worms usually respond to simple stimuli, such as taps on the side of their dish, by backing up. But they also revealed more advanced forms of stimulus processing. The worms responded differently to stimuli that increased over time than to those that decreased, for example. A worm's response to a stimulus also varied depending on what the worm was doing at the time. Worms that were in the middle of turns, for instance, ignored stimuli to which they would normally respond. This suggests that an animal's current behavior influences how its nervous system interprets sensory information. The discovery of relatively sophisticated responses to sensory stimuli in C. elegans indicates that even simple nervous systems are capable of flexible sensory processing. This lays a foundation for understanding how neural circuits interpret sensory signals. Building on this work will ultimately help us understand how more complicated nervous systems interpret and respond to the world.
  4. INTRODUCTION A brainwide, synaptic-resolution connectivity map (a connectome) is essential for understanding how the brain generates behavior. However, because of technological constraints, imaging entire brains with electron microscopy (EM) and reconstructing circuits from such datasets have been challenging. To date, complete connectomes have been mapped for only three organisms, each with several hundred brain neurons: the nematode C. elegans, the larva of the sea squirt Ciona intestinalis, and the larva of the marine annelid Platynereis dumerilii. Synapse-resolution circuit diagrams of larger brains, such as those of insects, fish, and mammals, have been approached by considering select subregions in isolation. However, neural computations span spatially dispersed but interconnected brain regions, and understanding any one computation requires the complete brain connectome with all its inputs and outputs.
RATIONALE We therefore generated a connectome of an entire brain of a small insect, the larva of the fruit fly Drosophila melanogaster. This animal displays a rich behavioral repertoire, including learning, value computation, and action selection, and shares homologous brain structures with adult Drosophila and larger insects. Powerful genetic tools are available for selective manipulation or recording of individual neuron types. In this tractable model system, hypotheses about the functional roles of specific neurons and circuit motifs revealed by the connectome can therefore be readily tested.
RESULTS The complete synaptic-resolution connectome of the Drosophila larval brain comprises 3016 neurons and 548,000 synapses. We performed a detailed analysis of the brain circuit architecture, including connection and neuron types, network hubs, and circuit motifs. Most of the brain's in-out hubs (73%) were postsynaptic to the learning center or presynaptic to the dopaminergic neurons that drive learning. We used graph spectral embedding to hierarchically cluster neurons based on synaptic connectivity into 93 neuron types, which were internally consistent based on other features, such as morphology and function. We developed an algorithm to track brainwide signal propagation across polysynaptic pathways and analyzed feedforward (from sensory to output) and feedback pathways, multisensory integration, and cross-hemisphere interactions. We found extensive multisensory integration throughout the brain and multiple interconnected pathways of varying depths from sensory neurons to output neurons, forming a distributed processing network. The brain had a highly recurrent architecture, with 41% of neurons receiving long-range recurrent input. However, recurrence was not evenly distributed and was especially high in areas implicated in learning and action selection. Dopaminergic neurons that drive learning are among the most recurrent neurons in the brain. Many contralateral neurons, which projected across brain hemispheres, were in-out hubs and synapsed onto each other, facilitating extensive interhemispheric communication. We also analyzed interactions between the brain and nerve cord. We found that descending neurons targeted a small fraction of premotor elements that could play important roles in switching between locomotor states. A subset of descending neurons targeted low-order post-sensory interneurons, likely modulating sensory processing.
CONCLUSION The complete brain connectome of the Drosophila larva will be a lasting reference study, providing a basis for a multitude of theoretical and experimental studies of brain function. The approach and computational tools generated in this study will facilitate the analysis of future connectomes. Although the details of brain organization differ across the animal kingdom, many circuit architectures are conserved. As more brain connectomes of other organisms are mapped in the future, comparisons between them will reveal both common, and therefore potentially optimal, circuit architectures and the idiosyncratic ones that underlie behavioral differences between organisms. Some of the architectural features observed in the Drosophila larval brain, including multilayer shortcuts and prominent nested recurrent loops, are found in state-of-the-art artificial neural networks, where they can compensate for a lack of network depth and support arbitrary, task-dependent computations. Such features could therefore increase the brain's computational capacity, overcoming physiological constraints on the number of neurons. Future analysis of similarities and differences between brains and artificial neural networks may help in understanding brain computational principles and perhaps inspire new machine learning architectures.
Figure caption: The connectome of the Drosophila larval brain. The morphologies of all brain neurons, reconstructed from a synapse-resolution EM volume, and the synaptic connectivity matrix of an entire brain. This connectivity information was used to hierarchically cluster all brain neurons into 93 cell types, which were internally consistent based on morphology and known function.
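A rough sketch of the spectral-embedding-and-clustering step follows, assuming an adjacency spectral embedding via SVD on a stand-in random connectivity matrix; the authors' actual pipeline, the real 3016-neuron matrix, and their choice of 93 types are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Stand-in directed synaptic-count matrix (the real one is 3016 x 3016).
A = rng.poisson(0.05, size=(300, 300)).astype(float)

# Adjacency spectral embedding: keep d leading singular dimensions for both
# outgoing (rows) and incoming (columns) connectivity profiles.
d = 8
U, S, Vt = np.linalg.svd(A)
embedding = np.hstack([U[:, :d] * np.sqrt(S[:d]),
                       Vt[:d].T * np.sqrt(S[:d])])

# Hierarchical clustering of neurons in the embedded space; the cluster
# count here (10) is arbitrary for this toy matrix.
Z = linkage(embedding, method="ward")
labels = fcluster(Z, t=10, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```

Grouping neurons by the similarity of their in- and out-connectivity in this way is what lets a connectome be summarized as a tractable set of cell types.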
  5.
    Abstract: Understanding the mechanisms by which neurons create or suppress connections to enable communication in brain-derived neuronal cultures can inform how learning, cognition, and creative behavior emerge. While prior studies have shown that neuronal cultures possess self-organizing criticality properties, we further demonstrate that in vitro brain-derived neuronal cultures exhibit a self-optimization phenomenon. More precisely, we analyze the multiscale neural growth data obtained from label-free quantitative microscopic imaging experiments and reconstruct the in vitro neuronal culture networks (microscale) and neuronal culture cluster networks (mesoscale). We investigate the structure and evolution of neuronal culture networks and neuronal culture cluster networks by estimating the importance of each network node and the information flow between them. By analyzing the degree, closeness, and betweenness centralities, the node-to-node degree distribution (informing on neuronal interconnection phenomena), the clustering coefficient/transitivity (assessing the "small-world" properties), and the multifractal spectrum, we demonstrate that murine neurons exhibit self-optimizing behavior over time, with topological characteristics distinct from those of existing complex network models. The time-evolving interconnection among murine neurons optimizes the network information flow, network robustness, and degree of self-organization. These findings have important implications for modeling neuronal cultures and, potentially, for the design of biologically inspired artificial intelligence.
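The snippet below sketches the kind of centrality and small-world analysis described above, using networkx on a stand-in small-world graph rather than the microscopy-derived culture networks; the graph model and its parameters are illustrative assumptions.

```python
import networkx as nx

# Stand-in for a reconstructed neuronal culture network; the study builds
# its networks from label-free imaging, not from a random-graph model.
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)

# Node-importance measures analogous to those used in the analysis.
deg = nx.degree_centrality(G)
clo = nx.closeness_centrality(G)
btw = nx.betweenness_centrality(G)

# "Small-world" diagnostics: transitivity and average clustering.
print("transitivity:", round(nx.transitivity(G), 3))
print("avg clustering:", round(nx.average_clustering(G), 3))
print("top hub by betweenness:", max(btw, key=btw.get))
```

Tracking how such measures change across imaging time points is what supports the self-optimization claim: information flow and robustness improve as the culture's topology evolves.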