Title: Composing recurrent spiking neural networks using locally-recurrent motifs and risk-mitigating architectural optimization
In neural circuits, recurrent connectivity plays a crucial role in network function and stability. However, existing recurrent spiking neural networks (RSNNs) are often constructed with random connections and without optimization. While RSNNs can produce rich dynamics that are critical for memory formation and learning, systematic architectural optimization of RSNNs remains an open challenge. We aim to enable systematic design of large RSNNs via a new scalable RSNN architecture and automated architectural optimization. We compose RSNNs based on a layer architecture called the Sparsely-Connected Recurrent Motif Layer (SC-ML), which consists of multiple small recurrent motifs wired together by sparse lateral connections. The small size of the motifs and the sparse inter-motif connectivity make this architecture scalable to large network sizes. We further propose a method called Hybrid Risk-Mitigating Architectural Search (HRMAS) to systematically optimize the topology of the proposed recurrent motifs and the SC-ML layer architecture. HRMAS is an alternating two-step optimization process in which the risk of network instability and performance degradation caused by architectural changes is mitigated by a novel biologically inspired "self-repairing" mechanism based on intrinsic plasticity. The intrinsic plasticity is applied in the second step of each HRMAS iteration and acts as unsupervised fast self-adaptation to the structural and synaptic weight modifications introduced by the first step during the RSNN architectural "evolution." We demonstrate that the proposed automatic architecture optimization leads to significant performance gains over existing manually designed RSNNs: we achieve 96.44% on TI46-Alpha, 94.66% on N-TIDIGITS, 90.28% on DVS-Gesture, and 98.72% on N-MNIST. To the best of the authors' knowledge, this is the first work to perform systematic architecture optimization on RSNNs.
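The SC-ML layer can be pictured as a recurrent weight matrix that is dense inside small diagonal blocks (the motifs) and only sparsely populated elsewhere (the lateral inter-motif connections). The sketch below, written against PyTorch, builds such a connectivity mask; the function name `build_scml_mask`, the motif size, and the lateral density are illustrative choices, not values from the paper.

```python
import torch

def build_scml_mask(num_neurons, motif_size, lateral_density=0.05, seed=0):
    """Connectivity mask for an SC-ML-style recurrent layer (illustrative sketch).

    Neurons are grouped into small recurrent motifs (dense blocks on the
    diagonal); motifs are wired together by sparse lateral connections
    (off-diagonal entries kept with probability `lateral_density`).
    """
    g = torch.Generator().manual_seed(seed)
    mask = torch.zeros(num_neurons, num_neurons)

    # Dense recurrence inside each motif (block-diagonal structure).
    for start in range(0, num_neurons, motif_size):
        end = min(start + motif_size, num_neurons)
        mask[start:end, start:end] = 1.0

    # Sparse lateral connections between motifs.
    lateral = (torch.rand(num_neurons, num_neurons, generator=g) < lateral_density).float()
    mask = torch.clamp(mask + lateral, max=1.0)
    mask.fill_diagonal_(0.0)  # no self-connections in this sketch
    return mask

# Usage: multiply the mask into the recurrent weights at every forward pass,
# so training cannot create connections outside the motif/lateral pattern.
num_neurons, motif_size = 256, 8
mask = build_scml_mask(num_neurons, motif_size)
w_rec = torch.nn.Parameter(0.01 * torch.randn(num_neurons, num_neurons))
effective_w = w_rec * mask
```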
Award ID(s):
2310170 1948201
PAR ID:
10538384
Author(s) / Creator(s):
; ;
Publisher / Repository:
Frontiers in Neuroscience
Date Published:
Journal Name:
Frontiers in Neuroscience
Volume:
18
ISSN:
1662-453X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. As an important class of spiking neural networks (SNNs), recurrent spiking neural networks (RSNNs) possess great computational power and have been widely used for processing sequential data like audio and text. However, most RSNNs suffer from two problems. First, due to the lack of architectural guidance, random recurrent connectivity is often adopted, which does not guarantee good performance. Second, training of RSNNs is in general challenging, bottlenecking achievable model accuracy. To address these problems, we propose a new type of RSNN, skip-connected self-recurrent SNNs (ScSr-SNNs). Recurrence in ScSr-SNNs is introduced by adding self-recurrent connections to spiking neurons. SNNs with self-recurrent connections can realize recurrent behaviors similar to those of more complex RSNNs, while the error gradients can be more straightforwardly calculated due to the mostly feedforward nature of the network. The network dynamics are further enriched by skip connections between nonadjacent layers. Moreover, we propose a new backpropagation (BP) method, backpropagated intrinsic plasticity (BIP), to further boost the performance of ScSr-SNNs by training intrinsic model parameters. Unlike standard intrinsic plasticity rules that adjust the neuron's intrinsic parameters according to neuronal activity, the proposed BIP method optimizes intrinsic parameters based on the backpropagated error gradient of a well-defined global loss function, in addition to synaptic weight training. On challenging speech, neuromorphic speech, and neuromorphic image datasets, the proposed ScSr-SNNs boost performance by up to 2.85% compared with other types of RSNNs trained by state-of-the-art BP methods.
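    As a rough illustration of the ScSr idea, the sketch below adds a trainable per-neuron self-recurrent weight to a discrete-time leaky integrate-and-fire layer. The time constant, threshold, initial self-weight, and the hard (non-surrogate) spike function are assumptions made for brevity, not the paper's training setup.

```python
import torch

class SelfRecurrentLIF(torch.nn.Module):
    """LIF layer with per-neuron self-recurrent connections (illustrative sketch)."""

    def __init__(self, size, tau=0.9, threshold=1.0):
        super().__init__()
        self.self_w = torch.nn.Parameter(torch.full((size,), 0.5))  # self-recurrence strength
        self.tau, self.threshold = tau, threshold

    def forward(self, inputs):                     # inputs: [time, batch, size]
        v = torch.zeros_like(inputs[0])            # membrane potential
        s_prev = torch.zeros_like(inputs[0])       # previous spikes
        spikes = []
        for x_t in inputs:
            # Membrane update: leak + feedforward input + the neuron's own previous spike.
            v = self.tau * v + x_t + self.self_w * s_prev
            s = (v >= self.threshold).float()      # hard threshold (no surrogate gradient here)
            v = v * (1.0 - s)                      # reset after firing
            s_prev = s
            spikes.append(s)
        return torch.stack(spikes)
```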
  2. In biological brains, recurrent connections play a crucial role in cortical computation, modulation of network dynamics, and communication. However, in recurrent spiking neural networks (SNNs), recurrence is mostly constructed by random connections. How excitatory and inhibitory recurrent connections affect network responses, and what kinds of connectivity benefit learning performance, is still obscure. In this work, we propose a novel recurrent structure called the Laterally-Inhibited Self-Recurrent Unit (LISR), which consists of one excitatory neuron with a self-recurrent connection wired together with an inhibitory neuron through excitatory and inhibitory synapses. The self-recurrent connection of the excitatory neuron mitigates the information loss caused by the firing-and-resetting mechanism and maintains long-term neuronal memory. The lateral inhibition from the inhibitory neuron to the corresponding excitatory neuron, on the one hand, adjusts the firing activity of the latter; on the other hand, it acts as a forget gate that clears the memory of the excitatory neuron. Based on speech and image datasets commonly used in neuromorphic computing, RSNNs based on the proposed LISR improve performance significantly, by up to 9.26%, over feedforward SNNs trained by a state-of-the-art backpropagation method with similar computational costs.
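    A minimal sketch of one LISR update, assuming a discrete-time LIF model: the excitatory neuron receives its own previous spike through a self-recurrent weight and is inhibited by its paired inhibitory neuron, which is modelled here as an instantaneous threshold on the E-to-I drive. All names and constants are illustrative, not taken from the paper.

```python
import torch

def lisr_step(v_e, s_e_prev, s_i_prev, x_t, w_self, w_ei, w_ie, tau=0.9, theta=1.0):
    """One update of a Laterally-Inhibited Self-Recurrent unit (illustrative only).

    v_e      : membrane potential of the excitatory neuron
    s_e_prev : previous spike of the excitatory neuron (self-recurrence input)
    s_i_prev : previous spike of the paired inhibitory neuron (lateral inhibition)
    """
    # Excitatory neuron: input + its own previous spike - lateral inhibition
    # (the inhibition plays a forget-gate-like role, clearing stored activity).
    v_e = tau * v_e + x_t + w_self * s_e_prev - w_ie * s_i_prev
    s_e = (v_e >= theta).float()
    v_e = v_e * (1.0 - s_e)                        # firing-and-resetting

    # Inhibitory neuron driven by the excitatory spike through an E->I synapse
    # (treated as instantaneous here for brevity).
    s_i = (w_ei * s_e >= theta).float()
    return v_e, s_e, s_i
```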
  3. Many details are known about microcircuitry in visual cortices. For example, neurons have supralinear activation functions, they are either excitatory (E) or inhibitory (I), connection strengths fall off with distance, and the output cells of an area are excitatory. This circuitry is important, as it is believed to support core functions such as normalization and surround suppression. Yet multi-area models of the visual processing stream do not usually include these details. Here, we introduce known features of recurrent processing into the architecture of a convolutional neural network and observe how connectivity and activity change as a result. We find that certain E-I differences found in data emerge in the models, though the details depend on which architectural elements are included. We also compare the representations learned by these models to data, and we analyze the learned weight structures to assess the nature of the neural interactions.
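    Two of the architectural ingredients listed above can be sketched compactly: the recurrent convolutional layer below constrains each unit's outgoing weights to a single sign (E or I, i.e. Dale's law) and uses a supralinear power-law activation. The E/I channel split, exponent, and number of recurrent steps are assumptions for illustration, not the models studied in the paper.

```python
import torch
import torch.nn.functional as F

class EIRecurrentConv(torch.nn.Module):
    """Recurrent conv layer with sign-constrained (E/I) units and supralinear activation."""

    def __init__(self, channels, n_excit, kernel_size=3, power=1.5, steps=4):
        super().__init__()
        self.w = torch.nn.Parameter(0.01 * torch.randn(channels, channels, kernel_size, kernel_size))
        sign = torch.ones(channels)
        sign[n_excit:] = -1.0                      # first n_excit channels excitatory, rest inhibitory
        self.register_buffer("sign", sign.view(1, channels, 1, 1))
        self.power, self.steps, self.pad = power, steps, kernel_size // 2

    def forward(self, x):                          # x: [batch, channels, H, W]
        r = torch.zeros_like(x)
        for _ in range(self.steps):
            # Dale's law: all outgoing weights of an input channel share one sign.
            w_signed = torch.abs(self.w) * self.sign
            drive = x + F.conv2d(r, w_signed, padding=self.pad)
            r = torch.clamp(drive, min=0.0) ** self.power   # supralinear (power-law) activation
        return r
```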
  4. We describe a sparse coding model of visual cortex that encodes image transformations in an equivariant and hierarchical manner. The model consists of a group-equivariant convolutional layer with internal recurrent connections that implement sparse coding through neural population attractor dynamics, consistent with the architecture of visual cortex. The layers can be stacked hierarchically by introducing recurrent connections between them. The hierarchical structure enables rich bottom-up and top-down information flows, hypothesized to underlie the visual system’s ability for perceptual inference. The model’s equivariant representations are demonstrated on time-varying visual scenes. 
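    Sparse coding through recurrent population dynamics is commonly implemented with locally-competitive (LCA-style) updates, in which units are driven by the stimulus and inhibit one another through their dictionary overlaps. The sketch below shows only that generic mechanism; it is not the paper's group-equivariant, hierarchical model, and all constants are assumptions.

```python
import torch

def recurrent_sparse_code(x, dictionary, steps=100, lr=0.1, lam=0.2):
    """Sparse coding via recurrent competitive dynamics (generic LCA-style sketch).

    x          : input batch, shape [batch, input_dim]
    dictionary : shape [num_codes, input_dim], rows assumed unit-norm
    """
    gram = dictionary @ dictionary.T               # lateral (recurrent) interactions
    drive = x @ dictionary.T                       # feedforward drive
    u = torch.zeros_like(drive)                    # membrane-like internal state
    eye = torch.eye(gram.shape[0], device=gram.device)
    for _ in range(steps):
        a = torch.relu(u - lam)                    # thresholded (sparse) activations
        # Units are driven by the input and inhibit each other in proportion to
        # dictionary overlap; the competition settles into a sparse code.
        u = u + lr * (drive - u - a @ (gram - eye))
    return torch.relu(u - lam)
```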
  5. Across animal species, dopamine-operated memory systems comprise anatomically segregated, functionally diverse subsystems. Although individual subsystems could operate independently to support distinct types of memory, the logical interplay between subsystems is expected to enable more complex memory processing by allowing existing memory to influence future learning. Recent comprehensive ultrastructural analysis of the Drosophila mushroom body revealed intricate networks interconnecting the dopamine subsystems—the mushroom body compartments. Here, we review the functions of some of these connections that are beginning to be understood. Memory consolidation is mediated by two different forms of network: A recurrent feedback loop within a compartment maintains sustained dopamine activity required for consolidation, whereas feed-forward connections across compartments allow short-term memory formation in one compartment to open the gate for long-term memory formation in another compartment. Extinction and reversal of aversive memory rely on a similar feed-forward circuit motif that signals omission of punishment as a reward, which triggers plasticity that counteracts the original aversive memory trace. Finally, indirect feed-forward connections from a long-term memory compartment to short-term memory compartments mediate higher-order conditioning. Collectively, these emerging studies indicate that feedback control and hierarchical connectivity allow the dopamine subsystems to work cooperatively to support diverse and complex forms of learning.