Title: Experimental Demonstration of Ferroelectric Spiking Neurons for Unsupervised Clustering
We report the first experimental demonstration of ferroelectric field-effect transistor (FEFET) based spiking neurons. A unique feature of the ferroelectric (FE) neuron demonstrated here is the availability of both excitatory and inhibitory input connections in a compact 1T-1FEFET structure, a capability not previously reported for any neuron implementation. Such dual functionality is a key requirement for bio-mimetic neural networks and represents a breakthrough for implementing third-generation spiking neural networks (SNNs), which we also demonstrate here for unsupervised learning and clustering on real-world data for the first time. The key to our demonstration is the careful design of two device-level features: (1) abrupt hysteretic transitions of the FEFET with no stable states in between, and (2) dynamic tunability of the FEFET hysteresis by bias conditions, which enables the inhibition functionality. Experimentally calibrated, multi-domain Preisach-based FEFET models are used to accurately simulate the FE neurons and project their performance at scaled nodes. We also implement an SNN for unsupervised clustering, benchmark the network across analog CMOS and emerging technologies, and observe (1) unification of excitatory and inhibitory neural connections, (2) STDP-based learning, (3) the lowest reported power (3.6 nW) during classification, and (4) a classification accuracy of 93%.
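To make the neuron dynamic concrete, below is a minimal Python sketch of a hysteretic spiking neuron whose effective switching threshold is raised by an inhibitory input, mimicking the bias-tunable hysteresis described above. The single square hysteresis loop and all parameter values are illustrative assumptions; the paper itself relies on an experimentally calibrated, multi-domain Preisach FEFET model.

```python
import numpy as np

class FeNeuron:
    """Toy spiking neuron built on an abrupt hysteretic switch.

    Illustrative only: the hysteresis here is a single square loop
    whose switching threshold is shifted by an inhibitory bias, a
    stand-in for the paper's calibrated multi-domain Preisach model.
    """

    def __init__(self, v_switch=1.0, v_reset=0.0, inh_shift=0.5, leak=0.02):
        self.v = 0.0                # membrane (gate) potential
        self.v_switch = v_switch    # abrupt FE switching threshold
        self.v_reset = v_reset
        self.inh_shift = inh_shift  # how much inhibition widens the loop
        self.leak = leak

    def step(self, exc_in, inh_in):
        """One time step: excitation integrates, inhibition raises the
        effective switching threshold (dynamic hysteresis tuning)."""
        self.v += exc_in - self.leak * self.v
        threshold = self.v_switch + self.inh_shift * inh_in
        if self.v >= threshold:     # abrupt transition, no stable states inside the loop
            self.v = self.v_reset   # polarization resets after the spike
            return 1                # output spike
        return 0

# Example: inhibition suppresses firing for the same excitatory drive.
rng = np.random.default_rng(0)
n = FeNeuron()
spikes_no_inh = sum(n.step(rng.uniform(0, 0.3), 0.0) for _ in range(200))
n = FeNeuron()
spikes_inh = sum(n.step(rng.uniform(0, 0.3), 1.0) for _ in range(200))
print(spikes_no_inh, spikes_inh)    # fewer spikes under inhibition
```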
Award ID(s):
1708615 1718671
PAR ID:
10089025
Date Published:
Journal Name:
2018 IEEE International Electron Devices Meeting (IEDM)
Page Range / eLocation ID:
13.3.1 to 13.3.4
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds: in each round, neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. We first consider a deterministic implementation of a neural timer and show that Θ(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1−δ and should stop firing after at most 2t rounds with probability 1−δ, for some input parameter δ ∈ (0,1). Our key result is a construction of a neural timer with O(log log 1/δ) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Ω(min{log log 1/δ, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks.
  2. In biological brains, recurrent connections play a crucial role in cortical computation, modulation of network dynamics, and communication. However, in recurrent spiking neural networks (RSNNs), recurrence is mostly constructed from random connections. How excitatory and inhibitory recurrent connections affect network responses, and what kinds of connectivity benefit learning performance, remain unclear. In this work, we propose a novel recurrent structure called the Laterally-Inhibited Self-Recurrent Unit (LISR), which consists of one excitatory neuron with a self-recurrent connection, wired together with an inhibitory neuron through excitatory and inhibitory synapses. The self-recurrent connection of the excitatory neuron mitigates the information loss caused by the firing-and-resetting mechanism and maintains long-term neuronal memory. The lateral inhibition from the inhibitory neuron to the corresponding excitatory neuron, on the one hand, adjusts the firing activity of the latter; on the other hand, it acts as a forget gate that clears the memory of the excitatory neuron. On speech and image datasets commonly used in neuromorphic computing, RSNNs based on the proposed LISR improve performance by up to 9.26% over feedforward SNNs trained by a state-of-the-art backpropagation method with similar computational costs.
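As a rough illustration of the LISR wiring, the following Python sketch pairs two simple leaky integrate-and-fire neurons: an excitatory neuron with a self-recurrent connection and an inhibitory partner that laterally inhibits it. The weights, decay factor, and thresholds are placeholder assumptions, not the paper's trained values.

```python
import numpy as np

def lif_step(v, i_in, decay=0.8, v_th=1.0):
    """Leaky integrate-and-fire update; returns (new_v, spike)."""
    v = decay * v + i_in
    if v >= v_th:
        return 0.0, 1          # fire and reset
    return v, 0

def lisr_step(state, x, w_in=1.0, w_self=0.6, w_ei=1.2, w_ie=0.8):
    """One round of a toy LISR cell: the excitatory neuron receives the
    input, its own previous spike (self-recurrence, preserving memory
    lost to firing-and-resetting), and lateral inhibition from the
    paired inhibitory neuron, which is itself driven by the excitatory
    neuron's spike through an excitatory synapse."""
    v_e, v_i, s_e_prev = state
    v_i, s_i = lif_step(v_i, w_ei * s_e_prev)
    v_e, s_e = lif_step(v_e, w_in * x + w_self * s_e_prev - w_ie * s_i)
    return (v_e, v_i, s_e), s_e

# Drive the unit with a random input spike train.
rng = np.random.default_rng(1)
state, out = (0.0, 0.0, 0), []
for _ in range(100):
    state, s = lisr_step(state, rng.binomial(1, 0.4))
    out.append(s)
print("output spikes:", sum(out))
```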
  3. We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds: in each round, neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the great open enigmas of computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Θ(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1−δ and should stop firing after at most 2t rounds with probability 1−δ, for some input parameter δ ∈ (0,1). Our key result is a construction of a neural timer with O(log log 1/δ) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Ω(min{log log 1/δ, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring only a small overhead with respect to the number of neurons and computation time.
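The deterministic Θ(log t) upper bound can be read as maintaining a ⌈log₂ t⌉-bit countdown. The Python sketch below simulates that behaviour directly; it mirrors the counting idea but is not the paper's explicit gate-level construction, and the firing convention (y fires from the spike's own round onward) is our simplification.

```python
import math

def neural_timer(t, spikes):
    """Behavioural sketch of a deterministic neural timer: each input
    spike from x makes the output y fire for the next t rounds.

    State is a ceil(log2(t+1))-bit counter, mirroring the idea that
    Theta(log t) threshold gates suffice.
    """
    bits = max(1, math.ceil(math.log2(t + 1)))  # auxiliary state size
    counter = 0
    y = []
    for x in spikes:
        if x:                       # spike from input neuron x
            counter = t             # reload; value always fits in `bits` bits
        y.append(1 if counter > 0 else 0)
        if counter > 0:
            counter -= 1
    return y

# y fires for t = 4 rounds after each spike of x.
print(neural_timer(4, [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]))
# -> [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0]
```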
  4. A 40 nm, 2.56 mm², 2048-neuron globally asynchronous locally synchronous (GALS) spiking neural network (SNN) chip is presented. For scalability, we allow neurons to specialize as excitatory or inhibitory and apply distance-based pruning to cut communication and memory. An asynchronous router limits the latency to 1.32 ns per hop. The reduced traffic and lower latency allow the input channel to be parallelized, achieving 7.85 GSOP/s at 0.7 V while consuming 5.9 pJ/SOP.
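Distance-based pruning can be pictured as dropping synapses between neurons placed far apart on the chip's 2D grid. The numpy sketch below is a hypothetical software analogue; the layout, pruning radius, and dense weight matrix are illustrative assumptions, not the chip's actual mechanism.

```python
import numpy as np

def distance_prune(weights, coords, radius):
    """Zero out synapses between neurons farther apart than `radius`
    on a 2D grid, cutting router traffic and synapse memory."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.where(d <= radius, weights, 0.0)

rng = np.random.default_rng(0)
n = 64
coords = rng.uniform(0, 8, size=(n, 2))       # illustrative neuron placements
w = rng.normal(size=(n, n))                   # illustrative dense synapses
w_pruned = distance_prune(w, coords, radius=2.0)
print((w_pruned != 0).mean())                 # fraction of synapses kept
```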
  5. The notion that a neuron transmits the same set of neurotransmitters at all of its post-synaptic connections, typically known as Dale's law, is well supported throughout the majority of the brain and is assumed in almost all theoretical studies investigating the mechanisms for computation in neuronal networks. Dale's law has numerous functional implications in fundamental sensory processing and decision-making tasks, and it plays a key role in the current understanding of the structure-function relationship in the brain. However, since exceptions to Dale's law have been discovered for certain neurons, and because other biological systems with complex network structure incorporate individual units that send both positive and negative feedback signals, we investigate the functional implications of network model dynamics that violate Dale's law by allowing each neuron to send out both excitatory and inhibitory signals to its neighbors. We show how balanced network dynamics, in which large excitatory and inhibitory inputs are dynamically adjusted such that input fluctuations produce irregular firing events, are theoretically preserved for a single population of neurons violating Dale's law. We further leverage this single-population network model in the context of two competing pools of neurons to demonstrate that effective decision-making dynamics are also produced, agreeing with experimental observations of honeybees selecting a food source and with artificial neural networks trained in optimal selection. Through direct comparison with the classical two-population balanced neuronal network, we argue that the one-population network demonstrates more robust balanced activity for systems with fewer computational units, such as honeybee colonies, whereas the two-population network exhibits a more rapid response to temporal variations in network inputs, as required by the brain. We expect this study will shed light on the role of neurons violating Dale's law observed in experiments, as well as on shared design principles across biological systems that perform complex computations.
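As a toy illustration of the one-population model, the numpy sketch below simulates a network in which every neuron's outgoing synapses carry both positive and negative weights (sign drawn per synapse, violating Dale's law) under the standard 1/√K balanced scaling. All sizes, scalings, and drives are placeholder assumptions rather than the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 400, 1000, 40             # neurons, rounds, mean in-degree
J = 1.0 / np.sqrt(K)                # balanced-network synaptic scaling

# Every neuron violates Dale's law: outgoing synapse signs are drawn
# per synapse, so each neuron sends both excitation and inhibition.
mask = rng.random((N, N)) < K / N
W = J * rng.choice([-1.0, 1.0], size=(N, N)) * mask
np.fill_diagonal(W, 0.0)

v = rng.uniform(0.0, 1.2, size=N)   # random initial membrane potentials
ext = 0.9                           # external drive just below threshold
spikes = np.zeros((T, N), dtype=int)
for t in range(T):
    fired = v >= 1.0
    spikes[t] = fired
    v[fired] = 0.0                  # fire-and-reset
    # Positive and negative recurrent inputs cancel on average, so
    # threshold crossings are driven by input fluctuations.
    v += 0.1 * (-v + ext + W @ spikes[t])

print("mean firing rate per round:", spikes.mean())
```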