Neuromorphic computing, commonly understood as a computing approach built upon neurons, synapses, and their dynamics rather than Boolean gates, is gaining significant mindshare due to its direct applicability to current and future computing problems, such as smart sensing, smart devices, self-hosted and self-contained devices, and artificial intelligence (AI) applications. In a largely software-defined implementation of neuromorphic computing, it is possible to apply enormous computational power or to optimize models and networks depending on the specific nature of the computational tasks. A hardware-based approach, however, requires identifying neuronal and synaptic models well suited to achieving high functional and energy efficiency, a prime concern in size, weight, and power (SWaP) constrained environments. In this work, we study the characteristics of hardware neuron models (namely, inference errors, generalizability and robustness, practical implementability, and memory capacity) that have been proposed and demonstrated using a plethora of physical devices based on emerging nano-materials technologies, to quantify the performance of such neurons on certain classes of problems of great importance in real-time signal-processing tasks in the context of reservoir computing. We find that which neuron to use for which application depends on the particulars of the application's requirements and constraints themselves, i.e., we need not only a hammer but all sorts of tools in our tool chest for high-efficiency, high-quality neuromorphic computing.
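To make the reservoir-computing setting concrete, here is a minimal echo state network sketch in Python. The reservoir size, spectral-radius scaling, toy prediction task, and ridge-regression readout are generic textbook choices, not the hardware neuron models or benchmarks evaluated in the study above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal echo state network (ESN), the canonical reservoir-computing
# setup: a fixed random recurrent "reservoir" plus a trained linear
# readout. All sizes and hyperparameters are illustrative assumptions.
n_in, n_res = 1, 200
w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
w = rng.normal(0.0, 1.0, (n_res, n_res))
w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))   # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(w_in @ np.atleast_1d(u_t) + w @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(2000) * 0.05
u = np.sin(t) + 0.05 * rng.normal(size=t.size)
states = run_reservoir(u[:-1])
target = u[1:]

# Ridge-regression readout: only these output weights are trained.
reg = 1e-6
w_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res),
                        states.T @ target)
pred = states @ w_out
print("train NRMSE:", np.sqrt(np.mean((pred - target) ** 2)) / np.std(target))
```

In a hardware study like the one above, the tanh reservoir units would be replaced by the dynamics of the physical neuron devices; only the linear readout remains software-trained.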
Artificial Neuronal Devices Based on Emerging Materials: Neuronal Dynamics and Applications
Abstract: Artificial neuronal devices are critical building blocks of neuromorphic computing systems and are currently the subject of intense research, motivated by application needs from new computing technology and more realistic brain emulation. Researchers have proposed a range of device concepts that can mimic neuronal dynamics and functions. Although the switching physics and device structures of these artificial neurons are largely different, their behaviors can be described by several neuron models in a more unified manner. In this paper, reports of artificial neuronal devices based on emerging volatile switching materials are reviewed from the perspective of the demonstrated neuron models, with a focus on the neuronal functions implemented in these devices and the exploitation of these functions for computational and sensing applications. Furthermore, the neuroscience inspirations and engineering methods for enriching the neuronal dynamics that remain to be implemented in artificial neuronal devices and networks, toward realizing the full functionalities of biological neurons, are discussed.
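As a concrete example of the unified neuron models mentioned above, here is a minimal leaky integrate-and-fire (LIF) sketch, the model most commonly mapped onto volatile threshold-switching devices. All parameter values are illustrative assumptions, not device measurements from the review.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron: a volatile threshold-switching
# device plays the role of the leaky capacitor plus firing threshold.
dt, T = 0.1, 100.0                  # time step and duration (ms)
tau_m = 10.0                        # membrane time constant (ms)
v_rest, v_th, v_reset = 0.0, 1.0, 0.0
i_ext = 0.15                        # constant input (arbitrary units)

v, spikes = v_rest, []
for n in range(int(T / dt)):
    v += dt * (-(v - v_rest) / tau_m + i_ext)   # leaky integration
    if v >= v_th:                               # threshold switching event
        spikes.append(n * dt)
        v = v_reset                             # volatile relaxation/reset
print(f"firing rate ≈ {1000 * len(spikes) / T:.1f} Hz")
```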
- Award ID(s): 2240407
- PAR ID: 10441924
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Advanced Materials
- ISSN: 0935-9648
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The notion that a neuron transmits the same set of neurotransmitters at all of its post-synaptic connections, typically known as Dale's law, is well supported throughout the majority of the brain and is assumed in almost all theoretical studies investigating the mechanisms for computation in neuronal networks. Dale's law has numerous functional implications in fundamental sensory processing and decision-making tasks, and it plays a key role in the current understanding of the structure-function relationship in the brain. However, since exceptions to Dale's law have been discovered for certain neurons, and because other biological systems with complex network structure incorporate individual units that send both positive and negative feedback signals, we investigate the functional implications of network model dynamics that violate Dale's law by allowing each neuron to send out both excitatory and inhibitory signals to its neighbors. We show how balanced network dynamics, in which large excitatory and inhibitory inputs are dynamically adjusted such that input fluctuations produce irregular firing events, are theoretically preserved for a single population of neurons violating Dale's law. We further leverage this single-population network model in the context of two competing pools of neurons to demonstrate that effective decision-making dynamics are also produced, agreeing with experimental observations of honeybees selecting a food source and with artificial neural networks trained in optimal selection. Through direct comparison with the classical two-population balanced neuronal network, we argue that the one-population network demonstrates more robust balanced activity for systems with fewer computational units, such as honeybee colonies, whereas the two-population network exhibits a more rapid response to temporal variations in network inputs, as required by the brain. We expect this study will shed light on the role of neurons violating Dale's law found in experiments, as well as on shared design principles across biological systems that perform complex computations.
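The balance argument can be illustrated with a toy simulation. Below is a sketch of a one-population binary network in which every neuron's outgoing weights carry both signs (violating Dale's law); the network size, threshold, asynchronous update rule, and external drive are generic illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-population network violating Dale's law: each neuron's outgoing
# weights carry both signs, so every unit sends excitatory AND
# inhibitory signals. Summed excitation and inhibition are each large
# (~O(sqrt(N))) yet cancel on average, leaving O(1) fluctuations that
# drive irregular firing.
N, steps, theta, h_ext = 2000, 20000, 0.5, 0.5
J = rng.choice([1.0, -1.0], size=(N, N)) / np.sqrt(N)
np.fill_diagonal(J, 0.0)

s = (rng.random(N) < 0.5).astype(float)   # binary activity states
exc, inh = [], []
for _ in range(steps):
    i = rng.integers(N)                   # asynchronous random update
    e = J[i].clip(min=0) @ s              # summed excitatory input
    v = J[i].clip(max=0) @ s              # summed inhibitory input
    s[i] = 1.0 if e + v + h_ext > theta else 0.0
    exc.append(e)
    inh.append(-v)

print(f"mean excitation {np.mean(exc):.1f} ≈ mean inhibition "
      f"{np.mean(inh):.1f}; mean activity {s.mean():.2f}")
```

The printed excitation and inhibition are each an order of magnitude larger than the firing threshold, yet their difference fluctuates around zero, which is the signature of balance without Dale's law.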
Abstract: Neuronal cell death and subsequent brain dysfunction are hallmarks of aging and neurodegeneration, but how the nearby healthy neurons (bystanders) respond to the death of their neighbors is not fully understood. In the Drosophila larval neuromuscular system, bystander motor neurons can structurally and functionally compensate for the loss of their neighbors by increasing their terminal bouton number and activity. We term this compensation cross-neuron plasticity, and in this study, we demonstrate that the Drosophila engulfment receptor Draper and the associated kinase Shark are required for cross-neuron plasticity. Overexpression of the Draper-I isoform boosts cross-neuron plasticity, implying that the strength of plasticity correlates with Draper signaling. In addition, we find that functional cross-neuron plasticity can be induced at different developmental stages. Our work uncovers a role for Draper signaling in cross-neuron plasticity and provides insights into how healthy bystander neurons respond to the loss of their neighboring neurons.
There is an increasing need to implement neuromorphic systems that are both energetically and computationally efficient. There is also great interest in using electric elements with memory, memelements, that can implement complex neuronal functions intrinsically. A feature not widely incorporated in neuromorphic systems is history-dependent action potential timing adaptation, which is widely seen in real cells. Previous theoretical work shows that power-law history-dependent spike-time adaptation, seen in several brain areas and species, can be modeled with fractional-order differential equations. Here, we show that fractional-order spiking neurons can be implemented using super-capacitors. The super-capacitors have fractional-order derivative and memcapacitive properties. We implemented two circuits, a leaky integrate-and-fire and a Hodgkin–Huxley circuit. Both circuits show power-law spike-time adaptation and optimal coding properties. The spiking dynamics reproduced previously published computer simulations. However, the fractional-order Hodgkin–Huxley circuit showed novel dynamics consistent with criticality. We compared the responses of this circuit to recordings from neurons in the weakly electric fish, which have previously been shown to perform fractional-order differentiation of their sensory input. The criticality seen in the circuit was confirmed in spontaneous recordings from the live fish. Furthermore, the circuit also predicted long-lasting effects of stimulation that were corroborated experimentally. Our work shows that fractional-order memcapacitors provide intrinsic memory dependence that could allow the implementation of computationally efficient neuromorphic devices. Memcapacitors are static elements that consume less energy than the most widely studied memristors, thus allowing the realization of energetically efficient neuromorphic devices.
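For readers unfamiliar with fractional-order spiking dynamics, the following sketch simulates a fractional leaky integrate-and-fire neuron using the L1 discretization of the Caputo fractional derivative, a standard numerical scheme in the theoretical work this abstract refers to. Parameter values are illustrative assumptions, not the circuit's component values.

```python
import math
import numpy as np

# Fractional-order leaky integrate-and-fire neuron via the L1 scheme
# for the Caputo derivative: D^alpha V = -(V - v_rest)/tau + I.
alpha = 0.6            # fractional order; alpha = 1 recovers classic LIF
dt, T = 0.1, 500.0     # time step and duration (ms)
tau = 10.0             # membrane time constant (ms)
v_rest, v_th, v_reset = -70.0, -50.0, -70.0
i_ext = 3.0            # constant suprathreshold drive

n = int(T / dt)
v = np.full(n + 1, v_rest)
dv = np.zeros(n)       # stored voltage increments V_{k+1} - V_k
spikes = []
gamma = math.gamma(2.0 - alpha)

for N in range(1, n + 1):
    f = -(v[N - 1] - v_rest) / tau + i_ext
    # Power-law voltage memory trace over all past increments: this
    # term produces the history-dependent spike-time adaptation.
    k = np.arange(N - 1)
    w = (N - k) ** (1 - alpha) - (N - 1 - k) ** (1 - alpha)
    memory = np.dot(dv[:N - 1], w)
    v[N] = v[N - 1] + dt**alpha * gamma * f - memory
    if v[N] >= v_th:
        spikes.append(N * dt)
        v[N] = v_reset
    dv[N - 1] = v[N] - v[N - 1]

# Per prior theoretical work, inter-spike intervals lengthen over time.
isi = np.diff(spikes)
print(f"{len(spikes)} spikes; first ISI {isi[0]:.1f} ms, last {isi[-1]:.1f} ms")
```

The super-capacitor circuits in the paper realize this memory trace physically; the software loop above computes it explicitly, at O(N) cost per step.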
Despite the promise of superior efficiency and scalability, real-world deployment of emerging nanoelectronic platforms for brain-inspired computing has been limited thus far, primarily because of inter-device variations and intrinsic non-idealities. In this work, mitigation of these issues is demonstrated by performing learning directly on practical devices through a hardware-in-loop approach, utilizing stochastic neurons based on heavy metal/ferromagnetic spin-orbit torque heterostructures. The probabilistic switching and device-to-device variability of the fabricated devices of various sizes are characterized to showcase the effect of device dimension on the neuronal dynamics and its consequent impact on network-level performance. The efficacy of the hardware-in-loop scheme is illustrated in a deep learning scenario, achieving performance equivalent to software. This work paves the way for future large-scale implementations of neuromorphic hardware and the realization of truly autonomous edge-intelligent devices.
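As a rough software analogue of such a device, the sketch below models a stochastic binary neuron whose firing probability is a sigmoidal function of its input drive, mimicking probabilistic switching. The slope parameter and the averaging procedure are assumptions for illustration, not measured characteristics of the fabricated heterostructures.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_neuron(drive, beta=2.0):
    """Binary stochastic neuron: outputs 1 with a sigmoidal probability
    of the input drive, mimicking the probabilistic switching of a
    spin-orbit-torque device. beta (slope) is an assumed, device-
    dependent parameter."""
    p = 1.0 / (1.0 + np.exp(-beta * drive))
    return (rng.random(np.shape(drive)) < p).astype(float)

# Averaging repeated stochastic reads recovers the underlying sigmoidal
# activation, which is what a hardware-in-loop training scheme can use
# in place of an idealized software neuron.
drive = np.linspace(-3, 3, 7)
samples = np.mean([stochastic_neuron(drive) for _ in range(1000)], axis=0)
print(np.round(samples, 2))
```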