Title: Choose your tools carefully: a comparative evaluation of deterministic vs. stochastic and binary vs. analog neuron models for implementing emerging computing paradigms
Neuromorphic computing, commonly understood as a computing approach built upon neurons, synapses, and their dynamics rather than Boolean gates, is gaining significant mindshare due to its direct applicability to current and future computing problems such as smart sensing, smart devices, self-hosted and self-contained devices, and artificial intelligence (AI) applications. In a largely software-defined implementation of neuromorphic computing, it is possible to throw enormous computational power at a problem, or to optimize models and networks, depending on the specific nature of the computational task. A hardware-based approach, however, requires identifying well-suited neuronal and synaptic models to achieve high functional and energy efficiency, a prime concern in size, weight, and power (SWaP) constrained environments. In this work, we study the characteristics of hardware neuron models (namely, inference errors, generalizability and robustness, practical implementability, and memory capacity) that have been proposed and demonstrated using a variety of emerging nanomaterial-based physical devices, in order to quantify the performance of such neurons on classes of problems that are of great importance to real-time signal processing tasks in the context of reservoir computing. We find that the answer to which neuron to use for which application depends on the particulars of the application requirements and constraints themselves; i.e., we need not only a hammer but all sorts of tools in our tool chest for efficient, high-quality neuromorphic computing.
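To make the binary-vs.-analog comparison concrete, the sketch below contrasts an analog (tanh) reservoir neuron with a binary (sign-thresholded) one inside a small echo-state-style reservoir on a toy delayed-recall task, with a linear readout fit by least squares. It is only a minimal illustration under assumed parameters (reservoir size, spectral radius, recall delay); the task, names, and values are illustrative and are not the neuron models or benchmarks evaluated in the paper.

    # Minimal echo-state-style reservoir contrasting analog (tanh) and
    # binary (sign-thresholded) reservoir neurons on a toy memory task.
    # All parameters here are illustrative assumptions, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 100, 1000                                  # reservoir size, sequence length
    u = rng.uniform(-0.5, 0.5, T)                     # random input stream
    W_in = rng.uniform(-1, 1, N)                      # input weights
    W = rng.normal(0, 1, (N, N))                      # recurrent weights
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

    def run_reservoir(nonlinearity):
        """Drive the reservoir with u and collect its states over time."""
        x = np.zeros(N)
        states = np.empty((T, N))
        for t in range(T):
            x = nonlinearity(W @ x + W_in * u[t])
            states[t] = x
        return states

    analog = run_reservoir(np.tanh)    # analog neuron: continuous-valued state
    binary = run_reservoir(np.sign)    # binary neuron: thresholded +/-1 state

    # Linear readout trained to recall the input from `delay` steps earlier.
    delay = 5
    y = u[:-delay]
    for name, S in (("analog", analog), ("binary", binary)):
        X = S[delay:]
        w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
        mse = np.mean((X @ w_out - y) ** 2)
        print(f"{name} reservoir, delay-{delay} recall MSE: {mse:.4f}")

Running the sketch shows how the choice of neuron nonlinearity alone changes the delayed-recall error for the same reservoir, which is the kind of application-dependent trade-off the abstract points to.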
Award ID(s):
1939012
PAR ID:
10489533
Author(s) / Creator(s):
Publisher / Repository:
Frontiers in Nanotechnology
Date Published:
Journal Name:
Frontiers in Nanotechnology
Volume:
5
ISSN:
2673-3013
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Driven by the expansion of the Internet of Things (IoT) and Cyber-Physical Systems (CPS), there is an increasing demand to process streams of temporal data on embedded devices with limited energy and power resources. Among potential solutions, neuromorphic computing with spiking neural networks (SNNs), which mimic the behavior of the brain, has recently been placed at the forefront. Encoding information into sparse and distributed spike events enables low-power implementations, and the complex spatio-temporal dynamics of synapses and neurons enable SNNs to detect temporal patterns. However, most existing hardware SNN implementations use simplified neuron and synapse models that ignore synapse dynamics, which are critical for temporal pattern detection and other applications requiring temporal dynamics. To adopt a more realistic synapse model on a neuromorphic platform, its significant computation overhead must be addressed. In this work, we propose an FPGA-based SNN with biologically realistic neuron and synapse models for temporal information processing. An encoding scheme to convert continuous real-valued information into sparse spike events is presented (a minimal sketch of such an encoding feeding a spiking neuron appears after this list). The event-driven implementation of the synapse dynamics model, and its hardware design optimized to exploit this sparsity, are also presented. Finally, we train the SNN on various temporal pattern-learning tasks and evaluate its performance and efficiency against rate-based models and artificial neural networks on different embedded platforms. Experiments show that our work achieves a 10X speedup and a 196X gain in energy efficiency compared with a GPU.
  2. Artificial neuronal devices are critical building blocks of neuromorphic computing systems and are currently the subject of intense research, motivated by application needs from new computing technology and by more realistic brain emulation. Researchers have proposed a range of device concepts that can mimic neuronal dynamics and functions. Although the switching physics and device structures of these artificial neurons are largely different, their behaviors can be described by several neuron models in a more unified manner. In this paper, reports of artificial neuronal devices based on emerging volatile switching materials are reviewed from the perspective of the demonstrated neuron models, with a focus on the neuronal functions implemented in these devices and the exploitation of these functions for computational and sensing applications. Furthermore, the neuroscience inspirations and engineering methods for enriching the neuronal dynamics that remain to be implemented in artificial neuronal devices and networks, toward realizing the full functionalities of biological neurons, are discussed.
  3. While neuromorphic computing architectures based on Spiking Neural Networks (SNNs) are increasingly gaining interest as a pathway toward bio-plausible machine learning, attention is still focused on computational units like the neuron and synapse. Shifting from this neuro-synaptic perspective, this paper explores the self-repair role of glial cells, in particular astrocytes. The work investigates stronger correlations with astrocyte computational neuroscience models to develop macro-models with a higher degree of bio-fidelity that accurately capture the dynamic behavior of the self-repair process. Hardware-software co-design analysis reveals that bio-morphic astrocytic regulation has the potential to self-repair hardware-realistic faults in neuromorphic hardware systems, with significantly better accuracy and repair convergence for unsupervised learning tasks on the MNIST and F-MNIST datasets. Our implementation source code and trained models are available at https://github.com/NeuroCompLab-psu/Astromorphic_Self_Repair.
  4. Progress in hardware and algorithms for artificial intelligence (AI) has ushered in large machine learning models and various applications impacting our everyday lives. However, today's AI, mainly artificial neural networks, still cannot compete with human brains because of two major issues: the high energy consumption of the hardware running AI models and the lack of ability to generalize knowledge and self-adapt to changes. Neuromorphic systems built upon emerging devices, for instance, memristors, provide a promising path to address these issues. Although innovative memristor devices and circuit designs have been proposed for neuromorphic computing and applied to different proof-of-concept applications, there is still a long way to go to build large-scale low-power memristor-based neuromorphic systems that can bridge the gap between AI and biological brains. This Perspective summarizes the progress and challenges from memristor devices to neuromorphic systems and proposes possible directions for neuromorphic system implementation based on memristive devices. 
  5. The explosion of “big data” applications imposes severe challenges of speed and scalability on traditional computer systems. As the performance of traditional von Neumann machines is greatly hindered by the growing performance gap between CPU and memory (known as the “memory wall”), neuromorphic computing systems have gained considerable attention. This biologically plausible computing paradigm carries out computation by emulating the charging/discharging process of neuron and synapse potentials. Its unique spike-domain information encoding enables asynchronous, event-driven computation and communication, and hence has the potential for very high energy efficiency. This survey reviews computing models and hardware platforms of existing neuromorphic computing systems. Neuron and synapse models are first introduced, followed by a discussion of how they affect hardware design. Case studies of several representative hardware platforms, including their architectures and software ecosystems, are then presented. Lastly, we present several future research directions.
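Several of the related works above (items 1, 2, and 5) describe the same basic pipeline: a continuous signal is encoded into sparse spike events, which then charge and discharge synapse and neuron potentials. The sketch below is a minimal illustration of that pipeline using Poisson rate coding, an exponential synapse, and a leaky integrate-and-fire neuron; all names and parameter values are illustrative assumptions and are not taken from the cited works.

    # Minimal spike pipeline: Poisson rate coding -> exponential synapse ->
    # leaky integrate-and-fire neuron. Illustrative values only.
    import numpy as np

    rng = np.random.default_rng(0)

    def poisson_encode(value, steps=200, max_rate=0.3):
        """Map a value in [0, 1] to a sparse binary spike train (rate coding)."""
        return rng.random(steps) < np.clip(value, 0.0, 1.0) * max_rate

    def lif_with_synapse(spikes_in, dt=1.0, tau_syn=5.0, tau_mem=20.0,
                         w=0.6, v_th=1.0, v_reset=0.0):
        """Exponential synapse feeding a leaky integrate-and-fire neuron."""
        g, v = 0.0, v_reset
        spikes_out = np.zeros(len(spikes_in), dtype=bool)
        for t, s in enumerate(spikes_in):
            g += -dt / tau_syn * g + w * s      # synapse charges on a spike, then decays
            v += dt / tau_mem * (-v) + dt * g   # membrane integrates synaptic current, leaks
            if v >= v_th:                       # threshold crossing -> output spike, reset
                spikes_out[t] = True
                v = v_reset
        return spikes_out

    for value in (0.2, 0.8):
        spikes_in = poisson_encode(value)
        spikes_out = lif_with_synapse(spikes_in)
        print(f"input {value:.1f}: {spikes_in.sum():3d} input spikes -> "
              f"{spikes_out.sum():3d} output spikes")

Larger input values produce denser input spike trains and therefore more output spikes, while sparse inputs leave the synapse and neuron mostly idle, which is the property that event-driven hardware implementations exploit for energy efficiency.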