Title: Two-dimensional materials-based probabilistic synapses and reconfigurable neurons for measuring inference uncertainty using Bayesian neural networks
Abstract Artificial neural networks have demonstrated superiority over traditional computing architectures in tasks such as pattern classification and learning. However, they do not measure uncertainty in their predictions, and hence they can make wrong predictions with high confidence, which can be detrimental for many mission-critical applications. In contrast, Bayesian neural networks (BNNs) naturally include such uncertainty in their model, as the weights are represented by probability distributions (e.g., Gaussian distributions). Here we introduce three-terminal memtransistors based on two-dimensional (2D) materials, which can emulate both probabilistic synapses and reconfigurable neurons. The cycle-to-cycle variation in the programming of the 2D memtransistor is exploited to achieve Gaussian random number generator-based synapses, whereas 2D memtransistor-based integrated circuits are used to obtain neurons with hyperbolic tangent and sigmoid activation functions. Finally, memtransistor-based synapses and neurons are combined in a crossbar array architecture to realize a BNN accelerator for a data classification task.
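As a rough illustration of the BNN principle described in the abstract, the minimal sketch below samples every weight from a Gaussian distribution on each forward pass and uses the spread of Monte Carlo outputs as an uncertainty estimate. The class names, layer sizes, and parameter values are illustrative assumptions, not the authors' hardware implementation.

    import numpy as np

    class BayesianLinear:
        """Single layer whose weights are sampled from per-weight Gaussians."""
        def __init__(self, n_in, n_out, seed=0):
            self.rng = np.random.default_rng(seed)
            self.mu = self.rng.normal(0.0, 0.1, size=(n_in, n_out))   # weight means
            self.sigma = np.full((n_in, n_out), 0.05)                 # weight std devs

        def forward(self, x):
            # Every forward pass draws a fresh W ~ N(mu, sigma^2), playing the role
            # of a probabilistic (Gaussian random number generator) synapse array.
            w = self.rng.normal(self.mu, self.sigma)
            return np.tanh(x @ w)                                     # tanh neuron activation

    def predict_with_uncertainty(layer, x, n_samples=100):
        # Monte Carlo over weight samples: the mean is the prediction and the
        # standard deviation serves as a simple measure of inference uncertainty.
        outs = np.stack([layer.forward(x) for _ in range(n_samples)])
        return outs.mean(axis=0), outs.std(axis=0)

    layer = BayesianLinear(n_in=4, n_out=2)
    mean, uncertainty = predict_with_uncertainty(layer, np.ones((1, 4)))
    print(mean, uncertainty)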
Award ID(s):
2042154 2039351
NSF-PAR ID:
10388514
Date Published:
Journal Name:
Nature Communications
Volume:
13
Issue:
1
ISSN:
2041-1723
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract

    The representation of external stimuli in the form of action potentials or spikes constitutes the basis of energy efficient neural computation that emerging spiking neural networks (SNNs) aspire to imitate. With recent evidence suggesting that information in the brain is more often represented by explicit firing times of the neurons rather than mean firing rates, it is imperative to develop novel hardware that can accelerate sparse and spike‐timing‐based encoding. Here a medium‐scale integrated circuit composed of two cascaded three‐stage inverters and one XOR logic gate fabricated using a total of 21 memtransistors based on photosensitive 2D monolayer MoS2 for spike‐timing‐based encoding of visual information, is introduced. It is shown that different illumination intensities can be encoded into sparse spiking with time‐to‐first‐spike representing the illumination information, that is, higher intensities invoke earlier spikes and vice versa. In addition, non‐volatile and analog programmability in the photoencoder is exploited for adaptive photoencoding that allows expedited spiking under scotopic (low‐light) and deferred spiking under photopic (bright‐light) conditions, respectively. Finally, low energy expenditure of less than 1 µJ by the 2D‐memtransistor‐based photoencoder highlights the benefits of in‐sensor and bioinspired design that can be transformative for the acceleration of SNNs.
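    A toy sketch of time-to-first-spike encoding, assuming a simple charge-integration model in which brighter inputs reach threshold sooner; the threshold, time step, and window values are assumptions for illustration, not parameters from the paper.

        def time_to_first_spike(intensity, threshold=1.0, dt=1e-3, t_max=0.1):
            """Return the first time the integrated photo-signal crosses threshold."""
            v, t = 0.0, 0.0
            while t < t_max:
                v += intensity * dt          # photocurrent integration
                if v >= threshold:
                    return t                 # higher intensity -> earlier spike
                t += dt
            return None                      # no spike within the encoding window

        for lux in (10.0, 100.0, 1000.0):
            print(lux, time_to_first_spike(lux))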

     
  2. Abstract

    Memristors for neuromorphic computing have gained prominence over the years for implementing synapses and neurons due to their nano-scale footprint and reduced complexity. Several demonstrations show two-dimensional (2D) materials to be a promising platform for the realization of transparent, flexible, ultra-thin memristive synapses. However, unsupervised learning in a spiking neural network (SNN), facilitated by linearity and symmetry in the synaptic weight update, has not been explored thoroughly on the 2D materials platform. Here, we demonstrate that graphene/MoS2/SiOx/Ni synapses exhibit ideal linearity and symmetry when subjected to identical input pulses, which is essential for their role in online training of neural networks. The linearity in the weight update holds over a range of pulse widths, amplitudes, and numbers of applied pulses. Our work illustrates that the switching mechanism in MoS2-based synapses is through conductive filaments governed by Poole-Frenkel emission. We demonstrate that the graphene/MoS2/SiOx/Ni synapses, when integrated with a MoS2-based leaky integrate-and-fire neuron, can control the spiking of the neuron efficiently. This work establishes 2D MoS2 as a viable platform for all-memristive SNNs.
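    A minimal software analogue of the building blocks named above: a synapse that updates linearly and symmetrically under identical pulses, driving a leaky integrate-and-fire neuron. The update step, time constants, and threshold are assumed placeholder values, not device measurements.

        class LinearSynapse:
            def __init__(self, w=0.0, step=0.01, w_min=0.0, w_max=1.0):
                self.w, self.step, self.w_min, self.w_max = w, step, w_min, w_max

            def potentiate(self):   # identical pulse -> identical weight increment
                self.w = min(self.w + self.step, self.w_max)

            def depress(self):      # symmetric decrement under identical pulses
                self.w = max(self.w - self.step, self.w_min)

        class LIFNeuron:
            def __init__(self, tau=20e-3, v_th=0.05, dt=1e-3):
                self.tau, self.v_th, self.dt, self.v = tau, v_th, dt, 0.0

            def step(self, current):
                self.v += self.dt * (-self.v / self.tau + current)  # leaky integration
                if self.v >= self.v_th:
                    self.v = 0.0            # reset after firing
                    return 1                # output spike
                return 0

        syn, neuron = LinearSynapse(), LIFNeuron()
        for _ in range(50):
            syn.potentiate()                # strengthen the synapse linearly
        spikes = sum(neuron.step(10.0 * syn.w) for _ in range(200))
        print(spikes)                       # stronger synapse -> more output spikes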

     
  3. Abstract Bayesian networks (BNs) find widespread application in many real-world probabilistic problems including diagnostics, forecasting, computer vision, etc. The basic computing primitive for BNs is a stochastic bit (s-bit) generator that can control the probability of obtaining ‘1’ in a binary bit-stream. While silicon-based complementary metal-oxide-semiconductor (CMOS) technology can be used for hardware implementation of BNs, the lack of inherent stochasticity makes it area and energy inefficient. On the other hand, memristors and spintronic devices offer inherent stochasticity but lack computing ability beyond simple vector-matrix multiplication due to their two-terminal nature and rely on extensive CMOS peripherals for BN implementation, which limits area and energy efficiency. Here, we circumvent these challenges by introducing a hardware platform based on 2D memtransistors. First, we experimentally demonstrate a low-power and compact s-bit generator circuit that exploits cycle-to-cycle fluctuation in the post-programmed conductance state of 2D memtransistors. Next, the s-bit generators are monolithically integrated with 2D memtransistor-based logic gates to implement BNs. Our findings highlight the potential for 2D memtransistor-based integrated circuits for non-von Neumann computing applications.
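    A software sketch of the s-bit idea described above, under the assumption that cycle-to-cycle read fluctuation can be modeled as Gaussian noise around a programmed conductance compared against a reference level; the fluctuation model, reference value, and function name are illustrative assumptions.

        import numpy as np
        from statistics import NormalDist

        rng = np.random.default_rng(1)

        def s_bit_stream(p, n_bits, sigma=0.05, g_ref=1.0):
            """Emit n_bits where P(bit == 1) is approximately p."""
            # Program the mean conductance so that a Gaussian read fluctuation
            # crosses the reference level with probability p.
            g_mean = g_ref + sigma * NormalDist().inv_cdf(p)
            reads = rng.normal(g_mean, sigma, n_bits)     # cycle-to-cycle variation
            return (reads > g_ref).astype(int)

        bits = s_bit_stream(p=0.3, n_bits=10000)
        print(bits.mean())   # close to 0.3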
  4. Abstract

    Neuromorphic systems, which emulate the neural functionalities of the human brain, are considered an attractive next-generation computing approach, with the advantages of high energy efficiency and fast computing speed. Since neuromorphic systems were first proposed, it has been demonstrated that artificial synapses and neurons can mimic the neural functions of their biological counterparts. However, because neuromorphic functionalities are highly related to the surface properties of materials, bulk-material-based neuromorphic devices suffer from uncontrollable defects at surfaces and strong scattering caused by dangling bonds. Therefore, 2D materials, which have dangling-bond-free surfaces and excellent crystallinity, have emerged as promising candidates for neuromorphic computing hardware. First, fundamental synaptic behaviors, such as synaptic plasticity and learning rules, are reviewed, together with the requirements for artificial synapses to emulate biological synapses. In addition, recent advances in 2D materials-based synaptic devices are summarized and categorized by the working principles of the artificial synapses. Second, the essential behaviors and requirements of artificial neurons, such as the all-or-nothing law and refractory periods, needed to simulate a spiking neural network are described, and the implementation of 2D materials-based artificial neurons to date is reviewed. Finally, future challenges and outlooks of 2D materials-based neuromorphic devices are discussed.
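    A toy sketch of the two artificial-neuron requirements this review names, the all-or-nothing law and the refractory period; the threshold and refractory length are arbitrary assumed values.

        class SpikingNeuron:
            def __init__(self, v_th=1.0, refractory_steps=5):
                self.v_th = v_th
                self.refractory_steps = refractory_steps
                self.v = 0.0
                self.cooldown = 0

            def step(self, stimulus):
                if self.cooldown > 0:          # refractory period: input is ignored
                    self.cooldown -= 1
                    return 0
                self.v += stimulus
                if self.v >= self.v_th:        # all-or-nothing: spike is always full-size
                    self.v = 0.0
                    self.cooldown = self.refractory_steps
                    return 1
                return 0

        neuron = SpikingNeuron()
        print([neuron.step(0.4) for _ in range(20)])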

     
  5.
    Neuromorphic computing has become tremendously popular due to its ability to solve certain classes of learning tasks better than traditional von Neumann computers. Data-intensive classification and pattern recognition problems have been of special interest to neuromorphic engineers, as these problems present complex use cases for deep neural networks (DNNs), which are inspired by the architecture of the human brain and employ densely connected neurons and synapses organized in a hierarchical manner. However, as these systems become larger in order to handle an increasing amount of data and higher feature dimensionality, the designs often become connectivity-constrained. To solve this, the computation is divided into multiple cores/islands, called processing engines (PEs). Today, the communication among these PEs is carried out through a power-hungry network-on-chip (NoC), and hence the optimal distribution of these islands, along with energy-efficient compute and communication strategies, becomes extremely important in reducing the overall energy of the neuromorphic computer, which is currently orders of magnitude higher than that of the biological human brain. In this paper, we extensively analyze the choice of island size for mixed-signal neurons/synapses at 3-8 bit resolution within allowable ranges of system-level classification error, determined by the analog non-idealities (noise and mismatch) in the neurons, and propose strategies involving local and global communication to reduce system-level energy consumption. AC-coupled mixed-signal neurons are shown to have 10X lower non-idealities than DC-coupled ones, while the choice of the number of islands is shown to be a function of the network, constrained by the analog-to-digital conversion (or vice versa) power at the interface of the islands. The maximum number of layers in an island is analyzed, and a global bus-based sparse connectivity is proposed, which consumes orders of magnitude lower power than competing powerline communication techniques.
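    A back-of-the-envelope sketch of the partitioning trade-off described above: more islands shrink each compute block but add analog/digital conversion and communication energy at the island boundaries. All energy coefficients are made-up placeholders, not values from the paper.

        def system_energy(n_islands, n_neurons=4096,
                          e_compute_per_neuron=1e-9,     # J, analog compute (assumed)
                          e_adc_per_boundary=5e-7,       # J, ADC/DAC at island edges (assumed)
                          e_comm_per_boundary=2e-7):     # J, inter-island bus traffic (assumed)
            boundaries = max(n_islands - 1, 0)
            compute = n_neurons * e_compute_per_neuron
            interface = boundaries * (e_adc_per_boundary + e_comm_per_boundary)
            return compute + interface

        # Sweep the partitioning to see where interface energy starts to dominate.
        for n in (1, 2, 4, 8, 16, 32):
            print(n, system_energy(n))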