Title: A Superconducting Nanowire-based Architecture for Neuromorphic Computing. Selected as a Highlight of NCE for 2022.
Neuromorphic computing would benefit from improved, customized hardware. However, the translation of neuromorphic algorithms to hardware is not easily accomplished. In particular, building superconducting neuromorphic systems requires expertise in both superconducting physics and theoretical neuroscience, which makes such design particularly challenging. In this work, we aim to bridge this gap by presenting a tool and methodology to translate algorithmic parameters into circuit specifications. We first show the correspondence between theoretical neuroscience models and the dynamics of our circuit topologies. We then apply this tool to solve a linear system and implement Boolean logic gates by creating spiking neural networks with our superconducting nanowire-based hardware.
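As a rough illustration of the algorithm-to-circuit translation the abstract describes, the sketch below maps leaky integrate-and-fire (LIF) parameters onto a nanowire relaxation-oscillator neuron. It is not the authors' tool: it simply assumes the leak time constant corresponds to L/R of the nanowire loop and that the firing threshold is set by how far the bias current sits below the nanowire critical current I_c. All constants and names are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' tool): translate LIF parameters into
# circuit values for a superconducting-nanowire relaxation-oscillator neuron,
# assuming tau_mem = L / R and a threshold set by the nanowire critical current.

from dataclasses import dataclass

@dataclass
class LIFParams:
    tau_mem: float    # membrane time constant [s]
    v_thresh: float   # normalized threshold (fraction of rest-to-threshold range)

@dataclass
class NanowireCircuit:
    L: float          # loop inductance [H]
    R: float          # shunt resistance [ohm]
    I_c: float        # nanowire critical (switching) current [A]
    I_bias: float     # DC bias current [A]

def lif_to_circuit(p: LIFParams, I_c: float = 30e-6, R: float = 5.0) -> NanowireCircuit:
    """Map algorithmic LIF parameters onto hypothetical circuit specifications."""
    L = p.tau_mem * R                   # enforce tau_mem = L / R
    I_bias = (1.0 - p.v_thresh) * I_c   # higher threshold -> bias sits further below I_c
    return NanowireCircuit(L=L, R=R, I_c=I_c, I_bias=I_bias)

# Example: a 50 ns membrane time constant with the threshold at 80% of I_c.
print(lif_to_circuit(LIFParams(tau_mem=50e-9, v_thresh=0.8)))
```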
Award ID(s): 2139936, 2003830
NSF-PAR ID: 10405671
Journal Name: Neuromorphic Computing and Engineering
Volume: 2
Issue: 3
ISSN: 2634-4386
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Neuromorphic computing would benefit from improved, customized hardware. However, the translation of neuromorphic algorithms to hardware is not easily accomplished. In particular, building superconducting neuromorphic systems requires expertise in both superconducting physics and theoretical neuroscience, which makes such design particularly challenging. In this work, we aim to bridge this gap by presenting a tool and methodology to translate algorithmic parameters into circuit specifications. We first show the correspondence between theoretical neuroscience models and the dynamics of our circuit topologies. We then apply this tool to solve a linear system and implement Boolean logic gates by creating spiking neural networks with our superconducting nanowire-based hardware.
  2. Neuromorphic computing systems execute machine learning tasks designed with spiking neural networks. These systems are embracing non-volatile memory to implement high-density and low-energy synaptic storage. Elevated voltages and currents needed to operate non-volatile memories cause aging of CMOS-based transistors in each neuron and synapse circuit in the hardware, drifting the transistor’s parameters from their nominal values. If these circuits are used continuously for too long, the parameter drifts cannot be reversed, resulting in permanent degradation of circuit performance over time, eventually leading to hardware faults. Aggressive device scaling increases power density and temperature, which further accelerates the aging, challenging the reliable operation of neuromorphic systems. Existing reliability-oriented techniques periodically de-stress all neuron and synapse circuits in the hardware at fixed intervals, assuming worst-case operating conditions, without actually tracking their aging at run-time. To de-stress these circuits, normal operation must be interrupted, which introduces latency in spike generation and propagation, impacting the inter-spike interval and hence performance (e.g., accuracy). We observe that in contrast to long-term aging, which permanently damages the hardware, short-term aging in scaled CMOS transistors is mostly due to bias temperature instability. The latter is heavily workload-dependent and, more importantly, partially reversible. We propose a new architectural technique to mitigate the aging-related reliability problems in neuromorphic systems by designing an intelligent run-time manager (NCRTM), which dynamically de-stresses neuron and synapse circuits in response to the short-term aging in their CMOS transistors during the execution of machine learning workloads, with the objective of meeting a reliability target. NCRTM de-stresses these circuits only when it is absolutely necessary to do so, otherwise reducing the performance impact by scheduling de-stress operations off the critical path. We evaluate NCRTM with state-of-the-art machine learning workloads on neuromorphic hardware. Our results demonstrate that NCRTM significantly improves the reliability of neuromorphic hardware, with marginal impact on performance.
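To make the scheduling idea in item 2 concrete, here is a minimal, hypothetical sketch of a run-time manager in the spirit of NCRTM: it tracks a normalized stress estimate per circuit, de-stresses a circuit unconditionally only when the estimate approaches the reliability target, and otherwise defers de-stress to idle windows off the critical path. The linear stress model and all constants are assumptions, not values from the paper.

```python
# Hypothetical run-time de-stress policy: accumulate workload-dependent stress,
# recover it during de-stress, and interrupt execution only when necessary.

class RuntimeManager:
    def __init__(self, n_circuits: int, reliability_target: float = 0.8):
        self.stress = [0.0] * n_circuits   # normalized stress estimate per circuit
        self.target = reliability_target   # de-stress before stress exceeds this

    def step(self, activity: list[float], idle: bool) -> list[int]:
        """Advance one scheduling epoch; return indices of de-stressed circuits."""
        for i, a in enumerate(activity):
            self.stress[i] += 0.01 * a     # workload-dependent stress accumulation
        destressed = []
        for i, s in enumerate(self.stress):
            must = s >= self.target                      # unavoidable de-stress
            opportunistic = idle and s >= 0.5 * self.target  # off the critical path
            if must or opportunistic:
                self.stress[i] *= 0.2      # partial recovery (BTI is partially reversible)
                destressed.append(i)
        return destressed

mgr = RuntimeManager(n_circuits=4)
for epoch in range(100):
    spikes = [1.0, 0.3, 0.7, 0.1]             # per-circuit activity this epoch
    mgr.step(spikes, idle=(epoch % 10 == 9))  # an idle slot every 10 epochs
print([round(s, 2) for s in mgr.stress])
```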
  3. Autonomous flight for large aircraft appears to be within our reach. However, launching autonomous systems for everyday missions still requires an immense interdisciplinary research effort supported by pointed policies and funding. We believe that concerted endeavors in the fields of neuroscience, mathematics, sensor physics, robotics, and computer science are needed to address remaining crucial scientific challenges. In this paper, we argue for a bio-inspired approach to solve autonomous flying challenges, outline the frontier of sensing, data processing, and flight control within a neuromorphic paradigm, and chart directions of research needed to achieve operational capabilities comparable to those we observe in nature. One central problem of neuromorphic computing is learning. In biological systems, learning is achieved by adaptive and relativistic information acquisition characterized by near-continuous information retrieval with variable rates and sparsity. This results in savings of both energy and computational resources, which is an inspiration for autonomous systems. We consider pertinent features of insect, bat, and bird flight behavior as examples to address various vital aspects of autonomous flight. Insects exhibit sophisticated flight dynamics with comparatively reduced complexity of the brain. They represent excellent objects for the study of navigation and flight control. Bats and birds enable more complex models of attention and point to the importance of active sensing for conducting more complex missions. The implementation of neuromorphic paradigms for autonomous flight will require fundamental changes in both traditional hardware and software. We provide recommendations for sensor hardware and processing algorithm development to enable energy-efficient and computationally effective flight control.
  4. An emerging use-case of machine learning (ML) is to train a model on a high-performance system and deploy the trained model on energy-constrained embedded systems. Neuromorphic hardware platforms, which operate on principles of the biological brain, can significantly lower the energy overhead of a machine learning inference task, making these platforms an attractive solution for embedded ML systems. We present a design-technology tradeoff analysis to implement such inference tasks on the processing elements (PEs) of a Non-Volatile Memory (NVM)-based neuromorphic hardware. Through detailed circuit-level simulations at scaled process technology nodes, we show the negative impact of technology scaling on the information-processing latency, which impacts the quality-of-service (QoS) of an embedded ML system. At a finer granularity, the latency inside a PE depends on 1) the delay introduced by parasitic components on its current paths, and 2) the varying delay to sense different resistance states of its NVM cells. Based on these two observations, we make the following three contributions. First, on the technology front, we propose an optimization scheme where the NVM resistance state that takes the longest time to sense is set on current paths having the least delay, and vice versa, reducing the average PE latency, which improves the QoS. Second, on the architecture front, we introduce isolation transistors within each PE to partition it into regions that can be individually power-gated, reducing both latency and energy. Finally, on the system-software front, we propose a mechanism to leverage the proposed technological and architectural enhancements when implementing a machine-learning inference task on neuromorphic PEs of the hardware. Evaluations with a recent neuromorphic hardware architecture show that our proposed design-technology co-optimization approach improves both performance and energy efficiency of machine-learning inference tasks without incurring high cost-per-bit. 
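The pairing idea in item 4, placing the slowest-to-sense resistance states on the current paths with the least parasitic delay, can be sketched as a simple inverse sort-and-pair assignment. Assuming the paths of a PE are sensed in parallel so a read cycle finishes with its slowest path-plus-sense pair, the pairing lowers that completion time; the delay numbers below are invented for illustration, and the paper's latency model is more detailed.

```python
# Hypothetical sketch of latency-aware state placement: pair the slowest-to-sense
# resistance states with the fastest current paths, and vice versa.

def inverse_pair(path_delay_ns, sense_delay_ns):
    """Return assignment[path_index] = state_index under inverse sorting."""
    paths = sorted(range(len(path_delay_ns)), key=lambda i: path_delay_ns[i])   # fastest path first
    states = sorted(range(len(sense_delay_ns)),
                    key=lambda j: sense_delay_ns[j], reverse=True)              # slowest state first
    return dict(zip(paths, states))

path_delay = [1.0, 2.5, 4.0, 6.0]    # parasitic delay of each current path (ns), made up
sense_delay = [3.0, 5.5, 8.0, 12.0]  # time to sense each resistance state (ns), made up

naive = [path_delay[i] + sense_delay[i] for i in range(4)]
assignment = inverse_pair(path_delay, sense_delay)
optimized = [path_delay[p] + sense_delay[s] for p, s in assignment.items()]
# A parallel read completes with its slowest pair, so compare worst cases.
print("naive worst-case:", max(naive), "ns; optimized worst-case:", max(optimized), "ns")
```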
  5. While neuromorphic computing architectures based on Spiking Neural Networks (SNNs) are increasingly gaining interest as a pathway toward bio-plausible machine learning, attention is still focused on computational units like the neuron and synapse. Shifting from this neuro-synaptic perspective, this paper attempts to explore the self-repair role of glial cells, in particular, astrocytes. The work investigates stronger correlations with astrocyte computational neuroscience models to develop macro-models with a higher degree of bio-fidelity that accurately capture the dynamic behavior of the self-repair process. Hardware-software co-design analysis reveals that bio-morphic astrocytic regulation has the potential to self-repair hardware-realistic faults in neuromorphic hardware systems with significantly better accuracy and repair convergence for unsupervised learning tasks on the MNIST and F-MNIST datasets. Our implementation source code and trained models are available at https://github.com/NeuroCompLab-psu/Astromorphic_Self_Repair.

     
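As a toy illustration of the astrocyte-mediated self-repair in item 5, the sketch below models faults as synapses stuck at zero and applies an astrocyte-like rule that rescales the surviving synapses so the neuron's total synaptic drive is preserved. This is a simplified stand-in, not the paper's macro-model; the fault rate and compensation rule are assumptions.

```python
# Toy astrocyte-like self-repair: zero out faulty synapses and redistribute
# their lost drive onto the remaining healthy synapses.

import numpy as np

def astrocyte_repair(weights: np.ndarray, faulty: np.ndarray) -> np.ndarray:
    """Zero faulty synapses and rescale healthy ones to keep the total drive."""
    repaired = weights.copy()
    total_before = np.abs(weights).sum()
    repaired[faulty] = 0.0
    total_after = np.abs(repaired).sum()
    if total_after > 0:
        repaired[~faulty] *= total_before / total_after  # redistribute lost drive
    return repaired

rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, size=8)   # healthy synaptic weights
faults = rng.random(8) < 0.25       # ~25% of synapses become faulty
w_rep = astrocyte_repair(w, faults)
print("drive before:", w.sum().round(3), "after repair:", w_rep.sum().round(3))
```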