Title: Silicon Photonics for Artificial Intelligence and Neuromorphic Computing
Artificial intelligence and neuromorphic computing driven by neural networks have enabled many applications. Software implementations of neural networks on electronic platforms are limited in speed and energy efficiency. Neuromorphic photonics aims to build processors in which optical hardware mimics neural networks in the brain.
Prucnal, Paul R.; de Lima, Thomas Ferreira; Huang, Chaoran; Marquez, Bicky A.; Shastri, Bhavin J.
(2020 European Conference on Optical Communications (ECOC))
Artificial intelligence and neuromorphic computing driven by neural networks have enabled many applications. Software implementations of neural networks on electronic platforms are limited in speed and energy efficiency. Neuromorphic photonics aims to build processors in which optical hardware mimics neural networks in the brain.
Marquez, Bicky A.; Filipovich, Matthew J.; Howard, Emma R.; Bangari, Viraj; Guo, Zhimu; Morison, Hugh D.; Ferreira De Lima, Thomas; Tait, Alexander N.; Prucnal, Paul R.; Shastri, Bhavin J.
(Photoniques)
Artificial intelligence based on neural networks has enabled applications in many fields (e.g., medicine, finance, autonomous vehicles). Software implementations of neural networks on conventional computers are limited in speed and energy efficiency. Neuromorphic engineering aims to build processors in which hardware mimics neurons and synapses in the brain for distributed and parallel processing. Neuromorphic engineering enabled by silicon photonics can offer subnanosecond latencies and can extend the domain of artificial intelligence applications to high-performance computing and ultrafast learning. We discuss current progress and the challenges of scaling these demonstrations to practical systems for training and inference.
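To make the idea of photonic hardware mimicking a neuron concrete, the sketch below models a weighted addition in the broadcast-and-weight style often used in silicon photonic neural networks: each input rides its own wavelength, a microring weight bank applies a bounded transmission, and a balanced photodetector sums the weighted signals. This is a simplified numerical illustration under assumed parameters, not the hardware model from the cited work.

import numpy as np

def photonic_weighted_sum(inputs, weights):
    """Sum of wavelength-multiplexed inputs scaled by ring-bank weights."""
    inputs = np.asarray(inputs, dtype=float)   # optical input powers
    weights = np.clip(weights, -1.0, 1.0)      # balanced detection gives signed, bounded weights
    return float(np.dot(weights, inputs))      # photodetector output (one electrical signal)

def photonic_neuron(inputs, weights, bias=0.0):
    """One 'neuron': weighted addition followed by a nonlinearity.
    A sigmoid stands in for a modulator/laser transfer function (an assumption)."""
    z = photonic_weighted_sum(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Example: a 4-input neuron evaluated on one sample.
x = [0.2, 0.8, 0.5, 0.1]
w = [0.9, -0.4, 0.3, 0.7]
print(photonic_neuron(x, w, bias=-0.1))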
Seekings, J; Chandarana, P; Ardakani, M; Mohammadi, M; Zand, R
(IEEE)
This paper explores the synergistic potential of neuromorphic and edge computing to create a versatile machine learning (ML) system tailored for processing data captured by dynamic vision sensors. We construct and train hybrid models, blending spiking neural networks (SNNs) and artificial neural networks (ANNs) using PyTorch and Lava frameworks. Our hybrid architecture integrates an SNN for temporal feature extraction and an ANN for classification. We delve into the challenges of deploying such hybrid structures on hardware. Specifically, we deploy individual components on Intel's Neuromorphic Processor Loihi (for SNN) and Jetson Nano (for ANN). We also propose an accumulator circuit to transfer data from the spiking to the non-spiking domain. Furthermore, we conduct comprehensive performance analyses of hybrid SNN-ANN models on a heterogeneous system of neuromorphic and edge AI hardware, evaluating accuracy, latency, power, and energy consumption. Our findings demonstrate that the hybrid spiking networks surpass the baseline ANN model across all metrics and outperform the baseline SNN model in accuracy and latency.
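A minimal sketch of the spiking-to-non-spiking hand-off described above: an accumulator integrates the SNN's output spikes over a time window into rate-like features, which a non-spiking ANN head then classifies. The shapes, sizes, and random spike input are illustrative assumptions, not the paper's implementation or its Loihi/Jetson deployment.

import torch
import torch.nn as nn

def accumulate_spikes(spike_train):
    """spike_train: (time_steps, batch, features) binary tensor.
    Returns per-feature spike counts normalized by window length."""
    t = spike_train.shape[0]
    return spike_train.sum(dim=0) / t           # (batch, features) firing rates

ann_head = nn.Sequential(                       # non-spiking classifier head
    nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)
)

# Example: random spikes standing in for the SNN feature extractor's output.
spikes = (torch.rand(50, 8, 128) < 0.1).float() # 50 time steps, batch of 8
rates = accumulate_spikes(spikes)
logits = ann_head(rates)                        # (8, 10) class scores
print(logits.shape)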
Shrestha, Amar; Fang, Haowen; Wu, Qing; Qiu, Qinru
(International Conference on Neuromorphic Systems)
Asynchronous event-driven computation and communication using spikes allow spiking neural networks (SNNs) to be massively parallel, extremely energy efficient, and highly robust on specialized neuromorphic hardware. However, the lack of a unified, robust learning algorithm limits SNNs to shallow networks with low accuracies. Artificial neural networks (ANNs), by contrast, have the backpropagation algorithm, which can use gradient descent to train networks that are locally robust universal function approximators. But backpropagation is neither biologically plausible nor friendly to neuromorphic implementation because it requires: 1) separate backward and forward passes, 2) differentiable neurons, 3) high-precision propagated errors, 4) a coherent copy of the feedforward weight matrices in the backward pass, and 5) non-local weight updates. We therefore propose an approximation of the backpropagation algorithm implemented entirely with spiking neurons and extend it to a local weight-update rule that resembles the biologically plausible learning rule spike-timing-dependent plasticity (STDP). This enables error propagation through spiking neurons, yielding a more biologically plausible and neuromorphic-implementation-friendly backpropagation algorithm for SNNs. We test the proposed algorithm on various traditional and non-traditional benchmarks with competitive results.
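For reference, the sketch below shows a generic pair-based STDP update, the kind of local, spike-driven rule the abstract says the approximate backpropagation resembles. The parameter values and the textbook exponential form are assumptions; this is not the authors' algorithm.

import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Update one synaptic weight from a pre/post spike-time pair (ms).
    Pre-before-post potentiates; post-before-pre depresses."""
    dt = t_post - t_pre
    if dt > 0:                                   # causal pairing -> potentiation
        dw = a_plus * np.exp(-dt / tau)
    else:                                        # anti-causal pairing -> depression
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, 0.0, 1.0))      # keep the weight bounded

# Example: pre spike at 10 ms, post spike at 15 ms strengthens the synapse.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))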
Lombo, Andres E.; Lares, Jesus E.; Castellani, Matteo; Chou, Chi-Ning; Lynch, Nancy; Berggren, Karl K.
(Neuromorphic Computing and Engineering)
Neuromorphic computing would benefit from improved, customized hardware. However, translating neuromorphic algorithms to hardware is not easily accomplished. In particular, building superconducting neuromorphic systems requires expertise in both superconducting physics and theoretical neuroscience, which makes such design particularly challenging. In this work, we aim to bridge this gap by presenting a tool and methodology to translate algorithmic parameters into circuit specifications. We first show the correspondence between theoretical neuroscience models and the dynamics of our circuit topologies. We then apply this tool to solve a linear system and implement Boolean logic gates by creating spiking neural networks with our superconducting nanowire-based hardware.
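The sketch below simulates a leaky integrate-and-fire (LIF) neuron, the kind of theoretical neuroscience model whose parameters (time constant, threshold, reset) a translation tool like the one described would map onto circuit quantities. The hardware mapping itself is device-specific and is not reproduced here; all values below are illustrative assumptions.

import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Integrate dv/dt = (-v + I) / tau and emit a spike at threshold."""
    v, spikes = v_reset, []
    for i in input_current:
        v += dt * (-v + i) / tau
        if v >= v_th:                            # threshold crossing
            spikes.append(1)
            v = v_reset                          # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: constant drive above threshold produces a regular spike train.
spk = simulate_lif(np.full(500, 1.5))
print(int(spk.sum()), "spikes in 50 ms of simulated time")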
Shastri, Bhavin J.; de Lima, Thomas Ferreira; Huang, Chaoran; Marquez, Bicky A.; Shekhar, Sudip; Chrostowski, Lukas; Prucnal, Paul R. "Silicon Photonics for Artificial Intelligence and Neuromorphic Computing." In 2021 IEEE Photonics Society Summer Topicals Meeting Series (SUM). https://par.nsf.gov/biblio/10295727. doi:10.1109/SUM48717.2021.9505837.
@inproceedings{osti_10295727,
title = {Silicon Photonics for Artificial Intelligence and Neuromorphic Computing},
author = {Shastri, Bhavin J. and de Lima, Thomas Ferreira and Huang, Chaoran and Marquez, Bicky A. and Shekhar, Sudip and Chrostowski, Lukas and Prucnal, Paul R.},
booktitle = {2021 IEEE Photonics Society Summer Topicals Meeting Series (SUM)},
year = {2021},
url = {https://par.nsf.gov/biblio/10295727},
doi = {10.1109/SUM48717.2021.9505837},
abstractNote = {Artificial intelligence and neuromorphic computing driven by neural networks have enabled many applications. Software implementations of neural networks on electronic platforms are limited in speed and energy efficiency. Neuromorphic photonics aims to build processors in which optical hardware mimics neural networks in the brain.}
}