Title: Silicon microring synapses enable photonic deep learning beyond 9-bit precision

Deep neural networks (DNNs) consist of layers of neurons interconnected by synaptic weights. A high bit-precision in weights is generally required to guarantee high accuracy in many applications. Minimizing error accumulation between layers is also essential when building large-scale networks. Recent demonstrations of photonic neural networks are limited in bit-precision due to cross talk and the high sensitivity of optical components (e.g., resonators). Here, we experimentally demonstrate a record-high precision of 9 bits with a dithering control scheme for photonic synapses. We then numerically simulated the impact of increased synaptic precision on a wireless signal classification application. This work could help realize the potential of photonic neural networks for many practical, real-world tasks.
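The precision sweep described above can be pictured with a small, purely illustrative quantization sketch. Everything in it (uniform quantization, the random weight matrix, and the error metric) is an assumption for illustration; it is not the dithering control scheme or the wireless-classification simulation reported in the paper.

```python
# Illustrative only: uniformly quantize a weight matrix to b bits and report
# the resulting error, mimicking a bit-precision sweep at the weight level.
import numpy as np

def quantize_uniform(w, bits):
    """Snap each weight onto one of 2**bits evenly spaced levels spanning its range."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 16))  # hypothetical synaptic weight matrix
for bits in (4, 6, 8, 9):
    err = np.abs(quantize_uniform(w, bits) - w).mean()
    print(f"{bits}-bit weights: mean absolute quantization error = {err:.5f}")
```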

 
Award ID(s):
2128616
PAR ID:
10369556
Author(s) / Creator(s):
; ; ; ; ; ; ; ;
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Optica
Volume:
9
Issue:
5
ISSN:
2334-2536
Format(s):
Medium: X
Size(s):
Article No. 579
Sponsoring Org:
National Science Foundation
More Like this
  1. Quantized deep neural networks (QDNNs) are attractive due to their much lower memory storage and faster inference speed than their regular full-precision counterparts. To maintain the same performance level, especially at low bit-widths, QDNNs must be retrained. Their training involves piece-wise constant activation functions and discrete weights; hence, mathematical challenges arise. We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm for training fully quantized neural networks. The coarse gradient is generally not the gradient of any function but an artificial ascent direction. The BCGD weight update applies a coarse gradient correction to a weighted average of the full-precision weights and their quantization (the so-called blending), which yields sufficient descent in the objective value and thus accelerates training. Our experiments demonstrate that this simple blending technique is very effective for quantization at extremely low bit-widths such as binarization. In full quantization of ResNet-18 for the ImageNet classification task, BCGD gives 64.36% top-1 accuracy with binary weights across all layers and 4-bit adaptive activation. If the weights in the first and last layers are kept in full precision, this number increases to 65.46%. As theoretical justification, we present a convergence analysis of coarse gradient descent for a two-linear-layer neural network model with Gaussian input data and prove that the expected coarse gradient correlates positively with the underlying true gradient.
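As a rough picture of the blended update described above, the sketch below applies a coarse-gradient step to a convex combination of the full-precision weights and their quantization. The sign-based binarizer, blending factor, and learning rate are assumptions chosen for illustration, not the authors' implementation.

```python
# Sketch of a BCGD-style step (assumed form): blend full-precision weights with
# their quantization, then subtract a coarse-gradient correction.
import numpy as np

def binarize(w):
    """1-bit quantizer: sign of each weight scaled by the mean magnitude."""
    return np.sign(w) * np.abs(w).mean()

def bcgd_step(w, coarse_grad, lr=0.01, rho=1e-4):
    """One blended coarse gradient descent update of the full-precision weights w."""
    return (1.0 - rho) * w + rho * binarize(w) - lr * coarse_grad

# Toy usage: the coarse gradient would normally come from a backward pass through
# the quantized network; here a random vector stands in for it.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
w = bcgd_step(w, coarse_grad=rng.normal(size=10))
```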
  2. The integration of computing with memory is essential for distributed, massively parallel, and adaptive architectures such as neural networks in artificial intelligence (AI). Accelerating AI can be achieved through photonic computing, but it requires nonvolatile photonic memory capable of rapid updates during on-chip training sessions or when new information becomes available during deployment. Phase-change materials (PCMs) are promising for providing compact, nonvolatile optical weighting; however, they face limitations in bit precision, programming speed, and cycling endurance. Here, we propose a novel photonic memory cell that merges nonvolatile photonic weighting using PCMs with high-speed, volatile tuning enabled by an integrated PN junction. Our experiments demonstrate that the same PN modulator, fabricated via a foundry-compatible process, can achieve dual functionality: it supports coarse programmability for setting initial optical weights and facilitates high-speed fine-tuning to adjust these weights dynamically. The results show a 400-fold increase in volatile tuning speed and a 10,000-fold enhancement in efficiency. This multifunctional photonic memory with volatile and nonvolatile capabilities could significantly advance the performance and versatility of photonic memory cells, providing robust solutions for dynamic computing environments.
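A conceptual way to read the dual functionality described above is to split each weight into a coarse, nonvolatile setting plus a small, fast volatile trim. The sketch below is only a cartoon of that decomposition; the number of coarse levels and the variable names are assumptions, not measured device characteristics.

```python
# Conceptual illustration: decompose a target weight into a coarse (nonvolatile,
# PCM-like) level and a fine (volatile, PN-junction-like) trim.
import numpy as np

COARSE_LEVELS = np.linspace(0.0, 1.0, 16)  # hypothetical set of nonvolatile levels

def program_weight(target):
    """Return the nearest coarse level and the residual handled by fast fine-tuning."""
    coarse = COARSE_LEVELS[np.argmin(np.abs(COARSE_LEVELS - target))]
    fine = target - coarse  # small, rapidly adjustable correction
    return coarse, fine

coarse, fine = program_weight(0.374)
print(f"coarse = {coarse:.3f}, fine trim = {fine:+.3f}, total = {coarse + fine:.3f}")
```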

     
  3. We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs). Specifically, we quantize the weights to zero or powers of 2 by minimizing the Euclidean distance between full-precision weights and quantized weights during back-propagation (weight learning). We characterize the combinatorial nature of the low bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of N weights can be done by an exact formula in O(N log N) complexity. When the bit-width is 3 and above, we further propose a semi-analytical thresholding scheme with a single free parameter for quantization that is computationally inexpensive. The free parameter is determined by network retraining and object detection tests. LBW-Net offers several advantages over full-precision CNNs, including considerable memory savings, energy efficiency, and faster deployment. Our experiments on the PASCAL VOC dataset show that, compared with its 32-bit floating-point counterpart, the 6-bit LBW-Net is nearly lossless in object detection tasks and can even do better in real-world visual scenes, while empirically enjoying more than 4× faster deployment.
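The zero-or-power-of-two quantization step described above can be illustrated with a brute-force nearest-candidate search. The exponent range and example weights below are assumptions for illustration; LBW-Net itself uses the closed-form and semi-analytical schemes described in the abstract rather than this naive search.

```python
# Illustrative only: map each weight to the nearest value in {0} U {+/- 2**k}.
import numpy as np

def quantize_pow2(w, k_min=-5, k_max=0):
    """Snap each weight to zero or a signed power of two with exponent in [k_min, k_max]."""
    candidates = np.array([0.0] + [s * 2.0 ** k
                                   for s in (-1.0, 1.0)
                                   for k in range(k_min, k_max + 1)])
    idx = np.argmin(np.abs(w[..., None] - candidates), axis=-1)
    return candidates[idx]

w = np.array([0.03, -0.4, 0.9, -0.07])
print(quantize_pow2(w))  # -> [ 0.03125 -0.5      1.      -0.0625 ]
```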
  4. Domain-specific neural network accelerators have garnered attention because of their improved energy efficiency and inference performance compared to CPUs and GPUs. Such accelerators are thus well suited for resource-constrained embedded systems. However, mapping sophisticated neural network models on these accelerators still entails significant energy and memory consumption, along with high inference time overhead. Binarized neural networks (BNNs), which utilize single-bit weights, represent an efficient way to implement and deploy neural network models on accelerators. In this paper, we present a novel optical-domain BNN accelerator, named ROBIN, which intelligently integrates heterogeneous microring resonator optical devices with complementary capabilities to efficiently implement the key functionalities in BNNs. We perform detailed fabrication-process variation analyses at the optical device level, explore efficient corrective tuning for these devices, and integrate circuit-level optimization to counter thermal variations. As a result, our proposed ROBIN architecture is robust, energy-efficient, low-latency, and high-throughput when executing BNN models. Our analysis shows that ROBIN can outperform the best-known optical BNN accelerators and many electronic accelerators. Specifically, our energy-efficient ROBIN design exhibits energy-per-bit values that are ∼4× lower than electronic BNN accelerators and ∼933× lower than a recently proposed photonic BNN accelerator, while a performance-efficient ROBIN design shows ∼3× and ∼25× better performance than electronic and photonic BNN accelerators, respectively.
  5. In this paper, we present AnalogVNN, a simulation framework built on PyTorch that can simulate the effects of optoelectronic noise, limited precision, and signal normalization present in photonic neural network accelerators. We use this framework to train and optimize linear and convolutional neural networks with up to nine layers and ∼1.7 × 10^6 parameters, while gaining insights into how normalization, activation function, reduced precision, and noise influence accuracy in analog photonic neural networks. By following the same layer structure design present in PyTorch, the AnalogVNN framework allows users to convert most digital neural network models to their analog counterparts with just a few lines of code, taking full advantage of the open-source optimization, deep learning, and GPU acceleration libraries available through PyTorch.
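As a rough idea of what such a conversion looks like, the sketch below wraps a standard PyTorch linear layer with output quantization and additive Gaussian noise. It is written in plain PyTorch and does not use the actual AnalogVNN API; the bit width, noise level, and layer sizes are assumptions for illustration.

```python
# Plain-PyTorch sketch (not the AnalogVNN API): a linear layer whose output is
# bounded, quantized to a fixed number of levels, and perturbed by Gaussian noise.
import torch
import torch.nn as nn

class NoisyQuantizedLinear(nn.Module):
    def __init__(self, in_features, out_features, bits=6, noise_std=0.01):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.levels = 2 ** bits - 1
        self.noise_std = noise_std

    def forward(self, x):
        y = torch.tanh(self.linear(x))  # bound the analog signal to [-1, 1]
        # Reduced precision (note: training would need a straight-through estimator,
        # since torch.round blocks gradients; omitted here for brevity).
        y = torch.round((y + 1) / 2 * self.levels) / self.levels * 2 - 1
        return y + self.noise_std * torch.randn_like(y)  # stand-in for optoelectronic noise

model = nn.Sequential(NoisyQuantizedLinear(784, 128), NoisyQuantizedLinear(128, 10))
out = model(torch.randn(32, 784))  # forward pass on a random batch
```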