Search for: All records

Creators/Authors contains: "Shah, Vivswan"


  1. Optical processing of information holds great promise for addressing many challenges facing the field of computing. However, integrated photonic processors are typically limited by the physical size of the processing units and the energy consumption of high-speed analog-to-digital conversion. In this paper, we demonstrate an integrated, coherent approach to processing temporally multiplexed optical signals using a modular dot-product unit cell to address these challenges. We use these unit cells to demonstrate multiply-accumulate operations on real- and complex-valued inputs using coherent detection and temporal integration. We then extend this to computing the covariance between stochastic bit streams, which can be used to estimate correlation between data streams in the optical domain. Finally, we demonstrate a path to scaling up our platform to enable general matrix-matrix operations. Our approach has the potential to enable highly efficient and scalable optical computing on-chip for a broad variety of AI applications.
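The abstract's two key operations, multiply-accumulate by temporal integration and covariance estimation between stochastic bit streams, can be illustrated numerically. The sketch below is a hypothetical numpy model, not the paper's photonic implementation: the bit streams, mixing probability, and accumulation scheme are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stochastic bit streams that share a common component, so they are
# correlated (the 0.7 mixing probability is an illustrative assumption).
n = 100_000
common = rng.random(n) < 0.5
a = np.where(rng.random(n) < 0.7, common, rng.random(n) < 0.5).astype(float)
b = np.where(rng.random(n) < 0.7, common, rng.random(n) < 0.5).astype(float)

# Multiply-accumulate by temporal integration: at each time step the unit
# cell forms the product a[t] * b[t]; an integrator sums products over time.
mac = np.sum(a * b)

# Covariance from the same running sums: Cov(a, b) ≈ E[ab] - E[a]E[b].
cov_est = mac / n - (a.sum() / n) * (b.sum() / n)

print(mac, cov_est)
```

Because both the MAC and the two stream means are simple accumulations, the covariance estimate needs only quantities an integrator can produce, which is what makes in-domain correlation estimation attractive.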

  2. In this paper, we present AnalogVNN, a simulation framework built on PyTorch that can simulate the effects of optoelectronic noise, limited precision, and signal normalization present in photonic neural network accelerators. We use this framework to train and optimize linear and convolutional neural networks with up to nine layers and ∼1.7 × 10^6 parameters, while gaining insights into how normalization, activation function, reduced precision, and noise influence accuracy in analog photonic neural networks. By following the same layer structure design present in PyTorch, the AnalogVNN framework allows users to convert most digital neural network models to their analog counterparts with just a few lines of code, taking full advantage of the open-source optimization, deep learning, and GPU acceleration libraries available through PyTorch.
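The three analog effects the abstract names (signal normalization, limited precision, optoelectronic noise) can be sketched in a single forward pass. This is a minimal framework-agnostic numpy sketch under assumed parameters (4-bit quantization, Gaussian readout noise with σ = 0.02); it is not AnalogVNN's actual PyTorch API, and `quantize` and `analog_linear` are hypothetical helpers.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Round to a uniform grid of 2**bits levels over [-1, 1] (limited precision)."""
    scale = 2 ** bits - 1
    x = np.clip(x, -1.0, 1.0)
    return np.round((x + 1.0) / 2.0 * scale) / scale * 2.0 - 1.0

def analog_linear(x, w, bits=4, noise_std=0.02):
    """Linear layer forward pass with analog-style imperfections:
    normalize, quantize inputs and weights, add Gaussian readout noise."""
    xq = quantize(x / np.max(np.abs(x)), bits)   # normalize, then quantize input
    wq = quantize(w / np.max(np.abs(w)), bits)   # quantize weights
    y = xq @ wq
    return y + rng.normal(0.0, noise_std, size=y.shape)  # optoelectronic noise

x = rng.normal(size=(8,))
w = rng.normal(size=(8, 4))
y = analog_linear(x, w)
print(y)
```

Training with these effects in the forward pass is what lets the framework expose how accuracy degrades with bit depth and noise level, which is the kind of study the abstract describes.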
    Free, publicly-accessible full text available June 1, 2024