

Search for: All records

Award ID contains: 2019336

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Abstract

    The incorporation of high‐performance optoelectronic devices into photonic neuromorphic processors can substantially accelerate the computationally intensive matrix multiplication operations in machine learning (ML) algorithms. However, the conventional designs of individual devices and systems are largely disconnected, and system optimization is limited to manual exploration of a small design space. Here, a device‐system end‐to‐end design methodology is reported that optimizes a free‐space optical general matrix multiplication (GEMM) hardware accelerator by engineering a spatially reconfigurable array made from chalcogenide phase change materials. Using a highly parallelized, experimentally informed integrated hardware emulator, the unit device is designed to directly optimize GEMM calculation accuracy by exploring a large parameter space with reinforcement learning algorithms, including a deep Q‐learning neural network, Bayesian optimization, and their cascaded combination. The algorithm‐generated physical quantities show a clear correlation between system performance metrics and device specifications. Furthermore, physics‐aware training approaches are employed to deploy the optimized hardware to image classification, materials discovery, and a closed‐loop design of optical ML accelerators. The demonstrated framework offers insights into the end‐to‐end co‐design of optoelectronic devices and systems with reduced human supervision and lower domain-knowledge barriers. (A minimal illustrative code sketch of the cascaded search idea follows this entry.)

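The sketch below illustrates the cascaded device-parameter search described in this abstract. It is a toy illustration under stated assumptions, not the paper's implementation: the emulator, the device parameters (phase levels, extinction ratio), and both search stages are simplified stand-ins (a bandit-style value table in place of the deep Q-network, and random local refinement in place of Bayesian optimization).

```python
# Toy sketch of a cascaded search that maximizes emulated GEMM accuracy.
# Everything here (parameter names, emulator model, search stages) is an
# illustrative assumption, not the paper's actual emulator or algorithm.
import numpy as np

rng = np.random.default_rng(0)

def gemm_accuracy(params):
    """Hypothetical emulator: maps device parameters to a GEMM accuracy score."""
    levels, er_db = params
    quant_err = 1.0 / levels            # fewer phase levels -> more quantization error
    crosstalk = np.exp(-er_db / 10.0)   # lower extinction ratio -> more crosstalk
    return 1.0 - quant_err - 0.5 * crosstalk + 0.01 * rng.standard_normal()

# --- Stage 1: coarse exploration over a discretized parameter grid
# (a bandit-style value table standing in for the deep Q-network) ---
level_grid = np.array([2, 4, 8, 16, 32])
er_grid = np.linspace(5.0, 30.0, 6)
q_table = np.zeros((len(level_grid), len(er_grid)))

eps, lr = 0.3, 0.2
for _ in range(500):
    if rng.random() < eps:                      # explore
        i, j = rng.integers(len(level_grid)), rng.integers(len(er_grid))
    else:                                       # exploit current best estimate
        i, j = np.unravel_index(np.argmax(q_table), q_table.shape)
    reward = gemm_accuracy((level_grid[i], er_grid[j]))
    q_table[i, j] += lr * (reward - q_table[i, j])

best_i, best_j = np.unravel_index(np.argmax(q_table), q_table.shape)
coarse_best = (level_grid[best_i], er_grid[best_j])

# --- Stage 2: local refinement around the coarse optimum
# (standing in for the sample-efficient Bayesian-optimization stage) ---
best_params, best_score = coarse_best, gemm_accuracy(coarse_best)
for _ in range(200):
    cand = (max(2, coarse_best[0] + rng.integers(-2, 3)),
            coarse_best[1] + rng.normal(0.0, 2.0))
    score = gemm_accuracy(cand)
    if score > best_score:
        best_params, best_score = cand, score

print("coarse optimum:", coarse_best)
print("refined optimum:", best_params, "emulated accuracy:", round(best_score, 3))
```

The point of the cascade is that an exploration-heavy coarse stage narrows the parameter space before a sample-efficient refinement stage polishes the optimum; the emulator lets this loop run many more evaluations than physical hardware would allow.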
  2. Abstract

    Diffractive optical neural networks have shown promising advantages over electronic circuits for accelerating modern machine learning (ML) algorithms. However, it is challenging to achieve a fully programmable all‐optical implementation and rapid hardware deployment. Here, a large‐scale, cost‐effective, complex‐valued, and reconfigurable diffractive all‐optical neural network system operating in the visible range is demonstrated, based on cascaded transmissive twisted nematic liquid crystal spatial light modulators. A categorical reparameterization technique creates a physics‐aware training framework for the fast and accurate deployment of computer‐trained models onto the optical hardware. This full stack of hardware and software enables not only the experimental classification of handwritten digits from standard datasets, but also the theoretical analysis and experimental verification of physics‐aware adversarial attacks on the system, generated with a complex‐valued gradient‐based algorithm. A detailed comparison of adversarial robustness with conventional multilayer perceptrons and convolutional neural networks reveals a distinct statistical adversarial property of diffractive optical neural networks. The developed full stack of software and hardware provides new opportunities for employing diffractive optics in a variety of ML tasks and for research on optical adversarial ML. (A minimal illustrative sketch of the categorical-reparameterization idea follows this entry.)

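The sketch below illustrates the categorical-reparameterization (Gumbel-softmax) idea behind physics-aware deployment: each trainable pixel chooses among the discrete phase levels a spatial light modulator can actually display, while remaining differentiable during training. The phase levels, temperature, and array sizes are assumptions for illustration, and the actual training would run inside an autodiff framework rather than plain NumPy.

```python
# Minimal NumPy sketch of Gumbel-softmax relaxation over discrete SLM phase
# levels. The 8-level phase set, 4x4 mask size, and temperature are assumed.
import numpy as np

rng = np.random.default_rng(1)

slm_phase_levels = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # assumed 8 levels

def gumbel_softmax(logits, tau=0.5):
    """Relaxed (soft) one-hot sample over the discrete phase levels."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                      # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y -= y.max(axis=-1, keepdims=True)           # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# Trainable logits for a tiny 4x4 phase mask: one categorical per pixel.
logits = rng.normal(size=(4, 4, len(slm_phase_levels)))

soft = gumbel_softmax(logits, tau=0.5)                   # differentiable surrogate
hard = np.eye(len(slm_phase_levels))[soft.argmax(-1)]    # what the SLM displays

phase_soft = (soft * slm_phase_levels).sum(-1)   # phase used during training
phase_hard = (hard * slm_phase_levels).sum(-1)   # quantized phase sent to hardware

print("max |soft - hard| phase gap:", np.abs(phase_soft - phase_hard).max())
```

Lowering the temperature `tau` during training pushes the soft samples toward hard one-hot choices, which is what allows a computer-trained model to land on hardware-displayable phase values with little accuracy loss.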
  3. Abstract

    Deep neural networks (DNNs) have substantial computational requirements, which greatly limit their performance in resource-constrained environments. Recently, there have been increasing efforts on optical neural networks and optical-computing-based DNN hardware, which bring significant advantages to deep learning systems in terms of power efficiency, parallelism, and computational speed. Among them, free-space diffractive deep neural networks (D²NNs), based on light diffraction, feature millions of neurons in each layer interconnected with neurons in neighboring layers. However, because of the challenge of implementing reconfigurability, deploying different DNN algorithms requires rebuilding and duplicating the physical diffractive systems, which significantly degrades hardware efficiency in practical application scenarios. Thus, this work proposes a novel hardware-software co-design method that enables first-of-its-kind real-time multi-task learning in D²NNs, in which the system automatically recognizes which task is being deployed in real time. Our experimental results demonstrate significant improvements in versatility and hardware efficiency, and also quantify the robustness of the proposed multi-task D²NN architecture under wide noise ranges in all system components. In addition, we propose a domain-specific regularization algorithm for training the proposed multi-task architecture, which can be used to flexibly adjust the desired performance of each task. (A minimal illustrative sketch of such a per-task weighted objective follows this entry.)
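The sketch below illustrates the spirit of the per-task weighting described above: per-task coefficients in the training objective let the designer trade accuracy between tasks that share one diffractive system. The loss form, weights, and shapes are assumptions for illustration, not the paper's regularization algorithm.

```python
# Illustrative per-task weighted objective for multi-task training.
# Task sizes, weights, and the cross-entropy form are assumed for the example.
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true class."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-9))

def multitask_loss(task_probs, task_labels, task_weights):
    """Weighted sum of per-task losses; raising one task's weight shifts the
    shared diffractive layers toward that task during training."""
    return sum(w * cross_entropy(p, y)
               for p, y, w in zip(task_probs, task_labels, task_weights))

rng = np.random.default_rng(2)
# Two toy tasks with 10 and 4 classes, batch of 8 (softmax-normalized scores).
p1 = rng.random((8, 10)); p1 /= p1.sum(1, keepdims=True)
p2 = rng.random((8, 4));  p2 /= p2.sum(1, keepdims=True)
y1 = rng.integers(0, 10, 8); y2 = rng.integers(0, 4, 8)

print(multitask_loss([p1, p2], [y1, y2], task_weights=[1.0, 0.5]))
```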