Search for: All records

Award ID contains: 2047176

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Abstract: The incorporation of high-performance optoelectronic devices into photonic neuromorphic processors can substantially accelerate the computationally intensive matrix multiplication operations in machine learning (ML) algorithms. However, conventional designs of individual devices and systems are largely disconnected, and system optimization is limited to manual exploration of a small design space. Here, a device-system end-to-end design methodology is reported to optimize a free-space optical general matrix multiplication (GEMM) hardware accelerator by engineering a spatially reconfigurable array made from chalcogenide phase-change materials. Using a highly parallelized hardware emulator integrated with experimental information, the unit device is designed to directly optimize GEMM calculation accuracy, exploring a large parameter space through reinforcement learning algorithms, including a deep Q-learning neural network, Bayesian optimization, and their cascaded approach. The algorithm-generated physical quantities show a clear correlation between system performance metrics and device specifications. Furthermore, physics-aware training approaches are employed to deploy the optimized hardware to the tasks of image classification, materials discovery, and a closed-loop design of optical ML accelerators. The demonstrated framework offers insights into the end-to-end co-design of optoelectronic devices and systems with reduced human supervision and lowered domain-knowledge barriers.
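As a rough illustration of the device-system optimization loop described in this abstract, the sketch below pairs a toy NumPy hardware emulator (a hypothetical noisy, quantized-transmission phase-change pixel model) with a simple epsilon-greedy search over the number of programmable transmission levels, scored directly by GEMM calculation accuracy. It is a minimal stand-in for the paper's deep Q-learning and Bayesian optimization agents, not the authors' implementation; every function name, parameter value, and device model here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def emulate_gemm(a, b, levels, noise=0.01):
    """Toy hardware emulator (hypothetical): entries of `b` are quantized
    to `levels` discrete transmission states of a phase-change pixel and
    perturbed by read noise before the optical matrix multiplication."""
    lo, hi = b.min(), b.max()
    q = np.round((b - lo) / (hi - lo) * (levels - 1)) / (levels - 1)
    b_hw = q * (hi - lo) + lo + rng.normal(0.0, noise, b.shape)
    return a @ b_hw

def gemm_accuracy(levels, trials=20, n=32):
    """Map the mean relative GEMM error to an accuracy-like reward."""
    errs = []
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        b = rng.standard_normal((n, n))
        ref = a @ b
        out = emulate_gemm(a, b, levels)
        errs.append(np.linalg.norm(out - ref) / np.linalg.norm(ref))
    return 1.0 - float(np.mean(errs))

# Epsilon-greedy search over a discrete device design space (the number
# of programmable transmission levels), standing in for the paper's
# reinforcement-learning and Bayesian-optimization agents.
candidates = [2, 4, 8, 16, 32, 64]
q_values = np.zeros(len(candidates))
counts = np.zeros(len(candidates))
for step in range(60):
    if rng.random() < 0.2:                 # explore
        i = int(rng.integers(len(candidates)))
    else:                                  # exploit
        i = int(np.argmax(q_values))
    reward = gemm_accuracy(candidates[i])
    counts[i] += 1
    q_values[i] += (reward - q_values[i]) / counts[i]  # running mean

best = candidates[int(np.argmax(q_values))]
print(f"selected levels: {best}, estimated accuracy: {q_values.max():.3f}")
```

In the actual work, the reward would come from the parallelized emulator calibrated with experimental device data, and the agent would act over richer device specifications than a single quantization parameter.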
  2. Abstract: Diffractive optical neural networks have shown promising advantages over electronic circuits for accelerating modern machine learning (ML) algorithms. However, it is challenging to achieve a fully programmable all-optical implementation and rapid hardware deployment. Here, a large-scale, cost-effective, complex-valued, and reconfigurable diffractive all-optical neural network system in the visible range is demonstrated, based on cascaded transmissive twisted-nematic liquid crystal spatial light modulators. The employment of a categorical reparameterization technique creates a physics-aware training framework for the fast and accurate deployment of computer-trained models onto the optical hardware. Such a full stack of hardware and software enables not only the experimental demonstration of classifying handwritten digits from standard datasets, but also the theoretical analysis and experimental verification of physics-aware adversarial attacks on the system, which are generated with a complex-valued gradient-based algorithm. A detailed adversarial-robustness comparison with conventional multilayer perceptrons and convolutional neural networks reveals a distinct statistical adversarial property of diffractive optical neural networks. The developed full stack of software and hardware provides new opportunities for employing diffractive optics in a variety of ML tasks and in research on optical adversarial ML.
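The categorical reparameterization mentioned in this abstract can be sketched as a Gumbel-softmax relaxation that maps trainable per-pixel logits onto the discrete complex transmission states a twisted-nematic LC-SLM pixel can realize. The snippet below is a minimal NumPy illustration under that assumption; the state lookup table, array sizes, and temperature are hypothetical, and the full physics-aware training loop (diffraction model, loss, gradients) is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_softmax(logits, tau=0.5):
    """Categorical reparameterization (Gumbel-softmax): returns a soft
    one-hot sample over the discrete states for each pixel."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + g) / tau
    y -= y.max(axis=-1, keepdims=True)        # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical lookup table of complex transmission states realizable by
# a twisted-nematic LC-SLM pixel (coupled amplitude and phase modulation).
states = 0.8 * np.exp(1j * np.linspace(0.0, 1.5 * np.pi, 8))

# Trainable logits: one categorical distribution per SLM pixel.
n_pixels = 64 * 64
logits = rng.standard_normal((n_pixels, states.size))

# Soft (training-time) and hard (deployment-time) pixel modulations.
soft = gumbel_softmax(logits, tau=0.5) @ states
hard = states[np.argmax(logits, axis=-1)]

print("mean |soft - hard| deviation:", np.abs(soft - hard).mean())
```

The hard argmax of the same logits gives the discrete pattern actually written to the SLM at deployment time, which is what keeps the computer-trained model consistent with the optical hardware.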