
Search for: All records

Creators/Authors contains: "Wright, Logan G."


  1. Abstract: Deep learning has become a widespread tool in both science and industry. However, continued progress is hampered by the rapid growth in energy costs of ever-larger deep neural networks. Optical neural networks provide a potential means to solve the energy-cost problem faced by deep learning. Here, we experimentally demonstrate an optical neural network based on optical dot products that achieves 99% accuracy on handwritten-digit classification using ~3.1 detected photons per weight multiplication and ~90% accuracy using ~0.66 photons (~2.5 × 10⁻¹⁹ J of optical energy) per weight multiplication. The fundamental principle enabling our sub-photon-per-multiplication demonstration (noise reduction from the accumulation of scalar multiplications in dot-product sums) is applicable to many different optical-neural-network architectures. Our work shows that optical neural networks can achieve accurate results using extremely low optical energies.
    Free, publicly-accessible full text available December 1, 2023
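The noise-averaging principle named in the abstract above can be illustrated with a toy simulation, a sketch rather than the authors' experimental setup: each scalar multiplication is detected as a Poisson-distributed photon count, and summing many such terms into a dot product averages the shot noise down. The function `noisy_dot` and all parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_dot(x, w, photons_per_mult, trials=2000):
    """Simulate a shot-noise-limited optical dot product.

    Each scalar product x_i * w_i is detected as a Poisson photon
    count with mean photons_per_mult * x_i * w_i (all values assumed
    non-negative here). The signal grows linearly with the vector
    length N while shot noise grows only as sqrt(N), so the relative
    error of the summed dot product falls as 1/sqrt(N).
    """
    counts = rng.poisson(photons_per_mult * x * w, size=(trials, x.size))
    sums = counts.sum(axis=1) / photons_per_mult   # estimate of x . w
    return sums.mean(), sums.std()

x = np.ones(100)
w = np.ones(100)
for n in (10, 100):
    mean, std = noisy_dot(x[:n], w[:n], photons_per_mult=0.66)
    print(f"N = {n:3d}: dot-product estimate {mean:6.1f}, relative error {std / mean:.3f}")
```

Even at the sub-photon regime of ~0.66 photons per multiplication, the longer dot product is markedly more accurate, which is the accumulation effect the abstract refers to.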
  2. The overall goal of photonics research is to understand and control light in new and richer ways to facilitate new and richer applications. Many major developments to this end have relied on nonlinear optical techniques, such as lasing, mode-locking, and parametric downconversion, to enable applications based on the interactions of coherent light with matter. These processes often involve nonlinear interactions between photonic and material degrees of freedom spanning multiple spatiotemporal scales. While great progress has been made with relatively simple optimizations, such as maximizing single-mode coherence or peak intensity alone, the ultimate achievement of coherent light engineering is complete, multidimensional control of light–light and light–matter interactions through tailored construction of complex optical fields and systems that exploit all of light’s degrees of freedom. This capability is now within sight, due to advances in telecommunications, computing, algorithms, and modeling. Control of highly multimode optical fields and processes also facilitates quantitative and qualitative advances in optical imaging, sensing, communication, and information processing since these applications directly depend on our ability to detect, encode, and manipulate information in as many optical degrees of freedom as possible. Today, these applications are increasingly being enhanced or enabled by both multimode engineering and nonlinearity. Here, we provide a brief overview of multimode nonlinear photonics, focusing primarily on spatiotemporal nonlinear wave propagation and, in particular, on promising future directions and routes to applications. We conclude with an overview of emerging processes and methodologies that will enable complex, coherent nonlinear photonic devices with many degrees of freedom.
    Free, publicly-accessible full text available January 1, 2023
  3. Abstract: Deep-learning models have become pervasive tools in science and engineering. However, their energy requirements now increasingly limit their scalability [1]. Deep-learning accelerators [2–9] aim to perform deep learning energy-efficiently, usually targeting the inference phase and often by exploiting physical substrates beyond conventional electronics. Approaches so far [10–22] have been unable to apply the backpropagation algorithm to train unconventional hardware in situ. The advantages of backpropagation have made it the de facto training method for large-scale neural networks, so this deficiency constitutes a major impediment. Here we introduce a hybrid in situ–in silico algorithm, called physics-aware training, that applies backpropagation to train controllable physical systems. Just as deep learning realizes computations with deep neural networks made from layers of mathematical functions, our approach allows us to train deep physical neural networks made from layers of controllable physical systems, even when the physical layers lack any mathematical isomorphism to conventional artificial neural network layers. To demonstrate the universality of our approach, we train diverse physical neural networks based on optics, mechanics and electronics to experimentally perform audio and image classification tasks. Physics-aware training combines the scalability of backpropagation with the automatic mitigation of imperfections and noise achievable with in situ algorithms. Physical neural networks have the potential to perform machine learning faster and more energy-efficiently than conventional electronic processors and, more broadly, can endow physical systems with automatically designed physical functionalities, for example, for robotics [23–26], materials [27–29] and smart sensors [30–32].
    Free, publicly-accessible full text available January 27, 2023
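The hybrid in situ–in silico idea described above can be sketched numerically: run the forward pass through the noisy, imperfect physical system, but backpropagate gradients through a differentiable digital model of it. The following is a minimal toy version under stated assumptions; `f_physical`, `f_model`, and every constant are hypothetical stand-ins, not the paper's optical, mechanical, or electronic systems.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_physical(x, theta):
    """Stand-in for real hardware: the intended transform plus noise.
    (Illustrative formula only.)"""
    return np.tanh(theta * x) + 0.02 * rng.standard_normal(x.shape)

def f_model(x, theta):
    """Differentiable digital model of the physical forward pass."""
    return np.tanh(theta * x)

def dtheta_model(x, theta, upstream):
    """Gradient of the loss w.r.t. theta, backpropagated through the
    digital model (chain rule for tanh)."""
    return np.mean(upstream * x * (1.0 - np.tanh(theta * x) ** 2))

# Fit the physical layer to a target input-output map.
x = np.linspace(-1.0, 1.0, 32)
target = np.tanh(0.7 * x)          # realizable at theta = 0.7
theta, lr = 0.1, 0.5
for _ in range(200):
    y = f_physical(x, theta)       # forward pass on the "hardware"
    upstream = 2.0 * (y - target)  # dLoss/dy for mean-squared error
    theta -= lr * dtheta_model(x, theta, upstream)  # backward pass in silico
print(f"learned theta = {theta:.2f}")
```

Because the error signal is measured on the physical output, systematic imperfections and noise are averaged over during training rather than assumed away, which is the mitigation property the abstract highlights.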
  4. Free, publicly-accessible full text available January 1, 2023