Title: Neural computing with coherent laser networks
Abstract: We show that coherent laser networks (CLNs) exhibit emergent neural computing capabilities. The proposed scheme harnesses the collective behavior of laser networks to store a number of phase patterns as stable fixed points of the governing dynamical equations and to retrieve such patterns through proper excitation conditions, thus exhibiting an associative memory property. We discuss how, despite the large storage capacity of the network, the large overlap between fixed-point patterns effectively limits pattern retrieval to only two images. Next, we show that this restriction can be lifted by using nonreciprocal coupling between lasers, which allows the large storage capacity to be exploited. This work opens new possibilities for neural computation with coherent laser networks as novel analog processors. In addition, the underlying dynamical model suggests a novel energy-based recurrent neural network that handles continuous data, in contrast to Hopfield networks and Boltzmann machines, which are intrinsically binary systems.
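The paper's laser rate equations are not reproduced on this page, but the associative-memory mechanism the abstract describes can be illustrated with a toy phase-oscillator model. The sketch below assumes a Kuramoto-type network whose Hermitian couplings are built from stored phase patterns by a Hebbian outer product; the function names, parameters, and gradient-flow dynamics are illustrative stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_couplings(patterns):
    """Complex Hebbian (outer-product) couplings from stored phase patterns.
    patterns: (P, N) array of phases in [0, 2*pi); returns Hermitian N x N J."""
    Z = np.exp(1j * patterns)                  # phasor form of each pattern
    J = Z.T @ Z.conj() / patterns.shape[1]     # J_ij = sum_p z_i^p conj(z_j^p) / N
    np.fill_diagonal(J, 0.0)
    return J

def retrieve(J, theta0, steps=3000, dt=0.05):
    """Relax phases under d(theta_i)/dt = sum_j Im[J_ij e^{i(theta_j - theta_i)}],
    a gradient flow of the energy E = -(1/2) z^H J z when J is Hermitian."""
    theta = theta0.copy()
    for _ in range(steps):
        drive = J @ np.exp(1j * theta)
        theta = theta + dt * np.imag(drive * np.exp(-1j * theta))
    return theta

# Store two random phase patterns on N coupled oscillators.
N = 64
patterns = rng.uniform(0.0, 2 * np.pi, size=(2, N))
J = hebbian_couplings(patterns)

# Excite the network near pattern 0 and let it settle.
theta = retrieve(J, patterns[0] + 0.3 * rng.standard_normal(N))
overlap = abs(np.mean(np.exp(1j * (theta - patterns[0]))))  # phase-invariant overlap
print(f"overlap with stored pattern: {overlap:.3f}")        # close to 1 on retrieval
```

A nonreciprocal variant would make J non-Hermitian (J_ij != conj(J_ji)), breaking the gradient-flow structure above; per the abstract, this is what lifts the two-pattern retrieval limit, though the paper's specific coupling scheme is not shown here.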
Award ID(s): 2112550
PAR ID: 10422692
Author(s) / Creator(s):
Date Published:
Journal Name: Nanophotonics
Volume: 12
Issue: 5
ISSN: 2192-8614
Page Range / eLocation ID: 883 to 892
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns, whose components can assume continuous-valued phase angles and binary magnitudes, can be stored and retrieved as stable fixed points in the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct two spiking neural networks to approximate the complex algebraic computations in TPAM: a reductionist model with resonate-and-fire neurons, and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
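As a rough illustration of the TPAM mechanism described above, the following sketch (not from the paper) stores sparse complex phasor patterns with a Hebbian outer product and retrieves them with a synchronous phase-projection update gated by a magnitude threshold. All names, sizes, and the threshold value are illustrative assumptions; the paper's threshold rule and capacity analysis are more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def store(patterns):
    """Hebbian outer-product storage of complex phasor patterns (shape P x N)."""
    W = patterns.T @ patterns.conj() / patterns.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def tpam_update(W, z, threshold=0.1):
    """Synchronous update: each unit keeps the phase of its summed input and
    fires at magnitude 1 only if that input exceeds the activation threshold."""
    u = W @ z
    return np.where(np.abs(u) > threshold, np.exp(1j * np.angle(u)), 0)

# Sparse phasor patterns: binary magnitudes, continuous phases.
N, P, K = 200, 10, 40                  # units, patterns, active units per pattern
patterns = np.zeros((P, N), dtype=complex)
for p in range(P):
    idx = rng.choice(N, size=K, replace=False)
    patterns[p, idx] = np.exp(1j * rng.uniform(0, 2 * np.pi, size=K))

W = store(patterns)

# Retrieve pattern 0 from a cue with a quarter of its active units deleted.
z = patterns[0].copy()
z[rng.choice(np.flatnonzero(np.abs(z) > 0), size=10, replace=False)] = 0
for _ in range(20):
    z = tpam_update(W, z)

overlap = abs(np.vdot(patterns[0], z)) / K
print(f"overlap with stored pattern: {overlap:.3f}")  # near 1 on successful retrieval
```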
  2. We introduce a novel approach to endowing neural networks with emergent, long-term, large-scale memory. Distinct from strategies that connect neural networks to external memory banks via intricately crafted controllers and hand-designed attentional mechanisms, our memory is internal, distributed, co-located alongside computation, and implicitly addressed, while being drastically simpler than prior efforts. By architecting networks with multigrid structure and connectivity, while distributing memory cells alongside computation throughout this topology, we observe the emergence of coherent memory subsystems. Our hierarchical spatial organization, parameterized convolutionally, permits efficient instantiation of large-capacity memories, while the multigrid topology provides short internal routing pathways, allowing convolutional networks to efficiently approximate the behavior of fully connected networks. Such networks have an implicit capacity for internal attention; augmented with memory, they learn to read and write specific memory locations in a dynamic, data-dependent manner. We demonstrate these capabilities on exploration and mapping tasks, where our network is able to self-organize and retain long-term memory for trajectories of thousands of time steps. On tasks decoupled from any notion of spatial geometry (sorting, associative recall, and question answering), our design functions as a truly generic memory and yields excellent results.
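The multigrid routing idea can be caricatured in a few lines of NumPy: a pyramid of grids exchanges state with its coarser and finer neighbors at each step, so a value written at the finest scale reaches the coarsest in a logarithmic number of hops. This is a toy sketch under strong simplifications (per-pixel mixing instead of learned convolutions, no LSTM-style gating), not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def downsample(x):
    """2x average pooling over the two spatial axes of an (H, W, C) grid."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def upsample(x):
    """Nearest-neighbor 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

class MultigridMemoryCell:
    """Toy multigrid layer: a pyramid of feature grids, each step mixing a
    grid's own state with its coarser and finer neighbors in scale."""

    def __init__(self, levels, channels):
        self.levels = levels
        self.mix = [rng.standard_normal((3 * channels, channels)) / np.sqrt(3 * channels)
                    for _ in range(levels)]

    def step(self, grids):
        out = []
        for l, x in enumerate(grids):
            coarser = upsample(grids[l + 1]) if l + 1 < self.levels else np.zeros_like(x)
            finer = downsample(grids[l - 1]) if l > 0 else np.zeros_like(x)
            inp = np.concatenate([x, coarser, finer], axis=-1)  # cross-scale routing
            out.append(np.tanh(inp @ self.mix[l]))              # state/memory update
        return out

# A pyramid of grids (16x16, 8x8, 4x4) with 8 channels of persistent state.
cell = MultigridMemoryCell(levels=3, channels=8)
grids = [np.zeros((16 >> l, 16 >> l, 8)) for l in range(3)]
grids[0][3, 5, 0] = 1.0                 # write one input at the finest scale
for _ in range(4):
    grids = cell.step(grids)
print(f"coarsest-grid activity: {np.abs(grids[2]).max():.4f}")  # nonzero: the write propagated
```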
  3. The ACAS X family of aircraft collision avoidance systems uses large numeric lookup tables to make decisions. Recent work used a deep neural network to approximate and compress a collision avoidance table, and simulations showed that the neural network performance was comparable to the original table. Consequently, neural network representations are being explored for use on small aircraft with limited storage capacity. However, the black-box nature of deep neural networks raises safety concerns because simulation results are not exhaustive. This work takes steps towards addressing these concerns by applying formal methods to analyze the behavior of collision avoidance neural networks both in isolation and in a closed-loop system. We evaluate our approach on a specific set of collision avoidance networks and show that even though the networks are not always locally robust, their closed-loop behavior ensures that they will not reach an unsafe (collision) state. 
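The verification tools applied in such work are solver-based; as a simpler flavor of the same idea, the sketch below bounds a toy ReLU network's outputs over an input box using interval bound propagation, a coarse but sound over-approximation. The network, input box, and advisory labels here are invented for illustration and are not the ACAS Xu networks.

```python
import numpy as np

rng = np.random.default_rng(3)

def interval_bounds(layers, lo, hi):
    """Sound output bounds for a ReLU network over the input box [lo, hi],
    via interval bound propagation (coarser than SMT solving, but sound)."""
    for i, (W, b) in enumerate(layers):
        center, radius = (lo + hi) / 2, (hi - lo) / 2
        c = W @ center + b
        r = np.abs(W) @ radius
        lo, hi = c - r, c + r
        if i < len(layers) - 1:                  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Toy stand-in for an advisory network: 5 state inputs -> 5 advisory scores.
layers = [(rng.standard_normal((32, 5)), rng.standard_normal(32)),
          (rng.standard_normal((5, 32)), rng.standard_normal(5))]

# A small box around a nominal encounter state (normalized units).
x = np.array([0.2, -0.1, 0.0, 0.5, 0.3])
lo, hi = interval_bounds(layers, x - 0.05, x + 0.05)

# If every other advisory's upper bound is below advisory 0's lower bound,
# the network provably issues advisory 0 everywhere in the box.
print("provably constant advisory on box:", all(hi[a] < lo[0] for a in range(1, 5)))
```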
  4. Abstract: As computing resource demands continue to escalate in the face of big data, cloud connectivity, and the internet of things, it has become imperative to develop new low-power, scalable architectures. Neuromorphic photonics, or photonic neural networks, have become a feasible solution for the physical implementation of efficient algorithms directly on-chip. This feasibility stems primarily from the linear nature of light and the scalability of silicon photonics, specifically leveraging the wide-scale complementary metal-oxide-semiconductor manufacturing infrastructure used to fabricate microelectronics chips. Current neuromorphic photonic implementations stem from two paradigms: wavelength coherent and incoherent. Here, we introduce a novel architecture that supports coherent and incoherent operation to increase the capability and capacity of photonic neural networks, with a dramatic reduction in footprint compared to previous demonstrations. As a proof of principle, we experimentally demonstrate simple addition and subtraction operations on a foundry-fabricated silicon photonic chip. Additionally, we experimentally validate an on-chip network that predicts the 2-bit logic gates AND, OR, and XOR to accuracies of 96.8%, 99%, and 98.5%, respectively. This architecture is compatible with highly wavelength-parallel sources, enabling massively scalable photonic neural networks.
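The difference between the two paradigms can be seen in a toy calculation: coherent combination adds complex fields, so a pi phase shift implements a negative weight (subtraction), while incoherent combination can only add non-negative powers. The snippet below illustrates this physics and is not a model of the reported chip.

```python
import numpy as np

def coherent_sum(amplitudes, phases):
    """Coherent combination: complex fields interfere, so a pi phase shift
    encodes a negative weight, enabling subtraction as well as addition."""
    return np.abs(np.sum(amplitudes * np.exp(1j * phases))) ** 2  # detected power

def incoherent_sum(powers):
    """Incoherent combination: only optical powers add (non-negative values)."""
    return np.sum(powers)

a = np.array([1.0, 1.0])
print(coherent_sum(a, np.array([0.0, 0.0])))    # in phase: |1 + 1|^2 = 4 (addition)
print(coherent_sum(a, np.array([0.0, np.pi])))  # out of phase: |1 - 1|^2 ~ 0 (subtraction)
print(incoherent_sum(a ** 2))                   # powers add: 1 + 1 = 2
```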
  5. Abstract: Spontaneous neural activity coherently relays information across the brain. Several efforts have been made to understand how spontaneous neural activity evolves at the macro-scale level as measured by resting-state functional magnetic resonance imaging (rsfMRI). Previous studies observe the global patterns and flow of information in rsfMRI using methods such as sliding windows or temporal lags. However, to our knowledge, no studies have examined spatial propagation patterns evolving with time across multiple overlapping 4D networks. Here, we propose a novel approach to study how dynamic states of the brain networks spatially propagate, and we evaluate whether these propagating states contain information relevant to mental illness. We implement a lagged windowed correlation approach to capture voxel-wise, network-specific spatial propagation patterns in dynamic states. Results show systematic spatial state changes over time, which we confirmed are replicable across multiple scan sessions using Human Connectome Project data. We observe networks varying in propagation speed; for example, the default mode network (DMN) propagates slowly and remains positively correlated with the blood oxygenation level-dependent (BOLD) signal for 6-8 s, whereas the visual network propagates much more quickly. We also show that summaries of network-specific propagative patterns are linked to schizophrenia. More specifically, we find significant group differences in multiple dynamic parameters between patients with schizophrenia and controls within four large-scale networks: the default mode, temporal lobe, subcortical, and visual networks. Individuals with schizophrenia spend more time in certain propagating states. In summary, this study introduces a promising general approach to exploring the spatial propagation in dynamic states of brain networks and their associated complexity, and reveals novel insights into the neurobiology of schizophrenia.
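A minimal sketch of a lagged windowed correlation is shown below, assuming a single network time course correlated against voxel time courses over sliding windows at a range of lags. The window length, lag range, and the synthetic delayed voxel are invented for illustration; the paper's voxel-wise, network-specific pipeline on rsfMRI data involves considerably more preprocessing.

```python
import numpy as np

rng = np.random.default_rng(5)

def lagged_windowed_correlation(network_tc, voxel_tc, window, max_lag):
    """Correlate a network time course with each voxel's time course inside
    sliding windows, at every lag in [-max_lag, max_lag].
    Returns an (n_windows, n_voxels, n_lags) array of correlations."""
    T, V = voxel_tc.shape
    lags = range(-max_lag, max_lag + 1)
    n_win = T - window - 2 * max_lag + 1
    out = np.zeros((n_win, V, len(lags)))
    for w in range(n_win):
        start = w + max_lag
        seg = network_tc[start:start + window]
        seg = (seg - seg.mean()) / seg.std()
        for k, lag in enumerate(lags):
            vox = voxel_tc[start + lag:start + lag + window]
            vox = (vox - vox.mean(0)) / vox.std(0)
            out[w, :, k] = seg @ vox / window
    return out

# Synthetic example: 200 time points, 50 voxels, one of which follows the
# network time course delayed by 3 samples.
T, V = 200, 50
net = rng.standard_normal(T)
vox = rng.standard_normal((T, V))
vox[3:, 7] += 2.0 * net[:-3]                  # voxel 7 lags the network
corr = lagged_windowed_correlation(net, vox, window=40, max_lag=5)
best_lag = corr.mean(axis=0)[7].argmax() - 5  # recover the propagation delay
print("estimated lag for voxel 7:", best_lag) # ~ +3 samples
```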