Abstract: We show that coherent laser networks (CLNs) exhibit emergent neural computing capabilities. The proposed scheme harnesses the collective behavior of laser networks to store a number of phase patterns as stable fixed points of the governing dynamical equations and to retrieve such patterns through proper excitation conditions, thus exhibiting an associative memory property. We discuss how, despite the large storage capacity of the network, the large overlap between fixed-point patterns effectively limits pattern retrieval to only two images. Next, we show that this restriction can be lifted by using nonreciprocal coupling between lasers, which allows the large storage capacity to be utilized. This work opens new possibilities for neural computation with coherent laser networks as novel analog processors. In addition, the underlying dynamical model suggests a novel energy-based recurrent neural network that handles continuous data, as opposed to Hopfield networks and Boltzmann machines, which are intrinsically binary systems.
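The behavior described in the abstract can be illustrated with a toy phase-oscillator model. The sketch below is not the paper's model: it assumes Kuramoto-like dynamics with a Hebbian-style coupling matrix built from stored phase patterns, and a hypothetical `epsilon` knob that injects a nonreciprocal (asymmetric) perturbation into the couplings.

```python
# Toy phase-oscillator sketch of associative memory in a coupled laser
# network (illustrative only; not the paper's exact model).
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 3                                  # oscillators, stored patterns
xi = rng.uniform(0, 2 * np.pi, size=(P, N))   # phase patterns (made-up data)

# Hebbian-style coupling built from the stored phases; symmetric by construction.
J = sum(np.cos(np.subtract.outer(p, p)) for p in xi) / N

# Hypothetical nonreciprocity knob: an asymmetric perturbation so that J != J.T.
epsilon = 0.2
J = J + epsilon * rng.standard_normal((N, N))

def retrieve(theta0, steps=2000, dt=0.05):
    """Integrate dtheta_i/dt = sum_j J_ij * sin(theta_j - theta_i)."""
    theta = theta0.copy()
    for _ in range(steps):
        diff = np.subtract.outer(theta, theta)    # diff[i, j] = theta_i - theta_j
        theta = theta + dt * np.sum(J * np.sin(-diff), axis=1)
    return theta

# Cue the network with a noisy version of pattern 0 and measure the overlap.
theta = retrieve(xi[0] + 0.3 * rng.standard_normal(N))
overlap = np.abs(np.mean(np.exp(1j * (theta - xi[0]))))  # 1.0 = perfect recall
print(f"overlap with stored pattern: {overlap:.2f}")
```

Starting the integration near a stored pattern should relax onto it (up to a global phase, which the overlap measure is insensitive to), which is the associative-memory behavior the abstract describes.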
This content will become publicly available on April 21, 2026
Physical Considerations in Memory and Information Storage
Information is an important resource. Storing and retrieving information faithfully are major challenges, and many methods have been developed to understand the principles behind robust information processing. In this review, we focus on information storage and retrieval from the perspective of energetics, dynamics, and statistical mechanics. We first review the Hopfield model of associative memory, the classic energy-based model of memory. We then discuss generalizations and physical realizations of the Hopfield model. Finally, we highlight connections to energy-based neural networks used in deep learning. We hope this review inspires new directions in information storage and retrieval in physical systems.
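Since the review centers on the Hopfield model, a minimal sketch of the textbook version may help fix ideas: binary states s_i = ±1, Hebbian weights W = (1/N) Σ_μ ξ^μ (ξ^μ)ᵀ, and asynchronous updates that monotonically lower the energy E(s) = -½ sᵀWs.

```python
# Minimal classic Hopfield network: Hebbian storage, asynchronous recall.
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))   # random binary memories

W = (patterns.T @ patterns) / N               # Hebbian learning rule
np.fill_diagonal(W, 0)                        # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

def recall(s, sweeps=10):
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):          # asynchronous updates
            s[i] = 1 if W[i] @ s >= 0 else -1 # each flip can only lower E
    return s

# Corrupt 15% of the bits of a stored pattern, then retrieve it.
cue = patterns[0] * np.where(rng.random(N) < 0.15, -1, 1)
out = recall(cue)
print("bits recovered:", int(np.sum(out == patterns[0])), "of", N)
print(f"energy: {energy(cue):.1f} (cue) -> {energy(out):.1f} (recalled)")
```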
- Award ID(s): 2317138
- PAR ID: 10595344
- Publisher / Repository: Annual Reviews
- Date Published:
- Journal Name: Annual Review of Physical Chemistry
- Volume: 76
- Issue: 1
- ISSN: 0066-426X
- Page Range / eLocation ID: 471 to 495
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Retrieval-augmented generation (RAG) services are rapidly gaining adoption in enterprise settings as they combine information retrieval systems (e.g., databases) with large language models (LLMs) to enhance response generation and reduce hallucinations. By augmenting an LLM's fixed pre-trained knowledge with real-time information retrieval, RAG enables models to effectively extend their context to large knowledge bases by selectively retrieving only the most relevant information. As a result, RAG provides the effect of dynamic updates to the LLM's knowledge without requiring expensive and time-consuming retraining. While some deployments keep the entire database in memory, RAG services are increasingly shifting toward persistent storage to accommodate ever-growing knowledge bases, enhance utility, and improve cost-efficiency. However, this transition fundamentally reshapes the system's performance profile: empirical analysis reveals that the Search & Retrieval phase emerges as the dominant contributor to end-to-end latency. This phase typically involves (1) running a smaller language model to generate query embeddings, (2) executing similarity and relevance checks over varying data structures, and (3) performing frequent, long-latency accesses to persistent storage. To address this triad of challenges, we propose a metamorphic in-storage accelerator architecture that provides the necessary programmability to support diverse RAG algorithms, dynamic data structures, and varying computational patterns. The architecture also supports in-storage execution of smaller language models for query embedding generation, while final LLM generation is executed on DGX A100 systems. Experimental results show up to 4.3× and 1.5× improvement in end-to-end throughput compared to conventional retrieval pipelines using Xeon CPUs with NVMe storage and A100 GPUs with DRAM, respectively.
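As a concrete picture of the three-step Search & Retrieval phase this abstract describes, here is a minimal sketch in plain Python. All names (`embed`, `INDEX_FILE`, `DOCS_FILE`) are hypothetical, and the brute-force cosine scan merely stands in for the varying similarity structures and in-storage acceleration the paper targets.

```python
# Illustrative Search & Retrieval sketch: (1) embed the query with a small
# model, (2) run a similarity check, (3) fetch passages from storage.
import json
import numpy as np

INDEX_FILE = "embeddings.npy"    # dense passage vectors on persistent storage
DOCS_FILE = "passages.jsonl"     # one JSON object per passage

def embed(query: str, dim: int = 384) -> np.ndarray:
    """Step 1: stand-in for the small embedding language model."""
    rng = np.random.default_rng(abs(hash(query)) % 2**32)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def search(query: str, k: int = 5) -> list[dict]:
    q = embed(query)
    # Step 3: long-latency access to persistent storage (memory-mapped here).
    vecs = np.load(INDEX_FILE, mmap_mode="r")
    # Step 2: brute-force cosine similarity; production systems use ANN indexes.
    scores = (vecs @ q) / np.linalg.norm(vecs, axis=1)
    top = np.argsort(scores)[::-1][:k]
    with open(DOCS_FILE) as f:               # fetch only the winning passages
        docs = [json.loads(line) for line in f]
    return [docs[i] | {"score": float(scores[i])} for i in top]

# The k retrieved passages are then packed into the LLM prompt for generation.
```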
- Abstract: The exponential growth of information stored in data centers and the computational power required for various data-intensive applications, such as deep learning and AI, call for new strategies to improve or move beyond the traditional von Neumann architecture. Recent achievements in information storage and computation in the optical domain, enabling energy-efficient, fast, and high-bandwidth data processing, show great potential for photonics to overcome the von Neumann bottleneck and reduce the energy wasted to Joule heating. Optically readable memories are fundamental in this process, and while light-based storage has traditionally (and commercially) employed free-space optics, recent developments in photonic integrated circuits (PICs) and optical nano-materials have opened the doors to new opportunities on-chip. Photonic memories have yet to rival their electronic digital counterparts in storage density; however, their inherent analog nature and ultrahigh bandwidth make them ideal for unconventional computing strategies. Here, we review emerging nanophotonic devices that possess memory capabilities by elaborating on their tunable mechanisms and evaluating them in terms of scalability and device performance. Moreover, we discuss the progress on large-scale architectures for photonic memory arrays and optical computing primarily based on memory performance.
- Compression and efficient storage of neural network (NN) parameters is critical for applications that run on resource-constrained devices. Despite the significant progress in NN model compression, there has been considerably less investigation into the actual physical storage of NN parameters. Conventionally, model compression and physical storage are decoupled, as digital storage media with error-correcting codes (ECCs) provide robust error-free storage. However, this decoupled approach is inefficient as it ignores the overparameterization present in most NNs and forces the memory device to allocate the same amount of resources to every bit of information regardless of its importance. In this work, we investigate analog memory devices as an alternative to digital media – one that naturally provides a way to add more protection for significant bits, unlike its digital counterpart, but is noisy and may compromise the stored model's performance if used naively. We develop a variety of robust coding strategies for NN weight storage on analog devices, and propose an approach to jointly optimize model compression and memory resource allocation. We then demonstrate the efficacy of our approach on models trained on MNIST, CIFAR-10, and ImageNet datasets for existing compression techniques. Compared to conventional error-free digital storage, our method reduces the memory footprint by up to one order of magnitude without significantly compromising the stored model's accuracy.
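The core idea, giving significant bits more protection on noisy analog cells, can be mocked up in a few lines. The simulation below is purely illustrative: `SIGMA`, `store_bit`, and `roundtrip` are invented here, it assumes simple repetition coding under a Gaussian noise model, and the paper's actual joint compression/coding strategies are more elaborate.

```python
# Toy simulation: noisy analog cells, with more redundancy (repeated cells)
# allocated to the high-order bits of each quantized weight.
import numpy as np

rng = np.random.default_rng(2)
SIGMA = 0.3                       # assumed analog write/read noise (std dev)

def store_bit(bits, copies):
    """Write each bit to `copies` analog cells; read back by averaging."""
    levels = np.repeat(bits[:, None], copies, axis=1).astype(float)
    noisy = levels + SIGMA * rng.standard_normal(levels.shape)
    return (noisy.mean(axis=1) > 0.5).astype(int)

def roundtrip(w, n_bits=8):
    """Quantize weights, store each bit plane with bit-dependent redundancy."""
    q = np.clip(((w + 1) / 2 * (2**n_bits - 1)).round().astype(int),
                0, 2**n_bits - 1)
    out = np.zeros_like(q)
    for b in range(n_bits):                   # b = 0 is the LSB, b = 7 the MSB
        copies = 1 + b                        # more cells for significant bits
        out |= store_bit((q >> b) & 1, copies) << b
    return out / (2**n_bits - 1) * 2 - 1      # dequantize back to [-1, 1]

w = rng.uniform(-1, 1, 10_000)                # stand-in for NN weights
err = np.abs(roundtrip(w) - w)
print(f"mean abs error after noisy storage: {err.mean():.4f}")
```

Averaging over `copies` cells shrinks the effective noise by a factor of sqrt(copies), so errors concentrate in the low-order bits, where they matter least.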
- Mangun, G.R.; Gazzaniga, M.S. (Eds.): The human ability to remember unique experiences from many years ago comes so naturally that we often take it for granted. It depends on three stages: (1) encoding, when new information is initially registered, (2) storage, when encoded information is held in the brain, and (3) retrieval, when stored information is used. Historically, cognitive neuroscience studies of memory have emphasized encoding and retrieval. Yet, the intervening stage may hold the most intrigue, and has become a major research focus in the years since the last edition of this book. Here we describe recent investigations of post-acquisition memory processing in relation to enduring storage. This evidence of memory processing belies the notion that memories stored in the brain are held in stasis, without changing. Various methods for influencing and monitoring brain activity have been applied to study offline memory processing. In particular, memories can be reactivated during sleep and during resting periods, with distinctive physiological correlates. These neural signals shed light on the contribution of hippocampal-neocortical interactions to memory consolidation. Overall, results converge on the notion that memory reactivation is a critical determinant of systems-level consolidation, and thus of future remembering, which in turn facilitates future planning and problem solving.