

Search for: All records

Award ID contains: 2123862


  1. Abstract The learning and recognition of object features from unregulated input has been a longstanding challenge for artificial intelligence systems. Brains, on the other hand, are adept at learning stable sensory representations given noisy observations, a capacity mediated by a cascade of signal conditioning steps informed by domain knowledge. The olfactory system, in particular, solves a source separation and denoising problem compounded by concentration variability, environmental interference, and unpredictably correlated sensor affinities using a plastic network that requires statistically well-behaved input. We present a data-blind neuromorphic signal conditioning strategy, based on the biological system architecture, that normalizes and quantizes analog data into spike-phase representations, thereby transforming uncontrolled sensory input into a regular form with minimal information loss. Normalized input is delivered to a column of spiking principal neurons via heterogeneous synaptic weights; this gain diversification strategy regularizes neuronal utilization, yoking total activity to the network’s operating range and rendering internal representations robust to uncontrolled open-set stimulus variance. To dynamically optimize resource utilization while balancing activity regularization and resolution, we supplement this mechanism with a data-aware calibration strategy in which the range and density of the quantization weights adapt to accumulated input statistics. 
    Free, publicly-accessible full text available December 1, 2026
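The normalization and spike-phase quantization described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, level count, and cycle length are invented for illustration. It shows the two steps the abstract names: divisive normalization to factor out concentration variability, followed by quantization of each channel into a phase bin within one oscillatory cycle, with stronger inputs firing earlier.

```python
import numpy as np

def spike_phase_encode(x, n_levels=16, cycle_ms=25.0):
    """Illustrative sketch: normalize an analog sensor vector and
    quantize it into spike phases within one oscillatory cycle.
    Stronger (normalized) inputs fire earlier in the cycle."""
    x = np.asarray(x, dtype=float)
    # Divisive normalization: factor out overall concentration/gain.
    norm = x / (x.sum() + 1e-12)
    # Quantize each normalized value into one of n_levels phase bins.
    levels = np.round(norm / (norm.max() + 1e-12) * (n_levels - 1))
    # Map level 0 -> end of cycle, top level -> phase 0 (earliest spike).
    return (1.0 - levels / (n_levels - 1)) * cycle_ms

phases = spike_phase_encode([0.2, 1.5, 0.7, 3.0])
```

Because the normalization is divisive, scaling the whole input vector (i.e., changing overall concentration) leaves the phase code unchanged, which is the concentration-invariance property the abstract describes.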
  2. Abstract High-bandwidth applications, from multi-gigabit communication and high-performance computing to radar signal processing, demand ever-increasing processing speeds. However, they face limitations in signal sampling and computation due to hardware and power constraints. In the microwave regime, where operating frequencies exceed the fastest clock rates, direct sampling becomes difficult, prompting interest in neuromorphic analog computing systems. We present the first demonstration of direct broadband frequency domain computing using an integrated circuit that replaces traditional analog and digital interfaces. It features a Microwave Neural Network (MNN) that operates on signals spanning tens of gigahertz, yet is reprogrammed with slow, 150 MBit/sec control bitstreams. By leveraging significant nonlinearity in coupled microwave oscillators, features learned from a wide bandwidth are encoded in a comb-like spectrum spanning only a few gigahertz, enabling easy inference. We find that the MNN can search for bit sequences in arbitrary, ultra-broadband 10 GBit/sec digital data, demonstrating suitability for high-speed wireline communication. Notably, it can emulate high-level digital functions without custom on-chip circuits, potentially replacing power-hungry sequential logic architectures. Its ability to track frequency changes over long capture times also allows for determining flight trajectories from radar returns. Furthermore, it serves as an accelerator for radio-frequency machine learning, capable of accurately classifying various encoding schemes used in wireless communication. The MNN achieves true, reconfigurable broadband computation, which has not yet been demonstrated by classical analog modalities, quantum reservoir computers using superconducting circuits, or photonic tensor cores, and avoids the inefficiencies of electro-optic transduction. 
Its sub-wavelength footprint in a Complementary Metal-Oxide-Semiconductor process and sub-200 milliwatt power consumption enable seamless integration as a general-purpose analog neural processor in microwave and digital signal processing chips. 
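The bit-sequence search idea can be sketched in software with a toy stand-in for the MNN: a small bank of randomly coupled nonlinear "oscillators" (here a discrete-time tanh map, not real microwave dynamics) driven by a bit stream, whose time-averaged state feeds a trained linear readout that flags sequences containing a target pattern. All constants, names, and the readout method are invented for illustration and make no claim about the hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
N_OSC = 16
W = 0.3 * rng.standard_normal((N_OSC, N_OSC))   # fixed random coupling
W_IN = rng.standard_normal(N_OSC)               # fixed input weights

def features(bits, steps_per_bit=10):
    """Drive the coupled nonlinear network with a bit stream and
    summarize its trajectory (a crude stand-in for reading out the
    comb-like spectrum)."""
    state = np.zeros(N_OSC)
    trace = []
    for b in bits:
        for _ in range(steps_per_bit):
            state = np.tanh(W @ state + W_IN * (2 * int(b) - 1))
            trace.append(state.copy())
    return np.mean(trace, axis=0)

def contains_pattern(bits, pattern="101"):
    return pattern in "".join(str(int(b)) for b in bits)

# Train a least-squares linear readout to flag the pattern "101".
X, y = [], []
for _ in range(400):
    bits = rng.integers(0, 2, size=8)
    X.append(features(bits))
    y.append(float(contains_pattern(bits)))
X = np.array(X)
yv = np.array(y)
Xb = np.hstack([X, np.ones((len(X), 1))])       # bias column
w = np.linalg.lstsq(Xb, yv, rcond=None)[0]
train_acc = float(((Xb @ w > 0.5) == (yv > 0.5)).mean())
```

The design point this mirrors is that the nonlinear dynamics do the feature extraction, so only a slow, low-dimensional linear readout needs reprogramming.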
  3. In their Comment, Dennler et al. [1] submit that they have discovered limitations affecting some of the conclusions drawn in our 2020 paper, ‘Rapid online learning and robust recall in a neuromorphic olfactory circuit’ [2]. Specifically, they assert (1) that the public dataset we used suffers from sensor drift and a non-randomized measurement protocol, (2) that our neuromorphic external plexiform layer (EPL) network is limited in its ability to generalize over repeated presentations of an odourant, and (3) that our EPL network results can be performance matched by using a more computationally efficient distance measure. Although they are correct in their description of the limitations of that public dataset [3], they do not acknowledge in their first two assertions how our utilization of those data sidestepped these limitations. Their third claim arises from flaws in the method used to generate their distance measure. We respond below to each of these three claims in turn. 
  4. The goal of odor source separation and identification from real-world data presents a challenging problem. Both individual odors of potential interest and multisource odor scenes constitute linear combinations of analytes present at different concentrations. The mixing of these analytes can exert nonlinear and even nonmonotonic effects on cross-responsive chemosensors, effectively occluding diagnostic activity patterns across the array. Neuromorphic algorithms, inspired by specific computational strategies of the mammalian olfactory system, have been trained to rapidly learn and reconstruct arbitrary odor source signatures in the presence of background interference. However, such networks perform best when tuned to the statistics of well-behaved inputs, normalized and predictable in their activity distributions. Deployment of chemosensor arrays in the wild exposes these networks to disruptive effects that exceed these tolerances. To address the problems inherent to chemosensory signal conditioning and representation learning, the olfactory bulb deploys an array of strategies: (1) shunting inhibition in the glomerular layer implements divisive normalization, contributing to concentration-invariant representations; (2) feedforward gain diversification (synaptic weight heterogeneity) regularizes spiking activity in the external plexiform layer (mitral and granule cells), enabling the network to handle unregulated inputs; (3) gamma-band oscillations segment activity into packets, enabling a spike phase code and iterative denoising; (4) excitatory and inhibitory spike timing dependent learning rules induce hierarchical attraction basins, enabling the network to map its highly complex inputs to regions of a lower dimensional manifold; (5) neurogenesis in the granule cell layer enables lifelong learning and prevents order effects (regularizing the learned synaptic weight distribution over the span of training). 
Here, we integrate these motifs into a single neuromorphic model, bringing together prior OB-inspired model architectures. In a series of simulation experiments including real-world data from a chemosensor array, we demonstrate the network’s ability to learn and detect complex odorants in variable environments despite unpredictable noise distributions. 
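Strategy (2) above, feedforward gain diversification, can be sketched numerically. In this toy model (names and constants are invented for illustration), a column of threshold neurons receives a scalar input scaled by per-neuron gains: with uniform gains the whole column switches on at once, whereas log-spaced heterogeneous gains recruit neurons gradually, so population activity varies smoothly over a wide, unregulated input range.

```python
import numpy as np

def active_fraction(x, gains, theta=1.0):
    """Fraction of neurons in a column whose gain-scaled drive exceeds
    a firing threshold, for scalar input intensity x."""
    return float(np.mean(gains * x > theta))

rng = np.random.default_rng(1)
n = 1000
uniform_gains = np.full(n, 1.0)
# Diversified (log-uniform) feedforward gains spanning two decades.
diverse_gains = np.exp(rng.uniform(np.log(0.1), np.log(10.0), size=n))

intensities = [0.5, 2.0, 8.0, 32.0]
resp_uniform = [active_fraction(x, uniform_gains) for x in intensities]
resp_diverse = [active_fraction(x, diverse_gains) for x in intensities]
```

The uniform column produces an all-or-nothing response across this intensity range, while the diversified column's activity rises gradually, keeping total spiking within a usable operating range as the abstract describes.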
  5. The rapidly increasing size of deep-learning models has renewed interest in alternatives to digital-electronic computers as a means to dramatically reduce the energy cost of running state-of-the-art neural networks. Optical matrix-vector multipliers are best suited to performing computations with very large operands, which suggests that large Transformer models could be a good target for them. In this paper, we investigate, through a combination of simulations and experiments on prototype optical hardware, the feasibility and potential energy benefits of running Transformer models on future optical accelerators that perform matrix-vector multiplication. We use simulations, with noise models validated by small-scale optical experiments, to show that optical accelerators for matrix-vector multiplication should be able to accurately run a typical Transformer architecture model for language processing. We demonstrate that optical accelerators can achieve the same (or better) perplexity as digital-electronic processors at 8-bit precision, provided that the optical hardware uses sufficiently many photons per inference, which translates directly to a requirement on optical energy per inference. We studied numerically how the requirement on optical energy per inference changes as a function of the Transformer width $$d$$ and found that the optical energy per multiply-accumulate (MAC) scales approximately as $$\frac{1}{d}$$, giving an asymptotic advantage over digital systems. We also analyze the total system energy costs for optical accelerators running Transformers, including both optical and electronic costs, as a function of model size. 
We predict that well-engineered, large-scale optical hardware should be able to achieve a $$100 \times$$ energy-efficiency advantage over current digital-electronic processors in running some of the largest current Transformer models, and if both the models and the optical hardware are scaled to the quadrillion-parameter regime, optical accelerators could have a $$>8,000\times$$ energy-efficiency advantage. Under plausible assumptions about future improvements to electronics and Transformer quantization techniques (5× cheaper memory access, double the digital-to-analog conversion efficiency, and 4-bit precision), we estimate that the energy advantage for optical processors versus electronic processors operating at 300 fJ/MAC could grow to $$>100,000\times$$. 
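The $$\frac{1}{d}$$ scaling can be checked with back-of-envelope arithmetic: a $$d \times d$$ matrix-vector product performs $$d^2$$ MACs, but if the photon budget is fixed per output element, total photons grow only as $$d$$, so energy per MAC falls as $$\frac{1}{d}$$. The constants below (photon energy at telecom wavelength, per-output photon count) are hypothetical placeholders, not figures from the paper.

```python
# Back-of-envelope check of the ~1/d scaling of optical energy per MAC.
# Assumes a fixed photon budget per output element of a d x d
# matrix-vector product; both constants below are hypothetical.
E_PHOTON_J = 1.3e-19        # energy of a ~1550 nm photon, in joules
PHOTONS_PER_OUTPUT = 1000   # assumed photons needed per output element

def optical_energy_per_mac(d):
    macs = d * d                        # one d x d matrix-vector product
    photons = PHOTONS_PER_OUTPUT * d    # budget scales with outputs, not MACs
    return photons * E_PHOTON_J / macs  # hence energy per MAC ~ 1/d

ratio = optical_energy_per_mac(1024) / optical_energy_per_mac(4096)
```

Quadrupling the width from 1024 to 4096 cuts the optical energy per MAC by exactly the same factor of four, which is the asymptotic advantage over digital systems (whose per-MAC energy is roughly width-independent).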