Associative memory is a widespread self-learning mechanism in biological organisms that enables the nervous system to remember the relationship between two concurrent events. Rebuilding associative memory at the behavioral level not only reveals a way of designing a brain-like self-learning neuromorphic system but also offers a means of understanding the learning mechanisms of the nervous system. In this paper, associative memory learning at the behavioral level is realized by successfully associating concurrent visual and auditory information (the pronunciations and images of digits). The task is achieved by associating large-scale artificial neural networks (ANNs) with each other rather than relating multiple analog signals, so that the information carried and preprocessed by these ANNs can be associated. A new type of neuron, the signal intensity encoding neuron (SIEN), is designed to encode the output data of the ANNs into the magnitude and frequency of analog spiking signals. These spiking signals are then correlated by an associative neural network implemented with a three-dimensional (3-D) memristor array. Furthermore, a novel memristor weight updating scheme eliminates the selector devices of traditional memristor cells, which otherwise limit the design area. With the SIENs, the 3-D memristive synapses, and the proposed weight updating scheme, simulation results demonstrate that the proposed associative memory learning method and its circuit implementation successfully associate the pronunciations and images of digits, mimicking human-like associative memory learning behavior.
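As a rough illustration of the SIEN idea, the sketch below encodes a scalar ANN output into a spike train whose amplitude and firing rate both grow with the input intensity. It is hypothetical: the function name, the [0, 1] input range, the window length, and the Bernoulli spike placement are assumptions about one way such encoding could look, not the paper's circuit.

```python
import numpy as np

def sien_encode(value, t_window=100, v_max=1.0, f_max=0.2):
    """Encode a scalar ANN output (0..1) as a spike train over t_window steps.

    Spike amplitude scales linearly with `value`; the firing probability
    per step (a proxy for frequency) scales with `value` as well.
    """
    value = float(np.clip(value, 0.0, 1.0))
    amplitude = v_max * value                 # spike magnitude encodes intensity
    rate = f_max * value                      # expected spikes per time step
    fires = np.random.rand(t_window) < rate   # Bernoulli spike placement
    return amplitude * fires.astype(float)    # analog spike train

# Example: a confident digit classification (0.9) yields a dense, tall train.
train = sien_encode(0.9)
print(f"{int((train > 0).sum())} spikes, peak amplitude {train.max():.2f}")
```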
Columnar Learning Networks for Multisensory Spatiotemporal Learning
Network features found in the brain may help implement more efficient and robust neural networks. Spiking neural networks (SNNs) process spikes in the spatiotemporal domain and can offer better energy efficiency than deep neural networks. However, most SNN implementations rely on simple point neurons that neglect rich neuronal and dendritic dynamics. Herein, a bio-inspired columnar learning network (CLN) structure is proposed that employs feedforward, lateral, and feedback connections to perform robust classification with sparse data. CLN is inspired by the mammalian neocortex, which comprises cortical columns, each containing multiple minicolumns formed by interacting pyramidal neurons. A column continuously processes spatiotemporal signals from its sensor while learning, through sensorimotor interaction, spatial and temporal correlations between features in different regions of an object as the sensor moves. CLN can be implemented using memristor crossbars with a local learning rule, spike timing-dependent plasticity (STDP), which can be natively obtained in second-order memristors. CLN allows inputs from multiple sensors to be processed simultaneously by different columns, resulting in higher classification accuracy and better noise tolerance. Analysis of networks implemented on memristor crossbars shows that the system can operate at very low power and high throughput, with high accuracy and robustness to noise.
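The local STDP rule named above can be made concrete with a standard pair-based update, where the weight change decays exponentially with the pre/post spike-time difference. This is a minimal sketch under common textbook conventions; the function name and constants are assumptions, not values from the paper.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau_plus)    # pre before post: LTP
    else:
        w -= a_minus * np.exp(dt / tau_minus)   # post before pre: LTD
    return float(np.clip(w, w_min, w_max))

# Example: a pre spike at t=10 followed by a post spike at t=15 strengthens w.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
```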
- PAR ID: 10381618
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Advanced Intelligent Systems
- Volume: 4
- Issue: 11
- ISSN: 2640-4567
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Neuromorphic hardware, designed to mimic the neural structure of the human brain, offers an energy-efficient platform for implementing machine-learning models in the form of Spiking Neural Networks (SNNs). Achieving efficient SNN execution on this hardware requires careful consideration of various objectives, such as optimizing utilization of individual neuromorphic cores and minimizing inter-core communication. Unlike previous approaches that overlooked the architecture of the neuromorphic core when clustering the SNN into smaller networks, our approach uses architecture-aware algorithms to ensure that the resulting clusters can be effectively mapped to the core. We base our approach on a crossbar architecture for each neuromorphic core. We start with a basic architecture where neurons can only be mapped to the columns of the crossbar. Our technique partitions the SNN into clusters of neurons and synapses, ensuring that each cluster fits within the crossbar's confines, and when multiple clusters are allocated to a single crossbar, we maximize resource utilization by efficiently reusing crossbar resources. We then expand this technique to accommodate an enhanced architecture that allows neurons to be mapped not only to the crossbar's columns but also to its rows, with the aim of further optimizing utilization. To evaluate the performance of these techniques, assuming a multi-core neuromorphic architecture, we assess factors such as the number of crossbars used and the average crossbar utilization. Our evaluation includes both synthetically generated SNNs and spiking versions of well-known machine-learning models: LeNet, AlexNet, DenseNet, and ResNet. We also investigate how the structure of the SNN impacts solution quality and discuss approaches to improve it.
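As a loose illustration of the clustering step (not the paper's actual algorithm), the greedy sketch below packs neurons into clusters that each fit an R×C crossbar in the basic column-mapped setting. The function name and the `fanin` mapping are assumptions; it also assumes every single neuron's fan-in fits within the crossbar rows.

```python
def greedy_cluster(neurons, fanin, rows, cols):
    """Greedily pack neurons into clusters that each fit a rows x cols crossbar.

    A cluster stays feasible while its neurons fit the crossbar columns and
    the union of their presynaptic sources fits the rows.
    """
    clusters, current, inputs = [], [], set()
    for n in neurons:                       # e.g. in topological order
        new_inputs = inputs | set(fanin[n])
        if len(current) < cols and len(new_inputs) <= rows:
            current.append(n)               # neuron occupies one column
            inputs = new_inputs             # its sources occupy rows
        else:
            if current:
                clusters.append(current)    # close the filled cluster
            current, inputs = [n], set(fanin[n])
    if current:
        clusters.append(current)
    return clusters

# Toy example: 4 neurons on a 3x2 crossbar (3 input rows, 2 neuron columns).
fanin = {0: [10, 11], 1: [10, 12], 2: [13], 3: [13, 14]}
print(greedy_cluster([0, 1, 2, 3], fanin, rows=3, cols=2))  # [[0, 1], [2, 3]]
```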
-
We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the major open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Θ(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1−δ, and should stop firing after at most 2t rounds with probability 1−δ, for some input parameter δ∈(0,1). Our key result is a construction of a neural timer with O(log log 1/δ) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Ω(min{log log 1/δ, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring a small overhead w.r.t. the number of neurons and computation time.
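To make the Θ(log t) deterministic bound concrete, the toy simulation below tracks the timer state as a binary countdown counter of roughly log2(t) bits, each of which could in principle be realized by a constant number of threshold gates. This is an illustrative sketch of why logarithmic state suffices, not the paper's gate-level construction.

```python
import math

def neural_timer(t, spikes):
    """Toy O(log t)-state timer: a binary countdown counter.

    `spikes` is the sequence of input firings of x; y fires for the
    t rounds following each input spike, as in the neural timer problem.
    """
    bits = max(1, math.ceil(math.log2(t + 1)))  # state width ~ log2(t)
    counter, out = 0, []
    for x in spikes:
        out.append(counter > 0)       # y fires while the countdown is positive
        counter = max(0, counter - 1)
        if x:                         # input spike: reload the counter to t
            counter = t
        assert counter < 2 ** bits    # state always fits in ~log2(t) bits
    return out

# x fires in round 0; y fires in the subsequent t = 5 rounds (rounds 1..5).
print(neural_timer(5, [1, 0, 0, 0, 0, 0, 0]))
```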
-
Time-to-first-spike (TTFS) encoded spiking neural networks (SNNs), implemented using memristive crossbar arrays (MCAs), achieve higher inference speed and energy efficiency than artificial neural networks (ANNs) and rate-encoded SNNs. However, memristive crossbar arrays are vulnerable to conductance variations in the embedded memristor cells. These degrade the performance of TTFS-encoded SNNs, namely their classification accuracy, with adverse impact on the yield of manufactured chips. To combat this yield loss, we propose a post-manufacture testing and tuning framework for these SNNs. In the testing phase, a timing-encoded signature of the SNN, which is statistically correlated with the SNN's performance, is extracted. In the tuning phase, this signature is mapped to optimal values of the tuning knobs (gain parameters), one parameter per layer, using a trained regressor, allowing very fast tuning (about 150 ms). To further reduce the tuning overhead, we rank-order hidden-layer neurons by their criticality and show that adding gain programmability to only 50% of the neurons is sufficient for performance recovery. Experiments show that the proposed framework can improve yield by up to 34% and the average accuracy of memristive SNNs by up to 9%.
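The tuning phase can be pictured as a one-shot regression from the timing signature to per-layer gain knobs. The sketch below is a hypothetical illustration: the signature layout, the toy data, and the choice of a plain linear least-squares fit as the "trained regressor" are all assumptions, not the paper's implementation.

```python
import numpy as np

def fit_gain_regressor(signatures, gains):
    """Fit a linear map from timing signatures to per-layer gain settings
    (least squares stands in for the trained regressor in the paper)."""
    X = np.hstack([signatures, np.ones((len(signatures), 1))])  # add bias term
    W, *_ = np.linalg.lstsq(X, gains, rcond=None)
    return W

def predict_gains(W, signature):
    """Map one chip's timing signature to its per-layer gain knobs."""
    x = np.append(np.asarray(signature, dtype=float), 1.0)
    return x @ W

# Toy data: 20 chips, 8-dim timing signatures, one gain knob per layer (3 layers).
rng = np.random.default_rng(0)
sigs = rng.normal(size=(20, 8))
gains = rng.normal(loc=1.0, scale=0.1, size=(20, 3))  # ideal per-layer gains
W = fit_gain_regressor(sigs, gains)
print(predict_gains(W, sigs[0]))  # fast tuning: one matrix-vector product
```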