Search for: All records

Award ID contains: 1639995

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. As a model of recurrent spiking neural networks, the Liquid State Machine (LSM) offers a powerful brain-inspired computing platform for pattern recognition and machine learning applications. Because it operates by processing neural spiking activity, the LSM naturally lends itself to efficient hardware implementation that exploits the typically sparse firing patterns emerging from the recurrent neural network and smartly schedules computational tasks around the firing events that occur at runtime. We explore these opportunities by presenting an LSM processor architecture with integrated on-chip learning and its FPGA implementation. Our LSM processor leverages the sparsity of firing activity to enable efficient event-driven processing and activity-dependent clock gating. Using the spoken English letters adopted from the TI46 [1] speech recognition corpus as a benchmark, we show that the proposed FPGA-based neural processor system is up to 29% more energy efficient than a baseline LSM processor, with little extra hardware overhead.
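The event-driven processing idea in the abstract above can be sketched in a few lines: when firing is sparse, synaptic work is only needed for the neurons that actually receive a spike at a given time step. This is a minimal software illustration, not the paper's hardware architecture; the reservoir size, weight statistics, leak factor, and threshold are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                      # reservoir size (illustrative)
# sparse random recurrent weights: ~10% connectivity (assumed)
W = rng.normal(0.0, 0.5, (N, N)) * (rng.random((N, N)) < 0.1)
v = np.zeros(N)              # membrane potentials
V_TH, LEAK = 1.0, 0.9        # threshold and leak factor (assumed)

def step(input_spikes):
    """Advance the reservoir one time step, doing synaptic work only for
    columns of W whose pre-synaptic neuron fired -- the event-driven shortcut
    that sparse firing activity enables."""
    global v
    active = np.flatnonzero(input_spikes)   # indices of neurons that fired
    v *= LEAK                               # leak applies to all neurons
    if active.size:                         # skip all synaptic work when idle
        v += W[:, active].sum(axis=1)       # accumulate only active columns
    fired = v >= V_TH
    v[fired] = 0.0                          # reset neurons that spiked
    return fired
```

With an all-zero input vector the synaptic accumulation is skipped entirely, which is the software analogue of the activity-dependent clock gating described in the abstract.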
  2. Collision avoidance is a key technology enabling applications such as autonomous vehicles and robots. Various reinforcement learning techniques, such as the popular Q-learning algorithms, have emerged as promising solutions for collision avoidance in robotics. While spiking neural networks (SNNs), the third generation of neural network models, have gained increased interest due to their closer resemblance to biological neural circuits in the brain, the application of SNNs to mobile robot navigation has not been well studied. In the context of reinforcement learning, this paper investigates the potential of biologically motivated spiking neural networks for goal-directed collision avoidance in reasonably complex environments. Unlike the existing additive reward-modulated spike-timing-dependent plasticity learning rule (A-RM-STDP), for the first time we explore a new multiplicative RM-STDP scheme (M-RM-STDP) for the targeted application. Furthermore, we propose a more biologically plausible feed-forward spiking neural network architecture with fine-grained global rewards. Finally, by combining these two techniques we demonstrate a further improved solution to collision avoidance. Our proposed approaches not only completely outperform Q-learning in cases where Q-learning can hardly reach the target without collision, but also significantly outperform a baseline SNN with A-RM-STDP in terms of both success rate and the quality of navigation trajectories.
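The additive-versus-multiplicative distinction above can be sketched generically. The abstract does not give the paper's actual update equations; the sketch below uses one common convention in which multiplicative updates scale potentiation by the remaining headroom (W_MAX - w) and depression by the current weight w, softly bounding weights without hard clipping. The learning rate and weight bound are assumed.

```python
W_MAX = 1.0   # upper weight bound (assumed)
ETA = 0.01    # learning rate (assumed)

def a_rm_stdp(w, reward, trace):
    """Additive reward-modulated STDP: the update is independent of the
    current weight, so a hard clip is needed to keep w in [0, W_MAX]."""
    return min(max(w + ETA * reward * trace, 0.0), W_MAX)

def m_rm_stdp(w, reward, trace):
    """Multiplicative variant: the update is scaled by the current weight,
    so w approaches its bounds asymptotically instead of saturating."""
    if reward * trace >= 0:
        dw = ETA * reward * trace * (W_MAX - w)   # potentiation, shrinks near W_MAX
    else:
        dw = ETA * reward * trace * w             # depression, shrinks near 0
    return w + dw
```

A consequence worth noting: under the multiplicative rule a saturated weight (w = W_MAX) receives zero further potentiation, whereas the additive rule relies entirely on the clip.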
  3. The autonomous navigation of mobile robots in unknown environments is of great interest in mobile robotics. This article discusses a new strategy for navigating to a known target location in an unknown environment using a combination of the “go-to-goal” approach and reinforcement learning with biologically realistic spiking neural networks. While the “go-to-goal” approach by itself can reach a solution in most environments, the added neural reinforcement learning in this work yields a strategy that takes the robot from its starting position to the target location in close to the shortest possible time. To achieve this, we propose a reinforcement learning approach based on spiking neural networks, in which a biologically motivated delayed-reward mechanism using eligibility traces produces a greedy policy that leads the robot to the target in near-minimal time.
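The delayed-reward mechanism with eligibility traces mentioned above follows a standard pattern: STDP events tag a synapse by accumulating into a decaying trace, and the weight only changes when a (possibly much later) reward arrives. A minimal sketch of that bookkeeping, with all constants assumed and none of the paper's specific parameters:

```python
class RewardModulatedSynapse:
    """Eligibility-trace bookkeeping for delayed reward. STDP events do not
    change the weight directly; they mark the synapse as 'eligible', and the
    reward signal later converts accumulated eligibility into a weight change."""

    def __init__(self, w=0.5, decay=0.95, eta=0.1):
        self.w = w          # synaptic weight
        self.e = 0.0        # eligibility trace
        self.decay = decay  # trace decay per time step (assumed)
        self.eta = eta      # learning rate (assumed)

    def on_stdp_event(self, stdp_dw):
        self.e += stdp_dw            # tag the synapse; do not change w yet

    def tick(self):
        self.e *= self.decay         # the tag fades as time passes

    def on_reward(self, r):
        self.w += self.eta * r * self.e   # delayed reward gates the update
```

The decay means that synapses active shortly before the reward are credited more than those active long before it, which is what lets a single terminal reward shape the whole trajectory.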
  4. The Liquid State Machine (LSM) is a promising model of recurrent spiking neural networks. It consists of a fixed recurrent network, the reservoir, which projects to a readout layer through plastic readout synapses. Classification performance depends heavily on the training of the readout synapses, which tend to be very dense and contribute significantly to the overall network complexity. We present a unifying, biologically inspired, calcium-modulated supervised spike-timing-dependent plasticity (STDP) approach to training and sparsifying readout synapses, in which supervised temporal learning is modulated by the post-synaptic firing level as characterized by the post-synaptic calcium concentration. The proposed approach prevents synaptic weight saturation, boosts learning performance, and sparsifies the connectivity between the reservoir and the readout layer. Using the recognition rate of spoken English letters adopted from the TI46 speech corpus as a measure of performance, we demonstrate that the proposed approach outperforms a baseline supervised STDP mechanism by up to 25%, and a competitive non-STDP spike-dependent training algorithm by up to 2.7%. Furthermore, it can prune up to 30% of the readout synapses without significant performance degradation.
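The calcium-modulation idea above can be sketched as a gate on the supervised STDP update: a calcium variable jumps on each post-synaptic spike and decays between spikes, so a strongly firing readout neuron accumulates high calcium and its further potentiation is suppressed, which is one way to prevent weight saturation. This is a generic illustration only; the gating function, decay, and all constants below are assumed, not taken from the paper.

```python
import math

CA_JUMP = 1.0               # calcium increase per post-synaptic spike (assumed)
CA_DECAY = 0.99             # calcium decay per time step (assumed)
CA_MID, CA_SLOPE = 4.0, 1.0 # midpoint and steepness of the gate (assumed)
ETA = 0.01                  # learning rate (assumed)

class CalciumModulatedSynapse:
    def __init__(self, w=0.5):
        self.w = w
        self.ca = 0.0        # proxy for post-synaptic calcium concentration

    def post_spike(self):
        self.ca += CA_JUMP   # each post-synaptic spike raises calcium

    def tick(self):
        self.ca *= CA_DECAY  # calcium decays between spikes

    def supervised_stdp(self, dw):
        # Sigmoidal gate: near 1 at low calcium (quiet neuron, learn freely),
        # near 0 at high calcium (strongly firing neuron, suppress potentiation)
        gate = 1.0 / (1.0 + math.exp(CA_SLOPE * (self.ca - CA_MID)))
        self.w += ETA * gate * dw
```

Under this sketch, synapses onto neurons that already fire strongly receive vanishing updates, which both limits saturation and leaves weakly driven synapses as natural candidates for pruning.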