Calcium-modulated supervised spike-timing-dependent plasticity for readout training and sparsification of the liquid state machine
The Liquid State Machine (LSM) is a promising model of recurrent spiking neural networks. It consists of a fixed recurrent network, the reservoir, which projects to a readout layer through plastic readout synapses. Classification performance depends heavily on the training of the readout synapses, which tend to be very dense and contribute significantly to overall network complexity. We present a unifying, biologically inspired calcium-modulated supervised spike-timing-dependent plasticity (STDP) approach to training and sparsifying readout synapses, in which supervised temporal learning is modulated by the postsynaptic firing level, characterized by the postsynaptic calcium concentration. The proposed approach prevents synaptic weight saturation, boosts learning performance, and sparsifies the connectivity between the reservoir and the readout layer. Using the recognition rate of spoken English letters from the TI46 speech corpus as the performance measure, we show that the proposed approach outperforms a baseline supervised STDP mechanism by up to 25% and a competitive non-STDP spike-dependent training algorithm by up to 2.7%. Furthermore, it can prune up to 30% of readout synapses without significant performance degradation.
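To make the core idea concrete, the following is a minimal sketch of one plausible calcium-modulated supervised STDP update. The exponential STDP window, the calcium decay dynamics, the linear modulation function, and all constants (`TAU_PLUS`, `TAU_CA`, `CA_TARGET`, etc.) are illustrative assumptions, not the paper's exact formulation; the sketch only shows how a postsynaptic calcium trace could gate a teacher-signed STDP weight change to limit saturation.

```python
import math

# Illustrative constants (assumptions, not the paper's values).
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression learning rates
TAU_CA = 50.0                      # calcium trace decay time constant (ms)
CA_TARGET = 2.0                    # assumed target calcium level

def decay_calcium(ca, elapsed_ms):
    """Exponentially decay the postsynaptic calcium trace over elapsed_ms."""
    return ca * math.exp(-elapsed_ms / TAU_CA)

def stdp_window(dt):
    """Classic exponential STDP window; dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)    # pre before post: potentiate
    return -A_MINUS * math.exp(dt / TAU_MINUS)      # post before pre: depress

def calcium_modulation(ca):
    """Scale the update down as postsynaptic calcium rises (assumed linear
    form); high firing levels then suppress further potentiation, which is
    one way such modulation could prevent weight saturation."""
    return max(0.0, 1.0 - ca / (2.0 * CA_TARGET))

def update_weight(w, dt, ca, teacher_sign=1.0, w_max=1.0):
    """Supervised update: a teacher signal (+1 for a desired postsynaptic
    response, -1 for an undesired one) gates the sign of the STDP change,
    and the calcium trace modulates its magnitude."""
    dw = teacher_sign * stdp_window(dt) * calcium_modulation(ca)
    return min(max(w + dw, 0.0), w_max)
```

For example, with a low calcium trace a causal spike pair (`dt > 0`) potentiates the synapse, while at a saturated calcium level (`ca >= 2 * CA_TARGET`) the same pairing leaves the weight unchanged.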