In many real-world applications, fully differentiable RNNs such as LSTMs and GRUs have been widely deployed to solve time series learning tasks. These networks train via Backpropagation Through Time, which can work well in practice but involves a biologically unrealistic unrolling of the network in time for gradient updates, is computationally expensive, and can be hard to tune. A second paradigm, Reservoir Computing, keeps the recurrent weight matrix fixed and random. Here, we propose a novel hybrid network, which we call the Hybrid Backpropagation Parallel Echo State Network (HBP-ESN), which combines the effectiveness of the random temporal features learned by reservoirs with the readout power of a deep neural network with batch normalization. We demonstrate that our new network outperforms LSTMs and GRUs, including multi-layer "deep" versions of these networks, on two complex real-world multi-dimensional time series datasets: gesture recognition using skeleton keypoints from ChaLearn, and the DEAP dataset for emotion recognition from EEG measurements. We also show that the inclusion of a novel meta-ring structure, which we call HBP-ESN M-Ring, achieves performance similar to a single large reservoir while decreasing the required memory by an order of magnitude. We thus offer this new hybrid reservoir deep learning paradigm as a new alternative direction …
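The hybrid idea can be sketched compactly: a fixed random reservoir turns each input sequence into a state vector, and only a small batch-normalized MLP readout is trained with backpropagation. The sketch below is an illustrative assumption rather than the authors' HBP-ESN implementation (it omits the parallel and meta-ring structure); the function and class names, sizes, and hyperparameters are hypothetical.

```python
# Hedged sketch: fixed random echo state reservoir + trainable deep readout.
# Not the HBP-ESN code; names and hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9, input_scale=0.5):
    """Fixed random input and recurrent weights (never trained)."""
    W_in = rng.uniform(-input_scale, input_scale, (n_res, n_in))
    W = rng.normal(size=(n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo state scaling
    return W_in, W

def run_reservoir(u, W_in, W, leak=0.3):
    """u: (T, n_in) sequence -> final leaky-integrated reservoir state (n_res,)."""
    x = np.zeros(W.shape[0])
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
    return x

class DeepReadout(nn.Module):
    """Trainable part: an MLP readout with batch normalization."""
    def __init__(self, n_res, n_hidden, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_res, n_hidden), nn.BatchNorm1d(n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_classes),
        )
    def forward(self, states):          # states: (batch, n_res)
        return self.net(states)

# Usage: encode sequences with the fixed reservoir, then train only the
# readout by ordinary backprop (e.g. cross-entropy loss + Adam).
W_in, W = make_reservoir(n_in=3, n_res=200)
seqs = rng.normal(size=(8, 50, 3))                      # 8 toy sequences
states = torch.tensor(np.stack([run_reservoir(s, W_in, W) for s in seqs]),
                      dtype=torch.float32)
logits = DeepReadout(200, 64, 5)(states)                # (8, 5) class scores
```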
Hierarchical architectures in reservoir computing systems
Abstract: Reservoir computing (RC) offers efficient temporal data processing with a low training cost by separating recurrent neural networks into a fixed network with recurrent connections and a trainable linear network. The quality of the fixed network, called the reservoir, is the most important factor determining the performance of the RC system. In this paper, we investigate the influence of a hierarchical reservoir structure on the properties of the reservoir and the performance of the RC system. Analogous to deep neural networks, stacking sub-reservoirs in series is an efficient way to enhance the nonlinearity of the data transformation to high-dimensional space and to expand the diversity of temporal information captured by the reservoir. These deep reservoir systems offer better performance than simply increasing the size of the reservoir or the number of sub-reservoirs. Low-frequency components are mainly captured by the sub-reservoirs in the later stages of the deep reservoir structure, similar to the observation that more abstract information can be extracted by layers in the later stages of deep neural networks. When the total size of the reservoir is fixed, the trade-off between the number of sub-reservoirs and the size of each sub-reservoir needs to be carefully considered, due to the degraded …
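As a rough illustration of stacking sub-reservoirs in series, the sketch below chains small sub-reservoirs so that each one is driven by the state of the previous one, and concatenates all sub-reservoir states for a single trainable linear readout. This is a generic deep-ESN sketch under assumed parameters, not the implementation studied in the paper.

```python
# Hedged sketch of a serial (deep) reservoir: each sub-reservoir is driven by
# the previous sub-reservoir's state. Sizes and constants are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def sub_reservoir(n_in, n_res, rho=0.9, input_scale=0.5):
    W_in = rng.uniform(-input_scale, input_scale, (n_res, n_in))
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))      # set spectral radius
    return W_in, W

def run_deep_reservoir(u, layers, leak=0.3):
    """u: (T, n_in) -> (T, sum of sub-reservoir sizes) concatenated states."""
    states = [np.zeros(W.shape[0]) for _, W in layers]
    collected = []
    for u_t in u:
        drive = u_t
        for i, (W_in, W) in enumerate(layers):
            states[i] = (1 - leak) * states[i] + leak * np.tanh(W_in @ drive + W @ states[i])
            drive = states[i]            # the next sub-reservoir sees this state
        collected.append(np.concatenate(states))
    return np.array(collected)

# Three 100-node sub-reservoirs in series instead of one 300-node reservoir.
layers = [sub_reservoir(2, 100)] + [sub_reservoir(100, 100) for _ in range(2)]
X = run_deep_reservoir(rng.normal(size=(200, 2)), layers)
# Only the readout is trained, e.g. ridge regression from X to the targets.
```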
- Publication Date:
- NSF-PAR ID: 10329451
- Journal Name: Neuromorphic Computing and Engineering
- Volume: 1
- Issue: 1
- Page Range or eLocation-ID: 014006
- ISSN: 2634-4386
- Sponsoring Org: National Science Foundation
More Like this
- Obeid, Iyad; Selesnick (Ed.): Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is great interest in developing real-time EEG monitoring tools to improve patient care quality and efficiency [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and less than 1 false alarm (FA) per 24 hours [3]. Some commercial tools have recently claimed to reach such performance levels, including the Olympic Brainz Monitor [4] and Persyst 14 [5]. In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time, or online, seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must always be causal and have minimum latency, which is often defined by domain experts. The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1 or P1) processes linear frequency cepstral coefficient (LFCC) [6] features from each EEG …
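For a concrete sense of the causality and latency constraints mentioned above, here is a purely illustrative streaming loop, not the cited system: a pretrained scoring function sees only a buffer of past feature frames, so decisions never depend on future samples and the latency is bounded by the buffer length. The frame step, window length, threshold, and score_fn are hypothetical.

```python
# Illustrative causal streaming loop (assumed interface, not the cited system).
import numpy as np
from collections import deque

FRAME_SEC = 0.1        # assumed feature frame step in seconds
WINDOW_FRAMES = 10     # decision uses only the last WINDOW_FRAMES frames

def stream_detect(feature_frames, score_fn, threshold=0.8):
    """feature_frames: iterable of (n_channels, n_features) arrays (e.g. LFCC-like).
    score_fn: pretrained model mapping a (WINDOW_FRAMES, C, F) buffer to a
    seizure probability. Yields (time_sec, score) whenever the threshold is hit."""
    history = deque(maxlen=WINDOW_FRAMES)
    for t, frame in enumerate(feature_frames):
        history.append(frame)                      # causal: past frames only
        if len(history) < WINDOW_FRAMES:
            continue                               # still filling the buffer
        p_seizure = float(score_fn(np.stack(history)))
        if p_seizure > threshold:
            yield t * FRAME_SEC, p_seizure
```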
- The electrical signals triggering the heart's contraction are governed by non-linear processes that can produce complex irregular activity, especially during or preceding the onset of cardiac arrhythmias. Forecasts of cardiac voltage time series in such conditions could allow new opportunities for intervention and control, but would require efficient computation of highly accurate predictions. Although machine-learning (ML) approaches hold promise for delivering such results, non-linear time-series forecasting poses significant challenges. In this manuscript, we study the performance of two recurrent neural network (RNN) approaches along with echo state networks (ESNs) from the reservoir computing (RC) paradigm in predicting cardiac voltage data in terms of accuracy, efficiency, and robustness. We show that these ML time-series prediction methods can forecast synthetic and experimental cardiac action potentials for at least 15–20 beats with a high degree of accuracy, with ESNs typically two orders of magnitude faster than RNN approaches for the same network size.
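The ESN forecasting workflow referenced here can be sketched in a few lines: train a linear readout by ridge regression for one-step-ahead prediction under teacher forcing, then run the network in closed loop, feeding each prediction back as the next input. The sketch below uses a synthetic stand-in signal and assumed hyperparameters; it is not the manuscript's code.

```python
# Hedged sketch: one-step ESN training by ridge regression, then closed-loop
# forecasting. The signal and hyperparameters are stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_res, rho, leak, ridge = 300, 0.95, 0.5, 1e-6

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

def step(x, u):
    return (1 - leak) * x + leak * np.tanh(W_in @ np.array([u]) + W @ x)

# Teacher-forced pass: the state after seeing y[t] is trained to predict y[t+1].
y = np.sin(np.linspace(0, 60, 3000))       # stand-in for a cardiac voltage trace
x, states = np.zeros(n_res), []
for u in y[:-1]:
    x = step(x, u)
    states.append(x)
S = np.array(states)
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y[1:])

# Closed loop: feed each prediction back as the next input to forecast ahead.
u, preds = y[-1], []
for _ in range(500):
    x = step(x, u)
    u = float(x @ W_out)
    preds.append(u)
```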
- Table of Contents: Foreword by the CI 2016 Workshop Chairs; Foreword by the CI 2016 Steering Committee; List of Organizing Committee; List of Registered Participants; Acknowledgement of Sponsors; Hackathon and Workshop Agenda; Hackathon Summary; Invited talks (abstracts and links to presentations); Proceedings: 34 short research papers.
  Papers:
  1. BAYESIAN MODELS FOR CLIMATE RECONSTRUCTION FROM POLLEN RECORDS (Lasse Holmström, Liisa Ilvonen, Heikki Seppä, Siim Veski)
  2. ON INFORMATION CRITERIA FOR DYNAMIC SPATIO-TEMPORAL CLUSTERING (Ethan D. Schaeffer, Jeremy M. Testa, Yulia R. Gel, Vyacheslav Lyubchich)
  3. DETECTING MULTIVARIATE BIOSPHERE EXTREMES (Yanira Guanche García, Erik Rodner, Milan Flach, Sebastian Sippel, Miguel Mahecha, Joachim Denzler)
  4. SPATIO-TEMPORAL GENERATIVE MODELS FOR RAINFALL OVER INDIA (Adway Mitra)
  5. A NONPARAMETRIC COPULA BASED BIAS CORRECTION METHOD FOR STATISTICAL DOWNSCALING (Yi Li, Adam Ding, Jennifer Dy)
  6. DETECTING AND PREDICTING BEAUTIFUL SUNSETS USING SOCIAL MEDIA DATA (Emma Pierson)
  7. OCEANTEA: EXPLORING OCEAN-DERIVED CLIMATE DATA USING MICROSERVICES (Arne N. Johanson, Sascha Flögel, Wolf-Christian Dullo, Wilhelm Hasselbring)
  8. IMPROVED ANALYSIS OF EARTH SYSTEM MODELS AND OBSERVATIONS USING SIMPLE CLIMATE MODELS (Balu Nadiga, Nathan …)
- With the success of deep neural networks (DNNs), many recent works have focused on developing hardware accelerators for power- and resource-limited embedded systems via model compression techniques such as quantization, pruning, and low-rank approximation. However, almost all existing DNN structures are fixed after deployment, lacking a runtime-adaptive structure that can adapt to dynamic hardware resources, power budgets, throughput requirements, and workloads. Correspondingly, there is no runtime-adaptive hardware platform to support dynamic DNN structures. To address this problem, we first propose a dynamic channel-adaptive deep neural network (CA-DNN) which can adjust the involved convolution channels (i.e., model size and computing load) at run-time (i.e., at the inference stage, without retraining) to dynamically trade off between power, speed, computing load, and accuracy. Further, we utilize a knowledge distillation method to optimize the model and quantize it to 8 bits and 16 bits, respectively, for hardware-friendly mapping. We test the proposed model on the CIFAR-10 and ImageNet datasets using ResNet. Compared with an individual model of the same size, our CA-DNN achieves better accuracy. Moreover, as far as we know, we are the first to propose a Processing-in-Memory accelerator for such adaptive neural network structures based on Spin Orbit …
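The runtime channel-adaptation idea can be illustrated with a single convolution layer that stores one full weight tensor and, at inference time, uses only a fraction of its output channels. This is a hypothetical sketch of the general concept (essentially a slimmable convolution), not the authors' CA-DNN or its Processing-in-Memory mapping; AdaptiveConv2d and the width argument are invented names.

```python
# Hypothetical sketch of runtime channel adaptation (not the CA-DNN code).
import torch
import torch.nn.functional as F

class AdaptiveConv2d(torch.nn.Module):
    """Stores full weights; 'width' selects how many output channels are active."""
    def __init__(self, c_in, c_out, k):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(c_out, c_in, k, k) * 0.01)
        self.bias = torch.nn.Parameter(torch.zeros(c_out))

    def forward(self, x, width=1.0):
        c = max(1, int(self.weight.shape[0] * width))   # active channel count
        return F.conv2d(x, self.weight[:c], self.bias[:c], padding=1)

x = torch.randn(1, 3, 32, 32)
layer = AdaptiveConv2d(3, 64, 3)
full = layer(x, width=1.0)    # full model: all 64 channels, best accuracy
small = layer(x, width=0.25)  # low-power mode: 16 channels, less compute
# In a full network, later layers would also slice their input channels to match.
```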