FA-LAMP: FPGA-Accelerated Learned Approximate Matrix Profile for Time Series Similarity Prediction
With the proliferation of low-cost sensors and the Internet of Things (IoT), the rate at which data is produced far exceeds the compute and storage capabilities of today's infrastructure. Much of this data takes the form of time series, and in response, the last decade has seen increasing interest in the creation of time series archives, along with the development and deployment of novel analysis methods to process the data. The general strategy has been to apply a plurality of similarity search mechanisms to various subsets and subsequences of time series data in order to identify repeated patterns and anomalies; however, the computational demands of these approaches render them incompatible with today's power-constrained embedded CPUs. To address this challenge, we present FA-LAMP, an FPGA-accelerated implementation of the Learned Approximate Matrix Profile (LAMP) algorithm, which predicts the correlation between streaming data sampled in real time and a representative time series dataset used for training. FA-LAMP lends itself as a real-time solution for time series analysis problems such as classification and anomaly detection. We present implementations of FA-LAMP on both edge- and cloud-based prototypes. On edge devices, FA-LAMP integrates accelerated computation as close as possible to IoT sensors, thereby eliminating the need to transmit and store data in the cloud for subsequent analysis. On cloud-based accelerators, FA-LAMP can execute multiple LAMP models on the same board, allowing simultaneous processing of incoming data from multiple data sources across a network. At its core, LAMP employs a Convolutional Neural Network (CNN) for prediction. This work investigates the challenges and limitations of deploying CNNs on FPGAs using the Xilinx Deep Learning Processor Unit (DPU) overlay and the Vitis AI development environment. We expose several technical limitations of the DPU, while providing a mechanism to overcome them by attaching custom hand-optimized IP block accelerators to the architecture. We evaluate FA-LAMP using a low-cost Xilinx Ultra96-V2 FPGA as well as a cloud-based Xilinx Alveo U280 accelerator card, and measure their performance against a prototypical LAMP deployment running on a Raspberry Pi 3, an Edge TPU, a GPU, a desktop CPU, and a server-class CPU. In the edge scenario, the Ultra96-V2 FPGA improved performance and energy consumption compared to the Raspberry Pi; in the cloud scenario, the server CPU and GPU outperformed the Alveo U280 accelerator card, while the desktop CPU achieved comparable performance; however, the Alveo card offered an order of magnitude lower energy consumption than the other four platforms. Our implementation is publicly available at https://github.com/aminiok1/lamp-alveo.
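For intuition about the quantity being predicted: a LAMP model's training target for each incoming window is its matrix profile value, i.e., the highest Pearson correlation between that window and any same-length subsequence of the representative reference series. Below is a minimal brute-force NumPy sketch of that target computation; it illustrates the definition only, not the paper's CNN or FPGA design, and all names are ours.

```python
import numpy as np

def znorm(x):
    """Z-normalize a window (assumes non-constant values, so std > 0)."""
    return (x - x.mean()) / x.std()

def matrix_profile_value(window, reference):
    """Best Pearson correlation between `window` and any same-length
    subsequence of `reference` -- the value a LAMP CNN learns to predict.
    Brute force, O(n*m) per window."""
    m = len(window)
    q = znorm(np.asarray(window, dtype=np.float64))
    best = -1.0
    for i in range(len(reference) - m + 1):
        s = znorm(np.asarray(reference[i:i + m], dtype=np.float64))
        best = max(best, float(q @ s) / m)  # correlation of z-normalized windows
    return best

# Example: score one streaming window against a reference series.
rng = np.random.default_rng(0)
reference = np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.standard_normal(1000)
window = reference[200:300]                 # should correlate near 1.0
print(matrix_profile_value(window, reference))
```

LAMP replaces this per-window scan with a single CNN inference, which is the computation FA-LAMP accelerates in hardware.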
- PAR ID: 10282635
- Journal Name: International Symposium on Field-Programmable Custom Computing Machines
- Page Range / eLocation ID: 40 to 49
- Sponsoring Org: National Science Foundation
More Like this
-
There have been many recent attempts to extend the successes of convolutional neural networks (CNNs) from 2-dimensional (2D) image classification to 3-dimensional (3D) video recognition by exploring 3D CNNs. Considering the emerging growth of the mobile and Internet of Things (IoT) markets, it is essential to investigate the deployment of 3D CNNs on edge devices. Previous works have implemented standard 3D CNNs (C3D) on hardware platforms; however, they have not exploited model compression to accelerate inference. This work proposes a hardware-aware pruning approach that fully adapts to the loop tiling technique of FPGA design and applies it to a novel 3D network called R(2+1)D. Leveraging the powerful ADMM optimization framework, the proposed pruning method simultaneously achieves high accuracy and significant acceleration of computation on the FPGA. With layer-wise pruning rates of up to 10x and negligible accuracy loss, the pruned model is implemented on a Xilinx ZCU102 FPGA board, where it achieves a 2.6x speedup compared with the unpruned version, and a 2.3x speedup and 2.3x power efficiency improvement compared with a state-of-the-art FPGA implementation of C3D.
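To illustrate the idea of hardware-aware, tile-aligned pruning described above: instead of zeroing individual weights, whole blocks matching the FPGA's loop-tiling granularity are kept or dropped. A minimal NumPy sketch under that assumption follows; the paper's actual method uses ADMM-regularized training, so the simple magnitude criterion and all names here are illustrative.

```python
import numpy as np

def tile_aligned_prune(weights, tile=8, keep_ratio=0.25):
    """Zero entire `tile`-wide column blocks of a 2D weight matrix,
    keeping the `keep_ratio` fraction with the largest L2 norm, so the
    surviving non-zeros line up with an FPGA loop-tiling scheme."""
    rows, cols = weights.shape
    assert cols % tile == 0, "pad weights so columns divide evenly into tiles"
    blocks = weights.reshape(rows, cols // tile, tile)
    scores = np.linalg.norm(blocks, axis=(0, 2))    # one score per column block
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[-n_keep:]             # retain the strongest blocks
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    pruned = blocks * mask[None, :, None]           # zero the dropped blocks
    return pruned.reshape(rows, cols)

w = np.random.randn(64, 64)
w_pruned = tile_aligned_prune(w, tile=8, keep_ratio=0.25)
print((w_pruned != 0).mean())  # ~0.25 of weights survive, in aligned tiles
```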
-
Chaos is an interesting phenomenon in nonlinear systems, emerging from their complex and unpredictable behavior. With the escalating use of low-powered edge-compute devices, data security at the edge creates a need for secure communication. The fact that two different chaotic systems, each with its own unique initial conditions, synchronize over time is the basis for implementing chaos in communication. This paper proposes an encryption architecture suitable for communication among on-chip sensors, providing a proof of concept (POC) with security encrypted on the same chip using different chaotic equations. In communication, encryption is typically achieved with the help of microcontrollers or software implementations that consume more power and entail complex hardware implementations. Small IoT devices are expected to operate on low power and are constrained in size; at the same time, they are highly vulnerable to security threats, which elevates the need for low-power, small-footprint hardware-based security. Since the discovery of chaotic equations, they have been used in various encryption applications. The goal of this research is to take the chaotic implementation to the CMOS level with the sensors on the same chip. The hardware co-simulation of the Chua encryption/decryption architecture is demonstrated on an FPGA board. The hardware utilization for the Lorenz, SprottD, and Chua systems on the FPGA is measured with the Xilinx System Generator (XSG) toolbox, which reveals that Lorenz's utilization is approximately 9% lower than Chua's.
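As a generic illustration of chaos-based encryption (not the paper's CMOS/XSG design): a transmitter integrates a chaotic system and uses its trajectory as a keystream, and a receiver running the same equations from the same synchronized state recovers the data. A minimal Python sketch using the classic Lorenz parameters; the byte-derivation step is our own illustrative choice, not a published scheme.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def keystream(state, n_bytes, warmup=1000):
    """Derive a byte keystream from the chaotic trajectory (illustrative;
    a real design would use a proper bit-extraction scheme)."""
    for _ in range(warmup):                  # discard the transient
        state = lorenz_step(state)
    out = []
    for _ in range(n_bytes):
        state = lorenz_step(state)
        out.append(int(abs(state[0]) * 1e6) % 256)
    return bytes(out), state

key = (1.0, 1.0, 1.0)                        # shared initial condition
plaintext = b"sensor reading: 42"
ks, _ = keystream(key, len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
ks2, _ = keystream(key, len(ciphertext))     # receiver regenerates the stream
recovered = bytes(c ^ k for c, k in zip(ciphertext, ks2))
assert recovered == plaintext
```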
-
Object tracking has achieved great advances in the past few years and has been widely applied in vision-based applications. Nowadays, deep convolutional neural networks play an important role in object tracking tasks; however, their enormous model size and massive computation cost have become the main obstacles to deploying such powerful algorithms on low-power, resource-limited embedded systems such as FPGAs. With the popularization of power-sensitive mobile platforms, low-power real-time tracking solutions are strongly required. To address these challenges, we propose a low-power, energy-efficient object tracking FPGA implementation based on a newly proposed binarized depthwise separable deep convolutional neural network. It significantly reduces model size and computation complexity simultaneously by utilizing binarized (i.e., +1 and -1) depthwise separable convolution kernels and our proposed trainable threshold group binarization activation function. It completely converts the dot-product-and-accumulation-based convolution operations into bit-wise XNOR and bit-count operations, while achieving state-of-the-art accuracy. Our proposed binarized depthwise separable model achieves ~57% Intersection over Union (IoU) on the DJI object tracking dataset with a model parameter size of only ~143.9 Kb. We then deploy the model onto the Xilinx PYNQ Z1 board, which has only 4.9 Mb of on-chip RAM. The experimental results show that our FPGA implementation achieves 11.1 frames per second for object tracking while consuming only 2.61 W.
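The XNOR/bit-count substitution above works because, for vectors of +1/-1 values encoded one bit per element (1 for +1, 0 for -1), the dot product equals matches minus mismatches. A small self-contained Python sketch of that identity; the packing convention and names are ours.

```python
def pack_signs(vec):
    """Pack a list of +1/-1 values into an integer bitmask (1 bit per element)."""
    bits = 0
    for i, v in enumerate(vec):
        if v > 0:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, b_bits, n):
    """Dot product of two +1/-1 vectors from their bitmasks:
    XNOR marks matching signs; matches - mismatches = 2*matches - n."""
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

a = [+1, -1, +1, +1, -1, -1, +1, -1]
b = [+1, +1, -1, +1, -1, +1, +1, +1]
assert xnor_popcount_dot(pack_signs(a), pack_signs(b), len(a)) == sum(
    x * y for x, y in zip(a, b)
)
```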
-
A fault-tolerant quantum computer must decode and correct errors faster than they appear to prevent exponential slowdown due to error correction. The Union-Find (UF) decoder is promising, with an average time complexity slightly higher than $O(d^3)$. We report a distributed version of the UF decoder that exploits parallel computing resources for further speedup. Using an FPGA-based implementation, we empirically show that this distributed UF decoder has a sublinear average time complexity with regard to $d$, given $O(d^3)$ parallel computing resources. The decoding time per measurement round decreases as $d$ increases, a first for a quantum error decoder. The implementation employs a scalable architecture called Helios that organizes parallel computing resources into a hybrid tree-grid structure. Using a Xilinx VCU129 FPGA, we successfully implement $d$ up to 21 with an average decoding time of 11.5 ns per measurement round under 0.1% phenomenological noise, and 23.7 ns for $d = 17$ under equivalent circuit-level noise. This performance is significantly faster than any existing decoder implementation. Furthermore, we show that Helios can optimize for resource efficiency by decoding $d = 51$ on a Xilinx VCU129 FPGA with an average latency of 544 ns per measurement round.
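At the heart of the UF decoder is the classic union-find data structure, which merges clusters of syndrome defects as they grow. A minimal sequential Python sketch of that core primitive follows; the FPGA work above distributes this across parallel resources, and the decoder's cluster-growth and peeling logic is omitted here.

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by size --
    the sequential core that the distributed UF decoder parallelizes."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # attach smaller cluster under larger
        self.size[ra] += self.size[rb]

# Merging defect clusters as their boundaries touch:
uf = UnionFind(6)
uf.union(0, 1)
uf.union(2, 3)
uf.union(1, 2)
assert uf.find(0) == uf.find(3)       # vertices 0..3 are now one cluster
```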