In the era of IoT and smart systems, the massive amounts of data generated by various IoT and smart devices are often sent directly to cloud infrastructure for processing, analysis, and storage. While handling this big data, conventional cloud infrastructure encounters many challenges, e.g., scarce bandwidth, high latency, real-time constraints, high power consumption, and privacy issues. Edge-centric computing is emerging as a synergistic solution to these issues by enabling data to be processed and analyzed closer to its source, at the network's edge. This in turn allows real-time, in-situ data analytics and processing, which is imperative for many real-world IoT and smart systems, such as smart cars. Since edge computing is still in its infancy, innovative solutions, models, and techniques are needed to support real-time, in-situ data processing and analysis on edge computing platforms. In this work, we introduce a novel and efficient FPGA-HLS-based hardware accelerator for a PCA+SVM model for real-time processing and analysis on edge computing platforms, inspired by our previous work on PCA+SVM models for edge computing applications. The combination of principal component analysis (PCA) and support vector machines (SVM) has been shown to yield high classification accuracy in many fields. Machine learning techniques such as SVM can be applied to many edge tasks, e.g., anomaly detection and health monitoring, while dimensionality reduction techniques such as PCA reduce the data size, which is vital for memory-constrained edge devices and platforms. Furthermore, our previous work demonstrated that FPGAs' traits, including parallel processing capability, low latency, and stable throughput regardless of workload, make them well suited for real-time processing in edge computing applications. Our proposed FPGA-HLS-based PCA+SVM hardware IP achieves up to 254x speedup over its embedded software counterpart while meeting the small-area and low-power requirements of edge computing applications. Our experimental results demonstrate the potential of FPGA-based architectures to support real-time processing for edge computing applications.
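To make the pipeline concrete, the sketch below reproduces the PCA+SVM combination in software using scikit-learn. This is a stand-in for, not a reproduction of, the paper's FPGA-HLS IP; the dataset, component count, and kernel choice are illustrative assumptions.

```python
# Minimal software sketch of a PCA+SVM pipeline like the one described
# above. scikit-learn stands in for the FPGA-HLS hardware IP; the digits
# dataset, 16 components, and linear kernel are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA shrinks each sample before classification -- the property that
# makes the combination attractive on memory-constrained edge devices.
model = make_pipeline(StandardScaler(), PCA(n_components=16), SVC(kernel="linear"))
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```

In hardware, both stages reduce largely to dot products, which is what makes the combination amenable to the kind of parallel, low-latency FPGA implementation the abstract describes.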
Two Watts is all you need: enabling in-detector real-time machine learning for neutrino telescopes via edge computing
The use of machine learning techniques has significantly increased the physics discovery potential of neutrino telescopes. In the coming years, upgrades of existing detectors and new telescopes with novel experimental hardware are expected, yielding more statistics as well as more complex data signals. This calls for a corresponding upgrade on the software side to handle the more complex data efficiently. Specifically, we seek low-power, fast software methods that achieve real-time signal processing, since current machine learning methods are too expensive to deploy in the resource-constrained regions where these experiments are located. We present a first attempt and proof of concept for deploying machine learning methods in-detector for water/ice neutrino telescopes via quantization and deployment on Google Edge Tensor Processing Units (TPUs). We design a recursive neural network with a residual convolutional embedding and adapt a quantization process to deploy the algorithm on a Google Edge TPU. This algorithm achieves reconstruction accuracy similar to traditional GPU-based machine learning solutions while requiring only as much power as CPU-based regression solutions, combining the advantages of high accuracy and low power and enabling real-time in-detector machine learning even in the most power-restricted environments.
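The deployment path the abstract outlines, quantizing a trained network and compiling it for the Edge TPU, matches TensorFlow Lite's standard full-integer flow. A minimal sketch under that assumption follows; the tiny stand-in model and the random calibration data are placeholders, not the paper's recursive network or detector data.

```python
# Sketch of full-integer post-training quantization for a Google Edge TPU
# using the standard TensorFlow Lite flow. The tiny model and random
# calibration samples below are placeholders for the paper's network/data.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([          # stand-in for the paper's network
    tf.keras.layers.Input(shape=(128, 8)),
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3),
])

def representative_dataset():
    # Real input samples would let the converter calibrate activation
    # ranges for int8; random data is used here only to keep this runnable.
    for _ in range(200):
        yield [np.random.rand(1, 128, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
# The .tflite file is then passed through the edgetpu_compiler CLI
# before it can run on the Edge TPU.
```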
- Award ID(s): 2239795
- PAR ID: 10586425
- Publisher / Repository: IOPScience
- Date Published:
- Journal Name: Journal of Cosmology and Astroparticle Physics
- ISSN: 1475-7516
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
With the recent advances in both machine learning and embedded systems research, the demand to deploy computational models for real-time execution on edge devices has increased substantially. Without deploying computational models on edge devices, the frequent transmission of sensor data to the cloud results in rapid battery drain due to the energy consumption of wireless data transmission. This rapid power dissipation considerably reduces the battery lifetime of the system, jeopardizing the real-world utility of smart devices. It is well established that for difficult machine learning tasks, models with higher performance often require more computation power and are thus not power-efficient choices for deployment on edge devices. However, the trade-offs between performance and power consumption are not well studied. While numerous methods (e.g., model compression) have been developed to obtain an optimal model, they focus on improving the efficiency of a single model. In an entirely new direction, we introduce an effective method to find a combination of multiple models that is optimal in terms of power efficiency and performance, by solving an optimization problem in which both performance and power consumption are taken into account. Experimental results demonstrate that on the ImageNet dataset, we can achieve a 20% energy reduction with only a 0.3% accuracy drop compared to Squeeze-and-Excitation Networks. Compared to a pruned convolutional neural network for human activity recognition, our proposed policy achieves 1.3% higher accuracy while consuming 1.7% less energy.
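A minimal sketch of the core idea, under our own simplified formulation rather than the paper's: treat each candidate model as an (accuracy, energy) point and search randomized two-model mixtures for the lowest expected energy that still meets an accuracy floor. All numbers are invented.

```python
# Hedged sketch of combining multiple models for power efficiency: run
# model A with probability p and model B otherwise, and pick the mixture
# with the lowest expected energy that meets an accuracy floor. The
# models and numbers below are made up; the paper's actual optimization
# problem may be formulated differently.
from itertools import combinations

# (name, accuracy, energy per inference in mJ) -- illustrative values only.
models = [("small", 0.70, 1.0), ("medium", 0.76, 2.5), ("large", 0.80, 6.0)]

def best_mixture(accuracy_floor, steps=100):
    best = None
    for (na, aa, ea), (nb, ab, eb) in combinations(models, 2):
        for i in range(steps + 1):
            p = i / steps
            acc = p * aa + (1 - p) * ab     # expected accuracy of the mix
            energy = p * ea + (1 - p) * eb  # expected energy of the mix
            if acc >= accuracy_floor and (best is None or energy < best[0]):
                best = (energy, acc, na, nb, p)
    return best

energy, acc, a, b, p = best_mixture(accuracy_floor=0.78)
print(f"run {a} w.p. {p:.2f}, else {b}: accuracy {acc:.3f}, energy {energy:.2f} mJ")
```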
Ever-growing edge applications often require short processing latency and high energy efficiency to meet strict timing and power budgets. In this work, we propose that a compact long short-term memory (LSTM) model can approximate conventional acausal algorithms with reduced latency and improved efficiency for real-time causal prediction, especially for neural signal processing in closed-loop feedback applications. We design an LSTM inference accelerator that takes advantage of fine-grained parallelism and pipelined feedforward and recurrent updates. We also propose a bit-sparse quantization method that reduces circuit area and power consumption by replacing multipliers with bit-shift operators. We explore different combinations of pruning and quantization methods for energy-efficient LSTM inference on datasets collected from electroencephalogram (EEG) and calcium image processing applications. Evaluation results show that our proposed LSTM inference accelerator achieves 1.19 GOPS/mW energy efficiency. The LSTM accelerator with 2-sbit/16-bit sparse quantization and 60% sparsity reduces circuit area and power consumption by 54.1% and 56.3%, respectively, compared with a 16-bit baseline implementation.
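To illustrate why bit-sparse quantization removes the multipliers, here is a small sketch assuming a greedy power-of-two decomposition; the paper's exact format (e.g., its 2-sbit encoding) may differ in detail.

```python
# Bit-sparse (power-of-two) quantization sketch: approximate each weight
# by at most n_bits signed powers of two, so a multiply becomes a few
# shifts and adds. The greedy decomposition is an illustrative assumption.
def bit_sparse_quantize(w, n_bits=2):
    """Greedily approximate integer w as a sum of signed powers of two."""
    terms, residual = [], w
    for _ in range(n_bits):
        if residual == 0:
            break
        sign = 1 if residual > 0 else -1
        # Largest power of two not exceeding the remaining magnitude.
        shift = abs(residual).bit_length() - 1
        terms.append((sign, shift))
        residual -= sign * (1 << shift)
    return terms

def shift_multiply(x, terms):
    """Multiply using only shifts and adds -- no hardware multiplier."""
    return sum(sign * (x << shift) for sign, shift in terms)

w = 22                                  # integer (fixed-point) weight
terms = bit_sparse_quantize(w)          # 22 ~ 16 + 4 -> [(1, 4), (1, 2)]
print(terms, shift_multiply(3, terms))  # 3 * 20 = 60 approximates 3 * 22
```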
As the machine learning and systems communities strive for higher energy efficiency through custom deep neural network (DNN) accelerators, varied precision or quantization levels, and model compression techniques, there is a need for design space exploration frameworks that incorporate quantization-aware processing elements into the accelerator design space while providing accurate and fast power, performance, and area models. In this work, we present QUIDAM, a highly parameterized quantization-aware DNN accelerator and model co-exploration framework. Our framework can facilitate future research on design space exploration of DNN accelerators for various design choices such as bit precision, processing element type, scratchpad sizes of processing elements, global buffer size, total number of processing elements, and DNN configurations. Our results show that different bit precisions and processing element types lead to significant differences in performance per area and energy. Specifically, our framework identifies a wide range of design points where performance per area and energy vary by more than 5× and 35×, respectively. With the proposed framework, we show that lightweight processing elements achieve on-par accuracy and up to 5.7× improvement in performance per area and energy compared to the best INT16-based implementation. Finally, due to the efficiency of the pre-characterized power, performance, and area models, QUIDAM can speed up the design exploration process by 3-4 orders of magnitude, as it removes the need for expensive synthesis and characterization of each design.
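The orders-of-magnitude speedup comes from sweeping pre-characterized analytical models instead of synthesizing each design point. The sketch below shows the general shape of such a loop; the cost formulas and parameter grids are invented stand-ins, not QUIDAM's actual models.

```python
# Design-space exploration sketch in the spirit of QUIDAM: evaluate every
# (PE type, PE count, buffer size) point with cheap analytical models.
# All constants below are invented placeholders for fitted PPA models.
from itertools import product

PE_TYPES = {            # (relative area, relative energy) per PE -- made up
    "int16": (1.00, 1.00),
    "int8":  (0.35, 0.40),
    "light": (0.20, 0.25),  # hypothetical lightweight processing element
}

def evaluate(pe_type, n_pes, buffer_kb, macs=1e9):
    area_pe, energy_pe = PE_TYPES[pe_type]
    area = n_pes * area_pe + 0.02 * buffer_kb              # toy area model
    energy = (macs / 1e9) * energy_pe + 0.001 * buffer_kb  # toy energy model
    latency = macs / (n_pes * 1e6)         # assumes perfectly parallel MACs
    return (1.0 / latency) / area, energy  # (performance per area, energy)

# No synthesis inside the loop -- this is what makes analytical
# exploration orders of magnitude faster than synthesize-and-measure.
space = list(product(PE_TYPES, [64, 256, 1024], [32, 128, 512]))
best = max(space, key=lambda d: evaluate(*d)[0])
print("best (PE type, #PEs, buffer KB):", best, evaluate(*best))
```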
Accessible machine learning algorithms, software, and diagnostic tools for energy-efficient devices and systems are extremely valuable across a broad range of application domains. In scientific domains, real-time near-sensor processing can drastically improve experimental design and accelerate scientific discoveries. To support domain scientists, we have developed hls4ml, an open-source software-hardware codesign workflow to interpret and translate machine learning algorithms for implementation in both FPGA and ASIC technologies. We expand on previous hls4ml work by extending capabilities and techniques towards low-power implementations and increased usability: new Python APIs, quantization-aware pruning, end-to-end FPGA workflows, long pipeline kernels for low power, and new device backends, including an ASIC workflow. Taken together, these and continued efforts in hls4ml will arm a new generation of domain scientists with accessible, efficient, and powerful tools for machine-learning-accelerated discovery.
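For reference, a typical hls4ml conversion resembles the sketch below. The calls follow hls4ml's documented Keras flow, but the model file, FPGA part, and output directory are placeholder choices, and argument names can shift between hls4ml versions.

```python
# Sketch of the hls4ml flow described above: translate a trained Keras
# model into an HLS project for FPGA implementation. File names and the
# target part are placeholders; consult the hls4ml docs for your version.
import hls4ml
from tensorflow import keras

model = keras.models.load_model("my_model.h5")  # hypothetical trained model

# Auto-generate a baseline precision/parallelism configuration.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls_project",
    part="xcu250-figd2104-2L-e",  # example Xilinx part number
)

hls_model.compile()            # C simulation to check against the Keras model
# hls_model.build(csim=False)  # launches the full FPGA synthesis flow
```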