In this paper, we propose a novel accuracy-reconfigurable stochastic
computing (ARSC) framework for dynamic reliability and power
management. Unlike existing stochastic computing works,
where the accuracy-versus-power/energy trade-off is carried out at
design time, the new ARSC design can change the accuracy, or
bit-width, of the data at run time, so that it can accommodate
long-term aging effects by slowing the system clock frequency at
the cost of accuracy while maintaining the computing throughput.
We validate the ARSC concept on discrete cosine
transform (DCT) and inverse DCT designs for image
compression/decompression applications, implemented on a
Xilinx Spartan-6 family XC6SLX45 platform. Experimental results show that the new design can easily mitigate
long-term aging-induced effects through the accuracy trade-off while
maintaining the throughput of the whole computing process using simple
frequency scaling. We further show that a one-bit precision loss for the
input data, which translates to a 3.44 dB accuracy loss in terms
of peak signal-to-noise ratio (PSNR) for images, is sufficient to
compensate for the NBTI-induced aging effects over 10 years while maintaining
the pre-aging computing throughput of 7.19 frames per second. At the same
time, we can save 74\% of the power consumption at the cost of a 10.67 dB accuracy loss.
The proposed ARSC computing
framework also allows much more aggressive frequency scaling, which can lead to
order-of-magnitude power savings compared to traditional dynamic
voltage and frequency scaling (DVFS) techniques.
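To make the run-time trade-off concrete, the short Python sketch below (ours, purely illustrative) works out the throughput arithmetic under the common assumption that an n-bit value maps to a stochastic bit-stream of length 2^n, so the cycles per frame scale with 2^n and dropping one bit of precision lets the clock run up to 2x slower at the same frame rate. The 20% aging figure in the example is a hypothetical placeholder, not a number from the paper.

    # Sketch of the ARSC run-time trade-off, assuming a conventional SC
    # representation in which an n-bit value maps to a 2^n-bit stream,
    # so the cycles needed per frame scale with 2^n.

    def max_clock_scaling(bits_before: int, bits_after: int) -> float:
        """Factor by which the clock may be slowed at constant throughput."""
        return 2 ** (bits_before - bits_after)

    def can_compensate(aging_slowdown: float,
                       bits_before: int, bits_after: int) -> bool:
        """True if the bit-width reduction absorbs the aging-induced
        frequency degradation (e.g. NBTI) without losing throughput."""
        return max_clock_scaling(bits_before, bits_after) >= aging_slowdown

    # Example: one bit of input precision dropped (8 -> 7 bits) allows the
    # clock to run up to 2x slower at the same frame rate, more than enough
    # for a hypothetical 20% frequency loss over 10 years of NBTI aging.
    print(can_compensate(aging_slowdown=1.25, bits_before=8, bits_after=7))  # True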
Runtime Long-Term Reliability Management Using Stochastic Computing in Deep Neural Networks
In this paper, we propose a new dynamic reliability technique
using an accuracy-reconfigurable stochastic computing (ARSC) framework
for deep learning computing. Unlike conventional stochastic computing,
which conducts the accuracy-versus-power/energy trade-off at design time, the new ARSC
design can adjust the bit-width of the data at run time.
Hence, ARSC can mitigate long-term aging effects by slowing
the system clock frequency, while maintaining the inference throughput by
reducing the data bit-width at a small cost of accuracy. We show how to
implement the recently proposed counter-based SC multiplication and
bit-width reduction on a layer-wise quantization scheme for convolutional
neural networks (CNNs) with dynamic fixed-point data. We validate an ARSC-based five-layer
convolutional neural network design for the MNIST dataset based on Vivado
HLS with constraints from the Xilinx Zynq-7000 family xc7z045 platform.
Experimental results show that the new ARSC-based DNN can sufficiently compensate
for the NBTI-induced aging effects over 10 years with marginal classification
accuracy loss while maintaining, or even exceeding, the pre-aging computing
throughput. At the same time, the proposed ARSC
computing framework also reduces the active power consumption due to
frequency scaling, which can further improve system reliability due to the
reduced temperature.
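The counter-based SC multiplication itself is taken from prior work, and its exact generator is not reproduced here. As a hedged illustration, the following Python sketch implements the deterministic unary scheme such multipliers build on, where holding one operand's bits steady ("clock division") while repeating the other's whole stream makes a single AND gate compute the exact product; a counter-based design would generate these streams with cheap counters rather than storing them.

    # A minimal sketch of deterministic SC multiplication with unary
    # streams; the clock-division-vs-repetition pairing below is an
    # assumption about the underlying scheme, not the paper's circuit.

    def unary(value: int, length: int) -> list[int]:
        """Unary bit-stream: all 1s grouped at the beginning."""
        return [1] * value + [0] * (length - value)

    def sc_multiply(a: int, b: int, n_bits: int) -> float:
        L = 2 ** n_bits
        sa = [bit for bit in unary(a, L) for _ in range(L)]  # hold each bit L cycles
        sb = unary(b, L) * L                                 # repeat whole stream L times
        ones = sum(x & y for x, y in zip(sa, sb))            # AND gate + counter
        return ones / (L * L)                                # = (a/L) * (b/L) exactly

    print(sc_multiply(3, 5, n_bits=3))  # 0.234375 == (3/8) * (5/8)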
- PAR ID: 10279545
- Journal Name: Proc. Int. Symposium on Quality Electronic Design (ISQED’21)
- Sponsoring Org: National Science Foundation
More Like this
Deep convolutional neural networks (CNNs) have achieved outstanding performance in image recognition over large-scale datasets. However, the pursuit of higher inference accuracy leads to CNN architectures with deeper layers and denser connections, which inevitably makes their hardware implementation demand more and more memory and computational resources. This can be interpreted as a 'CNN power and memory wall'. Recent research efforts have significantly reduced both model size and computational complexity by using low bit-width weights, activations, and gradients, while keeping reasonably good accuracy. In this work, we present different emerging non-volatile magnetic random access memory (MRAM) designs that could be leveraged to implement a 'bit-wise in-memory convolution engine', which can simultaneously store network parameters and compute low bit-width convolutions. Such a computing model leverages the 'in-memory computing' concept to accelerate CNN inference and reduce convolution energy consumption, thanks to its intrinsic logic-in-memory design and reduced data communication.
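As a hedged illustration of the arithmetic such an engine performs, the Python sketch below assumes fully binarized (+1/-1) weights and activations packed into machine words, the extreme low bit-width case; an MRAM array would carry out the XNOR and popcount steps in place rather than in software.

    # Bit-wise dot product for binarized networks, assuming bit 1
    # encodes +1 and bit 0 encodes -1 in an n-bit packed word.

    def binary_dot(w_bits: int, x_bits: int, n: int) -> int:
        """Dot product of two {+1,-1}^n vectors: XNOR then popcount."""
        xnor = ~(w_bits ^ x_bits) & ((1 << n) - 1)
        matches = bin(xnor).count("1")
        return 2 * matches - n  # matches minus mismatches

    # Example: w = (+1,-1,+1,-1) -> 0b1010, x = (+1,+1,-1,-1) -> 0b1100
    print(binary_dot(0b1010, 0b1100, 4))  # 0: two agree, two disagree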
Multiply-accumulate (MAC) operations are common in data processing and machine learning but costly in terms of hardware usage. Stochastic computing (SC) is a promising approach for low-cost hardware design of complex arithmetic operations such as multiplication. Computing with deterministic unary bit-streams (defined as bit-streams with all 1s grouped together at the beginning or end of the stream) has recently been suggested to improve the accuracy of SC. Conventionally, SC designs use multiplexer (MUX) units or OR gates to accumulate data in the stochastic domain. MUX-based addition suffers from scaling of the data, and OR-based addition from inaccuracy. This work proposes a novel technique for MAC operation on unary bit-streams that allows exact, non-scaled addition of multiplication results. By introducing a relative delay between the products, we control the correlation between bit-streams and eliminate the OR-based addition error. We evaluate the accuracy of the proposed technique compared to state-of-the-art MAC designs. After quantization, the proposed technique demonstrates at least a 37% and up to a 100% decrease in mean absolute error for uniformly distributed random input values, compared to traditional OR-based MAC designs. Further, we demonstrate that the proposed technique is practical and evaluate the area, power, and energy of three possible implementations.
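As a hedged illustration of the delay idea, the Python sketch below assumes each product arrives as a unary stream with its 1s grouped at the front; delaying every product past the 1s of the preceding ones makes the 1 positions disjoint, so a plain OR accumulates them exactly and without scaling.

    # Exact OR-based accumulation of unary products via relative delays,
    # valid whenever the sum of the product values fits in the stream.

    def or_accumulate(product_values: list[int], length: int) -> int:
        stream = [0] * length
        delay = 0
        for v in product_values:
            for i in range(v):             # unary stream of v ones,
                stream[delay + i] |= 1     # shifted past earlier products
            delay += v                     # relative delay = running sum
        return sum(stream)                 # popcount = exact, non-scaled sum

    # Three products worth 3, 5 and 2 accumulate to exactly 10 ones.
    print(or_accumulate([3, 5, 2], length=16))  # 10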
Sorting is a fundamental function in many applications, from data processing to database systems. For high performance, hardware-based sorting designs are implemented with conventional binary or emerging stochastic computing (SC) approaches. Binary designs are fast and energy-efficient but costly to implement. SC-based designs, on the other hand, are area- and power-efficient but slow and energy-hungry. Previous studies of hardware-based sorting have therefore faced scalability issues. In this work, we propose a novel scalable, low-cost design for implementing sorting networks. We borrow the concept of SC for its area and power efficiency but use weighted stochastic bit-streams to address the high latency and energy consumption of SC designs. A new lock-and-swap (LAS) unit is proposed to sort weighted bit-streams. The LAS-based sorting network can determine the result of comparing different input values early and then map the inputs to the corresponding outputs based on shorter weighted bit-streams. Experimental results show that the proposed design approach achieves much better hardware scalability than prior work. In particular, as the number of inputs increases, the proposed scheme can reduce energy consumption by about 3.8% - 93% compared to prior binary and SC-based designs.
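The exact LAS circuit is not described here, but a plausible software sketch of its behavior, under the assumption that the inputs are MSB-first weighted (binary-radix) bit-streams, is the following: forward bits unchanged until the first position where the inputs differ, then lock the swap decision for the rest of the streams, which is why the comparison usually resolves after only a few bits.

    # Hypothetical two-input lock-and-swap (LAS) sort on MSB-first
    # weighted bit-streams; our reading of the unit, not the paper's RTL.

    def las_sort2(a_bits: list[int], b_bits: list[int]):
        lo, hi, locked_swap = [], [], None
        for a, b in zip(a_bits, b_bits):
            if locked_swap is None and a != b:
                locked_swap = a > b        # lock on the first differing bit
            swap = locked_swap if locked_swap is not None else False
            lo.append(b if swap else a)
            hi.append(a if swap else b)
        return lo, hi                      # (min stream, max stream)

    # 6 = 0110 and 9 = 1001 resolve at the very first (most significant) bit.
    print(las_sort2([0, 1, 1, 0], [1, 0, 0, 1]))  # ([0,1,1,0], [1,0,0,1])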
Analysis of Joint Scheduling and Power Control for Predictable URLLC in Industrial Wireless Networks
Wireless networks are being applied in various industrial sectors, and they are poised to support mission-critical industrial IoT applications which require ultra-reliable, low-latency communications (URLLC). Ensuring predictable per-packet communication reliability is the basis of predictable URLLC, and scheduling and power control are two basic enablers. Scheduling and power control, however, are subject to challenges such as harsh environments, dynamic channels, and distributed network settings in industrial IoT. Existing solutions are mostly based on heuristic algorithms or asymptotic analysis of network performance, and there is a lack of field-deployable algorithms for ensuring predictable per-packet reliability. To address this gap, we examine the cross-layer design of joint scheduling and power control and analyze the associated challenges. We introduce the Perron–Frobenius theorem to demonstrate that scheduling is a must for ensuring predictable communication reliability, and, by investigating the characteristics of interference matrices, we show that scheduling with close-by links kept silent effectively constructs a set of links whose required reliability is feasible with proper transmission power control. Given that scheduling alone cannot ensure predictable communication reliability while maintaining high throughput and addressing fast-varying channel dynamics, we demonstrate how power control can improve both the reliability at each time instant and the long-term throughput. Based on this analysis, we propose a candidate framework for joint scheduling and power control, and we demonstrate how this framework behaves in guaranteeing per-packet communication reliability in the presence of wireless channel dynamics of different time scales. Collectively, these findings provide insight into the cross-layer design of joint scheduling and power control for ensuring predictable per-packet reliability in the presence of wireless network dynamics and uncertainties.
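As a hedged illustration of the Perron–Frobenius argument, the classical interference-model feasibility test (in the Foschini–Miljanic style, which may differ in detail from the paper's formulation) can be sketched in a few lines of Python: the target SINRs are jointly achievable by some positive power vector only if the spectral radius of the normalized interference matrix stays below 1, and silencing close-by links zeroes entries of that matrix and shrinks the radius.

    import numpy as np

    # Spectral-radius feasibility test under the classical interference
    # model; gains and targets below are hypothetical example values.

    def feasible(G: np.ndarray, gamma: np.ndarray) -> bool:
        """G[i][j]: channel gain from link j's sender to link i's receiver."""
        n = len(gamma)
        F = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    F[i, j] = gamma[i] * G[i, j] / G[i, i]
        return max(abs(np.linalg.eigvals(F))) < 1.0

    G = np.array([[1.0, 0.1], [0.2, 1.0]])    # hypothetical two-link gains
    print(feasible(G, np.array([2.0, 2.0])))  # True: radius ~0.28 < 1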