Title: ReiNN: Efficient error resilience in artificial neural networks using encoded consistency checks
In this research, a low-cost error detection and correction approach is developed for multilayer perceptron networks, in which checker neurons trained in independent experiments encode hidden-layer functions. Error detection and correction are predicated on validating consistency properties of the encoded checks; experiments show that high coverage of injected errors can be achieved with extremely low computational overhead.
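A minimal, hypothetical sketch of the encoded-consistency-check idea (not the paper's implementation): a few checker neurons are fit in a separate experiment to predict a low-dimensional encoding of a hidden layer's activations, and disagreement at inference flags a likely error. All shapes, the encoding matrix, and the threshold rule below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of encoded consistency checks (illustrative only).
rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

W_hidden = rng.normal(size=(64, 16))    # "trained" hidden layer (stand-in)
X_train = rng.normal(size=(1000, 64))   # training inputs (stand-in)
H = relu(X_train @ W_hidden)            # hidden-layer activations

C = rng.normal(size=(16, 4))            # fixed encoding into 4 check values
# Checker neurons: fit in an independent experiment to predict the
# encoded activations directly from the input.
W_check, *_ = np.linalg.lstsq(X_train, H @ C, rcond=None)

# Tolerance from the spread of fault-free residuals on training data.
resid = np.linalg.norm(H @ C - X_train @ W_check, axis=1)
tol = resid.mean() + 4.0 * resid.std()

def is_consistent(x, h):
    """True when hidden activations agree with the checker neurons."""
    return np.linalg.norm(h @ C - x @ W_check) < tol

x = rng.normal(size=64)
h = relu(x @ W_hidden)
print(is_consistent(x, h))              # fault-free: expected True
h[3] += 100.0                           # inject a large error in one neuron
print(is_consistent(x, h))              # inconsistent: expected False
```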
Award ID(s): 1723997
NSF-PAR ID: 10098272
Journal Name: European Test Symposium
Page Range / eLocation ID: 1 to 2
Sponsoring Org: National Science Foundation
More Like this
  1. Advances in quantum computing have urged the need for cryptographic algorithms that are low-power, low-energy, and secure against the attacks that quantum computers could enable. Different solutions have been studied for this post-quantum age. Code-based cryptography is one feasible solution, and its hardware architectures have become a focus of research in the NIST standardization process, in which it has advanced to the final round (to be concluded by 2022–2024). Nevertheless, although these constructions, e.g., McEliece and Niederreiter public-key cryptography, have strong error-correction properties, previous studies have demonstrated that their hardware implementations are vulnerable to environmental faults and to intentional faults, i.e., differential fault analysis. It has previously been shown that, depending on the codes used, i.e., classical or reduced (using either quasi-dyadic Goppa codes or quasi-cyclic alternant codes), flaws in error detection can be observed. In this work, efficient fault detection constructions are proposed for the first time to account for such shortcomings. These schemes are based on regular parity, interleaved parity, and two different cyclic redundancy checks (CRC), i.e., CRC-2 and CRC-8. Without loss of generality, we experiment on the McEliece variant, noting that the presented schemes can be used for other code-based cryptosystems. We perform error detection capability assessments and implementations on the field-programmable gate array Kintex-7 device xc7k70tfbv676-1 to verify the practicality of the presented approaches. To demonstrate their appropriateness for constrained embedded systems, the performance degradation and overheads of the presented schemes are assessed.
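    A minimal sketch of the smallest of these checks, assuming the common CRC-8 polynomial 0x07 and a store-recompute-compare usage pattern (both are assumptions for illustration, not the paper's exact construction):

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bitwise CRC-8. The polynomial 0x07 is an assumed choice; the
    paper's CRC-2/CRC-8 parameterizations may differ."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# Fault-detection pattern: tag an operand with its CRC, recompute after
# the guarded operation, and compare; a mismatch signals an injected fault.
block = bytes.fromhex("deadbeefcafef00d")     # hypothetical operand bytes
tag = crc8(block)
assert crc8(block) == tag                     # fault-free: check passes
faulty = bytes([block[0] ^ 0x01]) + block[1:]
assert crc8(faulty) != tag                    # single-bit fault: detected
```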
  2. Energy-efficient Bitcoin mining cores have gained significant attention since the energy cost of computing dominates the mining expenses [1]. Ultra-low-voltage (ULV) digital circuits have emerged as an attractive approach to improving energy efficiency. However, they demand a large timing margin for the worst-case process, voltage, and temperature (PVT) variations, undermining a significant portion of the energy savings. Recent works, including multi-phase latch pipelines [1], tunable replica circuits [2]–[3], in-situ error detection and correction (EDAC) [4]–[6], and dynamic timing enhancement [7], can reduce this pessimistic margin. However, it is not straightforward to adopt those techniques in mining cores due to their deeply pipelined architecture (up to 128 stages [1]). For example, to adopt EDAC, the deep pipeline requires inserting many bulky error detectors because it has many critical paths. Our experiment with a 0.3V 28-nm mining core shows that >18.9% of registers need to be replaced with error detectors, considering 6σ local process variation only. Also, multiple stages can have timing errors simultaneously, making the error correction process (e.g., clock gating [5], VDD boosting [6]) complex and costly.
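    A back-of-the-envelope sketch of why deep pipelines need so many detectors, with all delay numbers invented for illustration (only the 6σ criterion comes from the text above): a register needs a detector whenever its path delay could exceed the clock period under local variation.

```python
import numpy as np

# Illustrative estimate of EDAC detector overhead in a deep pipeline.
# All distribution parameters below are invented, not from the paper.
rng = np.random.default_rng(1)

n_registers = 100_000
t_clk = 10.0                                   # clock period (arbitrary units)
mu = rng.uniform(4.0, 9.0, size=n_registers)   # nominal path delays
sigma = 0.04 * mu                              # assumed local-variation sigma

# A register needs an error detector if its path can miss timing at 6 sigma.
needs_detector = (mu + 6.0 * sigma) > t_clk
print(f"{needs_detector.mean():.1%} of registers need error detectors")
```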
  3. Deep learning techniques have been widely adopted in daily life with applications ranging from face recognition to recommender systems. The substantial overhead of conventional error tolerance techniques precludes their widespread use, while approaches involving median filtering and invariant generation rely on alterations to DNN training that may be difficult to achieve for larger networks on larger datasets. To address this issue, this paper presents a novel approach taking advantage of the statistics of neuron output gradients to identify and suppress erroneous neuron values. By using the statistics of neurons’ gradients with respect to their neighbors, tighter statistical thresholds are obtained compared to the use of neuron output values alone. This approach is modular and is combined with accurate, low-overhead error detection methods to ensure it is used only when needed, further reducing its cost. Deep learning models can be trained using standard methods and our error correction module is fit to a trained DNN, achieving comparable or superior performance compared to baseline error correction methods while incurring comparable hardware overhead without needing to modify DNN training or utilize specialized hardware architectures. 
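    A minimal sketch of the gradient-statistics idea, assuming "gradient" means the difference between neighboring neuron outputs and using an invented 6σ band (the paper's exact thresholding rule and layer shapes are not reproduced here):

```python
import numpy as np

# Illustrative sketch: fit per-position thresholds from the statistics of
# neighbor-to-neighbor differences of neuron outputs on clean data, then
# suppress neurons whose local gradient falls outside the fitted band.
rng = np.random.default_rng(3)

clean = rng.normal(size=(10_000, 128))   # clean activations (stand-in data)
grads = np.diff(clean, axis=1)           # neighbor gradients per sample
mu, sd = grads.mean(axis=0), grads.std(axis=0)
lo, hi = mu - 6 * sd, mu + 6 * sd        # assumed 6-sigma bands

def suppress(acts):
    """Zero out neurons whose neighbor gradient leaves the fitted band.
    (This simple rule may also suppress an erroneous neuron's neighbor.)"""
    g = np.diff(acts)
    bad = (g < lo) | (g > hi)
    out = acts.copy()
    out[1:][bad] = 0.0                   # suppress the right-hand neuron
    return out

x = clean[0].copy()
x[40] = 1e6                              # inject an erroneous value
print(suppress(x)[40])                   # expected: suppressed to 0.0
```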
  4. Accurately accounting for spectral structure in spectrometer data induced by instrumental chromaticity on scales relevant for detection of the 21-cm signal is among the most significant challenges in global 21-cm signal analysis. In the publicly available Experiment to Detect the Global Epoch of Reionization Signature (EDGES) low-band data set, this complicating structure is suppressed using beam-factor-based chromaticity correction (BFCC), which works by dividing the data by a sky-map-weighted model of the spectral structure of the instrument beam. Several analyses of these data have employed models that start from the assumption that this correction is complete. However, while BFCC mitigates the impact of instrumental chromaticity on the data, under realistic assumptions regarding the spectral structure of the foregrounds the correction is only partial. This complicates the interpretation of fits to the data with intrinsic sky models (models that assume no instrumental contribution to the spectral structure of the data). In this paper, we derive a BFCC data model from an analytical treatment of BFCC and demonstrate, using simulated observations, that in contrast to using an intrinsic sky model for the data, the BFCC data model enables unbiased recovery of a simulated global 21-cm signal from beam-factor chromaticity-corrected data in the limit that the data are corrected with an error-free beam-factor model.
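    A schematic sketch of the correction step under stated assumptions (array shapes, the reference-frequency normalization, and all values are illustrative; this is not the EDGES pipeline): the beam factor is the sky-map-weighted beam response per frequency, normalized at a reference frequency, and the data are divided by it.

```python
import numpy as np

# Schematic beam-factor-based chromaticity correction (BFCC) sketch.
# Shapes, normalization convention, and values are assumptions.
rng = np.random.default_rng(2)

n_freq, n_pix = 256, 3072
freqs = np.linspace(50e6, 100e6, n_freq)         # frequency channels (Hz)
beam = 1.0 + 0.1 * rng.random((n_freq, n_pix))   # chromatic beam model
sky = rng.uniform(1e3, 5e3, size=n_pix)          # sky-map temperatures (K)

# Sky-map-weighted beam response per frequency, normalized at f_ref.
weighted = (beam * sky).sum(axis=1) / sky.sum()
i_ref = np.argmin(np.abs(freqs - 75e6))
beam_factor = weighted / weighted[i_ref]

T_measured = rng.uniform(1e3, 5e3, size=n_freq)  # measured spectrum (K)
T_corrected = T_measured / beam_factor           # BFCC-corrected data
```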