Title: Fault Tolerant Triplet Networks for Training and Inference
This paper deals with the fault tolerance of Triplet Networks (TNs). Results based on extensive analysis and simulation by fault injection are presented for new schemes. In accordance with the technical literature, stuck-at faults are considered in the fault model for the training process. Simulation by fault injection shows that TNs are generally not sensitive to this type of fault; however, an unexpected failure (convergence of the network to a false solution) can occur when the faults are located in the negative subnetwork. An analysis of this specific case is provided and remedial solutions are proposed, namely a loss function with regularized anchor outputs for stuck-at 0 faults and a modified margin for stuck-at 1/-1 faults. Simulation shows that false solutions can be avoided very efficiently by utilizing the proposed techniques. Random bit-flip faults are then considered in the fault model for the inference process. This paper analyzes the error caused by bit-flips at different bit positions in a TN using the Floating-Point (FP) format and compares it with a fault-tolerant Stochastic Computing (SC) implementation. Analysis and simulation of the TNs confirm that the main degradation is caused by bit-flips in the exponent bits. Protection schemes are therefore proposed to handle these errors; they replace the least significant bits of the FP numbers with parity bits, for both single- and multi-bit errors. The proposed methods achieve superior performance compared to other low-cost fault-tolerant schemes in the technical literature, reducing the classification accuracy loss of TNs by 96.76% (97.74%) for single-bit (multi-bit) errors.
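As a rough illustration of the inference-time protection idea, the sketch below trades the mantissa LSB of an IEEE-754 single-precision weight for a parity bit covering the eight exponent bits, so that a single exponent bit-flip becomes detectable. The one-parity-bit layout, the function names, and the detection-only behavior (single-bit case, no correction) are assumptions for illustration, not the paper's exact encoding.

```python
import struct

def float_to_bits(x: float) -> int:
    return struct.unpack(">I", struct.pack(">f", x))[0]

def bits_to_float(b: int) -> float:
    return struct.unpack(">f", struct.pack(">I", b))[0]

def protect(x: float) -> float:
    # Overwrite the mantissa LSB with the parity of the 8 exponent bits.
    b = float_to_bits(x)
    exp = (b >> 23) & 0xFF
    parity = bin(exp).count("1") & 1
    return bits_to_float((b & ~1) | parity)

def check(x: float) -> bool:
    # Recompute the exponent parity and compare it with the stored LSB.
    b = float_to_bits(x)
    return (bin((b >> 23) & 0xFF).count("1") & 1) == (b & 1)

w = protect(0.731)
assert check(w)                                        # clean weight passes
flipped = bits_to_float(float_to_bits(w) ^ (1 << 30))  # flip an exponent bit
assert not check(flipped)                              # bit-flip is detected
```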
Award ID(s):
1953980
PAR ID:
10588097
Author(s) / Creator(s):
Publisher / Repository:
TechRxiv
Date Published:
Format(s):
Medium: X
Institution:
TechRxiv
Sponsoring Org:
National Science Foundation
More Like this
  1. We propose AccHashtag, the first framework for high-accuracy detection of fault-injection attacks on Deep Neural Networks (DNNs) with provable bounds on detection performance. Recent literature in fault-injection attacks shows the severe DNN accuracy degradation caused by bit flips. In this scenario, the attacker changes a few DNN weight bits during execution by injecting faults to the dynamic random-access memory (DRAM). To detect bit flips, AccHashtag extracts a unique signature from the benign DNN prior to deployment. The signature is used to validate the model’s integrity and verify the inference output on the fly. We propose a novel sensitivity analysis that identifies the most vulnerable DNN layers to the fault-injection attack. The DNN signature is constructed by encoding the weights in vulnerable layers using a low-collision hash function. During DNN inference, new hashes are extracted from the target layers and compared against the ground-truth signatures. AccHashtag incorporates a lightweight methodology that allows for real-time fault detection on embedded platforms. We devise a specialized compute core for AccHashtag on field-programmable gate arrays (FPGAs) to facilitate online hash generation in parallel to DNN execution. Extensive evaluations with the state-of-the-art bit-flip attack on various DNNs demonstrate the competitive advantage of AccHashtag in terms of both attack detection and execution overhead. 
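A minimal sketch of the signature idea described above, with SHA-256 standing in for the low-collision hash; the FPGA core and the sensitivity analysis that selects vulnerable layers are not modeled, and the layer shape is illustrative.

```python
import hashlib
import numpy as np

def layer_signature(weights: np.ndarray) -> str:
    # Hash the raw bytes of a vulnerable layer's weights before deployment.
    return hashlib.sha256(weights.tobytes()).hexdigest()

rng = np.random.default_rng(0)
layer = rng.standard_normal((64, 64)).astype(np.float32)
golden = layer_signature(layer)              # stored ground-truth signature

# An attacker flips one exponent bit of one weight in DRAM.
raw = layer.view(np.uint32).copy()
raw[10, 3] ^= 1 << 30
tampered = raw.view(np.float32)

assert layer_signature(layer) == golden      # benign model validates
assert layer_signature(tampered) != golden   # the bit-flip is detected
```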
  2. Motivated by the rise of quantum computers, existing public-key cryptosystems are expected to be replaced by post-quantum schemes in the next decade in billions of devices. To facilitate the transition, NIST is running a standardization process which is currently in its final round. Only three digital signature schemes are left in the competition, among which Dilithium and Falcon are the ones based on lattices. Besides security and performance, significant attention has been given to resistance against implementation attacks that target side-channel leakage or fault injection response. Classical fault attacks on signature schemes use pairs of faulty and correct signatures to recover the secret key, which only works on deterministic schemes. To counter such attacks, Dilithium offers a randomized version which makes each signature unique, even when signing identical messages. In this work, we introduce a novel Signature Correction Attack which applies not only to the deterministic version but also to the randomized version of Dilithium, and is effective even on constant-time implementations using AVX2 instructions. The Signature Correction Attack exploits the mathematical structure of Dilithium to recover secret key bits from faulty signatures and the public key. It works for any fault mechanism that can induce single bit-flips; for demonstration, we use Rowhammer-induced faults. Thus, our attack does not require physical access or special privileges, and hence could also be mounted on shared cloud servers. Using the Rowhammer attack, we inject bit flips into the secret key s1 of Dilithium, which results in incorrect signatures being generated by the signing algorithm. Since we can find the correct signature using our Signature Correction algorithm, we can use the difference between the correct and incorrect signatures to infer the location and value of the flipped bit without needing a correct and faulty pair. To quantify the reduction in the security level, we perform a thorough classical and quantum security analysis of Dilithium and successfully recover 1,851 of the 3,072 bits of the secret key s1 for security level 2. Fully recovered bits are used to reduce the dimension of the lattice, whereas partially recovered coefficients are used to reduce the norm of the secret key coefficients. Further analysis of both primal and dual attacks shows that the lattice strength against quantum attackers is reduced from 2^128 to 2^81, while the strength against classical attackers is reduced from 2^141 to 2^89. Hence, the Signature Correction Attack may be employed to achieve a practical attack on Dilithium (security level 2) as proposed in Round 3 of the NIST post-quantum standardization process.
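The difference-based localization step can be illustrated with a toy signer that is linear in the secret-key bits. This is not Dilithium's lattice math; the modulus, the hash, and the use of a correct/faulty pair are assumptions for illustration (the paper's Signature Correction algorithm specifically avoids needing the correct signature). Because the toy signature is linear in the key bits, a single flipped bit shifts it by exactly ±h(msg, j), revealing both the location and the value of the flip.

```python
import hashlib

Q = 2**61 - 1  # illustrative prime modulus

def h(msg: bytes, i: int) -> int:
    return int.from_bytes(hashlib.sha256(msg + i.to_bytes(2, "big")).digest(), "big") % Q

def toy_sign(msg: bytes, key_bits: list) -> int:
    # Toy signer, linear in the key bits: sig = sum(bit_i * h_i) mod Q.
    return sum(b * h(msg, i) for i, b in enumerate(key_bits)) % Q

key = [1, 0, 1, 1, 0, 0, 1, 0] * 8            # 64 toy secret-key bits
msg = b"round 3 candidate"
good = toy_sign(msg, key)

faulted = key.copy()
faulted[37] ^= 1                              # injected single bit-flip
bad = toy_sign(msg, faulted)

delta = (bad - good) % Q
for j in range(len(key)):
    if delta == h(msg, j):                    # bit j flipped 0 -> 1
        print(f"bit {j} flipped to 1")
    elif delta == (-h(msg, j)) % Q:           # bit j flipped 1 -> 0
        print(f"bit {j} flipped to 0")
```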
  3. Information is an integral part of the correct and reliable operation of today's computing systems. Data, whether stored or provided as input to computation modules, must tolerate many externally and internally induced destructive phenomena, such as soft errors and faults; these are often transient in nature but can occur in large numbers, causing catastrophic system failures. Together with error tolerance, reliable operation must be provided while reducing the large overheads often encountered at system level when employing redundancy. While information-based techniques can also be used in some of these schemes, the complexity and limited capability of implementing high-order correction functions for decoding limit their application due to poor performance; therefore, N Modular Redundancy (NMR) is often employed. In NMR, the correct output is given by majority voting among the N input copies of the data. Reduced Precision Redundancy (RPR) has been advocated to reduce this redundancy, mostly for the case of N = 3; in a 3RPR scheme, one full-precision (FP) input is needed, while two inputs require only reduced precision (RP), usually obtained by truncating some of the least significant bits (LSBs) of the input data. However, its decision logic is more complex than that of a 3MR scheme. This paper proposes a novel NRPR scheme with a simple comparison-based approach; the realistic case of N = 5 is considered as an example to explain the proposed scheme in detail, and different arrangements of the redundancy (with three or four FP data copies) are considered. In addition to the design of the decision circuit, a probabilistic analysis is pursued to determine the conditions under which RPR data is provided as output; it is shown that this probability is very small. Different applications of the proposed NRPR system are presented, in which data is used either as memory output and/or for computing the discrete cosine transform. In both cases, the proposed 5RPR scheme shows considerable advantages in terms of redundancy management and reliable image processing.
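A hedged sketch of a comparison-based 5RPR decision, assuming three full-precision copies, two reduced-precision copies obtained by truncating k LSBs, and a simple agree-then-fallback policy; the abstract does not specify the exact decision circuit, so this policy is an assumption.

```python
K = 4                                      # LSBs truncated in the RP copies

def truncate(x: int) -> int:
    return x & ~((1 << K) - 1)

def rpr5_vote(fp: list, rp: list) -> int:
    # Step 1: if any two full-precision copies agree exactly, output that value.
    for i in range(len(fp)):
        for j in range(i + 1, len(fp)):
            if fp[i] == fp[j]:
                return fp[i]
    # Step 2: otherwise trust an FP copy whose truncation matches an RP copy
    # (the rarely-taken path where only reduced-precision data is output).
    for x in fp:
        if truncate(x) in rp:
            return truncate(x)
    return rp[0]                           # last resort: a reduced-precision copy

data = 0b1011_0110
fp_copies = [data, data ^ (1 << 7), data]  # one faulty full-precision copy
rp_copies = [truncate(data), truncate(data)]
assert rpr5_vote(fp_copies, rp_copies) == data
```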
  4. In this paper, a probabilistic interpolation recoder (PIR) circuit is developed for deep belief networks (DBNs) with probabilistic spin logic (p-bit)-based neurons. To verify the functionality and evaluate the performance of the PIRs, we have implemented a 784 × 200 × 10 DBN circuit in SPICE for a pattern recognition application using the MNIST dataset. The PIR circuits are leveraged in the last hidden layer to interpolate the probabilistic outputs of the neurons representing the different output classes, by sampling each p-bit's output values and counting them within a defined sampling time window. The PIR circuit is proposed as an alternative to conventional interpolation methods, which are based on a resistor-capacitor tank that integrates each neuron's output, followed by an analog-to-digital converter that generates the digital output. Circuit simulation results for the PIR exhibit at least 54%, 81%, and 78% reductions in power, energy, and energy-error product, respectively, compared to previous techniques, without using any area-consuming analog components in the interpolation circuit. In addition, PIR circuits provide inherent tolerance to single stuck-at faults, mitigating both transient and permanent faults at the circuit's output. The reliability of the PIR circuits under single stuck-at faults is shown to be enhanced relative to conventional interpolation, without requiring hardware redundancy.
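A minimal behavioral model of the sampling idea (pure software, with no SPICE or p-bit device physics; the window length T is illustrative): each output-class p-bit is sampled over T cycles and the 1s are counted, so the count/T ratio interpolates the class probability without an RC tank or ADC.

```python
import random

def pir_interpolate(pbit_prob: list, T: int = 1024) -> list:
    # Count how often each p-bit outputs 1 inside the sampling window.
    counts = [0] * len(pbit_prob)
    for _ in range(T):
        for k, p in enumerate(pbit_prob):
            counts[k] += random.random() < p   # one binary p-bit sample
    return [c / T for c in counts]

# Ten output-class p-bits; class 3 fires most often and wins.
probs = [0.05] * 10
probs[3] = 0.9
estimates = pir_interpolate(probs)
assert max(range(10), key=lambda k: estimates[k]) == 3
```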
  5. Hardware faults are a known source of security vulnerabilities. Fault injection in secure embedded systems leads to information leakage and privilege escalation, and countless fault attacks have been demonstrated both in simulation and in practice. However, there is a significant gap between simulated fault attacks and physical fault attacks. Simulations use idealized fault models such as single-bit flips with uniform distribution, and these ideal fault models may not hold in practice. On the other hand, practical experiments lack the white-box visibility necessary to determine the true nature of the fault, leading to probabilistic vulnerability assessments and unexplained results. In embedded software, this problem is further exacerbated by the layered abstractions between the hardware (where the fault originates) and the application software (where the fault effect is observed). We present FaultDetective, a method to investigate the root cause of fault injection starting from fault detection in software. Our main insight is that fault detection in software is only the end point of a chain of events that starts with a fault manifestation in hardware and propagates through the micro-architecture and architecture before reaching the software level. To understand the fault effects at the hardware level, we use a scan chain, a low-level hardware test structure. We then use white-box simulation to propagate and observe hardware faults in the embedded software. We efficiently visualize the fault propagation across abstraction levels using a hash-tree representation of the scan chain. We implement this concept on a multi-core MSP430 micro-controller that redundantly executes an application in lock-step. With this setup, we observe the fault effects for several different stressors, including clock glitching and thermal laser stimulation, and explain the root cause in each case.
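A sketch of the hash-tree idea over a scan-chain snapshot (SHA-256, an 8-byte segment size, and a power-of-two segment count are assumptions; the real setup hashes scan-out data from the MSP430 simulation): hashing fixed-size segments and comparing the golden and faulty trees localizes where a fault manifested.

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def hash_tree(chain: bytes, seg: int = 8) -> list:
    # Leaf level: one hash per scan-chain segment (power-of-two segment count assumed).
    level = [sha(chain[i:i + seg]) for i in range(0, len(chain), seg)]
    tree = [level]
    while len(level) > 1:
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

golden = bytes(64)                         # scan-out of the fault-free run
faulty = bytearray(golden)
faulty[21] ^= 0x04                         # fault flips a bit in byte 21 (segment 2)

t_good, t_bad = hash_tree(golden), hash_tree(bytes(faulty))
diverged = [i for i, (a, b) in enumerate(zip(t_good[0], t_bad[0])) if a != b]
print("fault localized to scan segment(s):", diverged)   # -> [2]
```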