In this research, a low-cost error detection and correction approach is developed for multilayer perceptron networks, where checker neurons encode hidden-layer functions using independent training experiments. Error detection and correction are predicated on validating consistency properties of the encoded checks, and experiments show that high coverage of injected errors can be achieved with extremely low computational overhead.
Error Resilient Neuromorphic Networks Using Checker Neurons
The last decade has seen tremendous advances in the application of artificial neural networks to solving problems that mimic human intelligence. Many of these systems are implemented using traditional digital compute engines, where errors can occur during memory accesses or during numerical computation. While such networks are inherently error resilient, specific errors can result in incorrect decisions. This work develops a low-overhead error detection and correction approach for multilayer artificial neural networks, where the hidden-layer functions are approximated using checker neurons. Experimental results show that high coverage of injected errors can be achieved with extremely low computational overhead using consistency properties of the encoded checks. A key side benefit is that the checks can flag errors when the network is presented with outlier data that do not correspond to the data on which the network was trained to operate.
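A minimal sketch of the checker-neuron idea described in the abstract is given below. It is an illustration only, not the paper's actual construction: the toy hidden layer, the scalar encoding of its activations, the independently fitted linear checker, and the threshold `tau` are all invented for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's exact construction): a "checker" is
# trained independently to predict a scalar encoding of a hidden layer's
# response to the input. At inference time, a large mismatch between the
# checker's prediction and the observed encoding flags a likely error in
# the hidden-layer computation. All sizes and thresholds are invented.

rng = np.random.default_rng(0)

# Toy hidden layer h(x) = relu(W x + b); its check value is the sum of activations.
W, b = rng.normal(size=(64, 16)), rng.normal(size=64)
hidden = lambda x: np.maximum(W @ x + b, 0.0)
encode = lambda h: float(h.sum())

# "Checker neuron": a small model fit offline to predict encode(hidden(x)) from x.
# A least-squares linear fit stands in for the independently trained checker here.
X = rng.normal(size=(2000, 16))
targets = np.array([encode(hidden(x)) for x in X])
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
checker = lambda x: float(np.append(x, 1.0) @ coef)

# Consistency threshold taken from the training residuals (an arbitrary choice).
tau = 5.0 * np.std(targets - A @ coef)

def consistent(x, h_observed):
    """True if the observed hidden activations agree with the checker's prediction."""
    return abs(encode(h_observed) - checker(x)) <= tau

x = rng.normal(size=16)
h = hidden(x)
print(consistent(x, h))        # error-free pass: typically True
h[3] += 1e4                    # injected soft error (e.g. an exponent bit flip)
print(consistent(x, h))        # corrupted activation: False
```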
- Award ID(s):
- 1723997
- PAR ID:
- 10098271
- Date Published:
- Journal Name:
- International On-Line Testing Symposium
- Page Range / eLocation ID:
- 135 to 138
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- The reliability of emerging neuromorphic compute fabrics is of great concern due to their widespread use in critical data-intensive applications. Ensuring such reliability is difficult due to the intensity of the underlying computations (billions of parameters), errors induced by low-power operation, and the complex relationship between errors in computations and their effect on network accuracy. We study the problem of designing error-resilient neuromorphic systems where errors can stem from: (a) soft errors in the computation of matrix-vector multiplications and neuron activations, (b) malicious trojan and adversarial security attacks, and (c) the effects of manufacturing process variations on analog crossbar arrays, all of which can affect DNN accuracy. The core principle of error detection relies on embedded predictive neuron checks using invariants derived from the statistics of nominal neuron activation patterns of the hidden layers of a neural network. Algorithmic encodings of hidden neuron function are also used to derive invariants for checking. A key contribution is designing checks that are robust to the inherent nonlinearity of neuron computations with minimal impact on error detection coverage. Once errors are detected, they are corrected using probabilistic methods, owing to the difficulty of exact error diagnosis in such complex systems. The technique is scalable across soft errors as well as a range of security attacks. The effects of manufacturing process variations are handled through the use of compact tests from which DNN performance can be assessed using learning techniques. Experimental results are presented on a variety of neuromorphic test systems: DNNs, spiking networks, and hyperdimensional computing.
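A minimal sketch of the statistics-based invariant checking described above, under invented assumptions: a random toy layer, the total activation mass as the profiled summary, and a six-sigma bound. The paper's actual invariants and layers are not reproduced here.

```python
import numpy as np

# Sketch of a statistics-based invariant check: bounds on a hidden-layer
# activation summary are profiled on nominal data, and a violated bound at
# run time flags a suspected error. Layer, statistic and bound are illustrative.

rng = np.random.default_rng(1)
W, b = rng.normal(size=(128, 32)), rng.normal(size=128)
layer = lambda x: np.maximum(W @ x + b, 0.0)

# Profile nominal behaviour: mean and spread of the layer's total activation mass.
nominal = np.array([layer(x).sum() for x in rng.normal(size=(5000, 32))])
mu, sigma = nominal.mean(), nominal.std()

def activations_ok(h, k=6.0):
    """Invariant: total activation mass stays within k standard deviations."""
    return abs(h.sum() - mu) <= k * sigma

x = rng.normal(size=32)
h = layer(x)
print(activations_ok(h))            # nominal input: typically True

h[7] = 1e6                          # simulated soft error (e.g. exponent bit flip)
print(activations_ok(h))            # corrupted activation: False
```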
- Artificial Intelligence (AI) has permeated various domains but is limited by the bottlenecks imposed by data-transfer latency inherent in contemporary memory technologies. Matrix multiplication, crucial for neural network training and inference, can be significantly expedited with a complexity of O(1) using Resistive RAM (RRAM) technology, instead of the conventional complexity of O(n²). This positions RRAM as a promising candidate for efficient hardware implementation of machine learning and neural networks through in-memory computation. However, RRAM manufacturing technology remains in its infancy, rendering it susceptible to soft errors and potentially compromising neural network accuracy and reliability. In this paper, we propose a syndrome-based error correction scheme that employs selective weighted checksums to correct double adjacent column errors in RRAM. The error correction is done on the output of the matrix multiplication, thus ensuring correct operation for any number of errors in two adjacent columns. The proposed codes have low redundancy and low decoding latency, making them suitable for high-throughput applications. The scheme uses a repeating weight-based structure that makes it scalable to large RRAM matrix sizes.
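A highly simplified checksum check in the spirit of the scheme above, written against an ordinary NumPy matrix rather than an RRAM crossbar. It appends a single unweighted column-sum row and only detects a corrupted matrix-vector product; the paper's selective weighted checksums, which additionally correct double adjacent column errors, are not reproduced here.

```python
import numpy as np

# Append a checksum row (column sums) to the weight matrix, so the extra
# output element should equal the sum of the regular outputs. A nonzero
# syndrome after the matrix-vector product signals an error.

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 8))
W_ext = np.vstack([W, W.sum(axis=0)])      # weight matrix with checksum row

def mvm_with_check(W_ext, x, tol=1e-6):
    y_ext = W_ext @ x                      # would run on the crossbar
    y, chk = y_ext[:-1], y_ext[-1]
    syndrome = chk - y.sum()               # ~0 if the product is error-free
    return y, abs(syndrome) <= tol * max(1.0, abs(chk))

x = rng.normal(size=8)
y, ok = mvm_with_check(W_ext, x)
print(ok)                                  # error-free product: True

W_bad = W_ext.copy()
W_bad[3, 5] += 0.5                         # stuck/soft error in one cell
_, ok = mvm_with_check(W_bad, x)
print(ok)                                  # corrupted column: False (unless x[5] ~ 0)
```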
- The consolidation of materials, especially ceramics, is very important in advanced research and industrial technologies, and the science of sintering, with all its incoming novelties, is the basis of these processes. A key question is how to obtain more precise structure parameters within the morphology of different ceramic materials; in that sense, the advanced procedure of collecting precise data on submicro-processes also points toward advanced miniaturization. Our research, based on different electrophysical parameters such as relative capacitance, breakdown voltage, and [Formula: see text], has been successfully applied in neural networks and graph theory. We further extended our back-propagation (BP) neural network to sintering parameter data; a prognosed mapping can be achieved using the coefficients obtained through the training procedure. In this paper, we continue to apply the novelty from previous research, where the error is calculated as the difference between the designed and actual network output, so the weight coefficients contribute to error generation. We used experimental data on the density of sintered materials, measured and calculated in the bulk, and developed the possibility of calculating the material density inside consolidated structures. The BP procedure here acts as a tool to descend between the layers and obtain much more precise material densities at points of the morphology that are of interest for different microstructure developments and applications. We practically replaced the network errors with density values from ceramic consolidation. This neural network application was successfully verified on the experimental ceramic material density [Formula: see text] [kg/m³], confirming the way forward for implementing this procedure in other density cases. There are many different mathematical tools, including tools from the field of artificial intelligence, that can be used in such applications; we chose artificial neural networks because of their simplicity and their self-improvement through BP error control. All of this contributes to improvement in the science and technology of sintering, which is important for collecting results more efficiently and faster.
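A minimal back-propagation regression sketch along the lines of the BP network described above. Everything here is synthetic and assumed for illustration: the three normalized input parameters stand in for electrophysical measurements, the density target is an invented linear-plus-noise relation, and the one-hidden-layer network, learning rate, and epoch count are arbitrary.

```python
import numpy as np

# One-hidden-layer network trained with plain back-propagation to map a few
# (synthetic) process parameters to a (synthetic) density value in kg/m^3.

rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 3))                       # 3 normalized input parameters
y = 5000 + 1500 * X @ np.array([0.5, 0.3, 0.2])      # invented density relation
y = (y + rng.normal(scale=20, size=200))[:, None]    # add measurement noise

W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.05
y_mu, y_sd = y.mean(), y.std()
t = (y - y_mu) / y_sd                                # normalized target

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                         # forward pass
    out = h @ W2 + b2
    err = out - t                                    # BP error term
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)                 # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2                   # gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1

pred = (np.tanh(X @ W1 + b1) @ W2 + b2) * y_sd + y_mu
print(float(np.sqrt(((pred - y) ** 2).mean())))      # training RMSE in kg/m^3
```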
- In this paper, we analyze the applicability of single- and two-hidden-layer feed-forward artificial neural networks (SLFNs and TLFNs, respectively) to decoding linear block codes. Based on the provable capability of SLFNs and TLFNs to approximate discrete functions, we discuss the network sizes required to perform maximum-likelihood decoding. Furthermore, we propose a decoding scheme that uses artificial neural networks (ANNs) to lower the error floors of low-density parity-check (LDPC) codes. By learning a small number of error patterns that are uncorrectable with typical LDPC decoders, an ANN can lower the error floor by an order of magnitude, with only a marginal increase in average complexity.
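A toy illustration of the SLFN idea from the last work above: a single-hidden-layer network memorizes the syndrome-to-error-position map of the Hamming(7,4) code and is then used to correct a single flipped bit. The network size, learning rate, and epoch count are arbitrary, and neither the paper's actual architectures nor its LDPC error-floor experiments are reproduced.

```python
import itertools
import numpy as np

# SLFN that learns the discrete syndrome-to-error-position map of Hamming(7,4).

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])               # parity-check matrix

# Training set: all 8 syndromes -> class 0 (no error) or 1..7 (erroneous bit + 1).
S = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
labels = np.array([s[0] * 4 + s[1] * 2 + s[2] for s in S.astype(int)])
T = np.eye(8)[labels]                                # one-hot targets

rng = np.random.default_rng(4)
W1, b1 = rng.normal(scale=0.5, size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 8)), np.zeros(8)

for _ in range(5000):                                # plain batch gradient descent
    h = np.tanh(S @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)                     # softmax
    g = (p - T) / len(S)                             # cross-entropy gradient
    dh = (g @ W2.T) * (1 - h ** 2)
    W2 -= 1.0 * (h.T @ g); b2 -= 1.0 * g.sum(0)
    W1 -= 1.0 * (S.T @ dh); b1 -= 1.0 * dh.sum(0)

def decode(r):
    """Correct at most one bit error in the received word r using the trained SLFN."""
    s = (H @ r) % 2
    cls = int(np.argmax(np.tanh(s @ W1 + b1) @ W2 + b2))
    r = r.copy()
    if cls > 0:
        r[cls - 1] ^= 1                              # class k means bit k-1 is in error
    return r

c = np.array([1, 0, 1, 0, 1, 0, 1])                  # a valid Hamming(7,4) codeword
r = c.copy(); r[4] ^= 1                              # channel flips one bit
print(np.array_equal(decode(r), c))                  # typically True once trained
```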