In this paper, we analyze the applicability of single- and two-hidden-layer feed-forward artificial neural networks, SLFNs and TLFNs, respectively, to decoding linear block codes. Based on the provable capability of SLFNs and TLFNs to approximate discrete functions, we discuss the network sizes required to perform maximum likelihood decoding. Furthermore, we propose a decoding scheme that uses artificial neural networks (ANNs) to lower the error floors of low-density parity-check (LDPC) codes. By learning a small number of error patterns that are uncorrectable with typical LDPC decoders, the ANN can lower the error floor by an order of magnitude, with only a marginal increase in average complexity.
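To make the network-size argument concrete, here is a minimal sketch (my illustration, not the paper's construction): a single-hidden-layer network with one hidden neuron per codeword performs maximum likelihood (nearest-codeword) decoding over a binary symmetric channel, which shows why exact ML decoding of a code with 2^k codewords suggests a hidden layer of that size. The (7,4) Hamming code and winner-take-all readout are assumptions chosen for illustration.

```python
# Minimal sketch: an SLFN whose hidden neurons correlate the received word
# with every codeword of a toy (7,4) Hamming code. On the BSC, the codeword
# with maximum bipolar correlation is the nearest (maximum likelihood) one.
import itertools
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # (7,4) Hamming generator

msgs = np.array(list(itertools.product([0, 1], repeat=4)))
codebook = msgs @ G % 2                        # all 16 codewords
W = 2.0 * codebook - 1.0                       # bipolar hidden-layer weights

def slfn_ml_decode(r):
    """r: hard-decision received word in {0,1}^7."""
    x = 2.0 * r - 1.0                          # bipolar input
    h = W @ x                                  # hidden layer: correlations
    return codebook[np.argmax(h)]              # winner-take-all output layer

r = np.array([1, 0, 0, 0, 1, 1, 1])            # codeword 1000110 with one flip
print(slfn_ml_decode(r))                       # -> [1 0 0 0 1 1 0]
```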
Efficient Computation of Viterbi Decoder Reliability with an Application to Variable-Length Coding
This paper compares the accuracy and complexity of Raghavan and Baum’s Reliability Output Viterbi Algorithm (ROVA), Polyanskiy’s accumulated information density (AID), and Fricke and Hoeher’s lower complexity approximation of ROVA. It turns out that AID is far less accurate than ROVA in practice. This paper proposes codeword information density (CID), which modifies AID to improve its accuracy and leads to a lower-complexity implementation of ROVA. The paper includes an analytical expression for the random variable describing the correct decoding probability computed by ROVA and uses this expression to characterize how the probabilities of correct decoding, undetected error, and negative acknowledgement behave as a function of the selected threshold for reliable decoding. This paper examines both the complexity and the simulation time of ROVA, CID, AID, and the Fricke and Hoeher approximation to ROVA. This paper also derives an expression for the union bound on the frame error rate for zero-terminated trellis codes with punctured symbols and uses it to optimize the order of symbol transmission in an incremental retransmission scheme. This paper concludes by comparing the performance of an incremental retransmission scheme using ROVA as a stopping condition to one that uses a CRC as a stopping condition.
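As a point of reference for the quantities being compared, the following is a minimal sketch (my illustration, not the paper's implementation) of accumulated information density for a binary symmetric channel with equiprobable inputs; the channel model and parameter values are assumptions.

```python
# Minimal sketch: accumulated information density (AID) on a BSC(p) with
# equiprobable inputs, i(x^n; y^n) = sum_k log2(P(y_k|x_k) / P(y_k)),
# where P(y_k) = 1/2 by symmetry.
import numpy as np

def accumulated_information_density(x, y, p):
    """x: transmitted bits, y: received bits, p: crossover probability."""
    match = (x == y)
    per_symbol = np.where(match, np.log2(2 * (1 - p)), np.log2(2 * p))
    return per_symbol.sum()

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100)
y = np.where(rng.random(100) < 0.05, 1 - x, x)   # BSC with p = 0.05
print(accumulated_information_density(x, y, 0.05))
```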
- PAR ID: 10345043
- Date Published:
- Journal Name: IEEE Transactions on Communications
- ISSN: 0090-6778
- Page Range / eLocation ID: 1 to 1
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In this paper, we are interested in the performance of a variable-length stop-feedback (VLSF) code with m optimal decoding times for the binary-input additive white Gaussian noise channel. We first develop tight approximations to the tail probability of the length-n cumulative information density. Building on the work of Yavas et al., for a given information density threshold, we formulate the integer program of minimizing the upper bound on average blocklength over all decoding times, subject to the average error probability, minimum-gap, and integer constraints. Minimizing the locally optimal upper bounds over all thresholds then yields the globally minimum upper bound; we call this method two-step minimization. Relaxing the decoding times to positive real values activates the gap constraint. We develop a gap-constrained sequential differential optimization (SDO) procedure to find the optimal, gap-constrained, real-valued decoding times. In the error regime of practical interest, Polyanskiy's scheme of stopping at zero does not help. In this region, the achievability bounds estimated by the two-step minimization and gap-constrained SDO show that Polyanskiy's achievability bound for VLSF codes can be approached with a small number of decoding times.
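  A toy sketch of how the average-blocklength objective is assembled (my illustration, not the paper's bound): the average blocklength of a VLSF code with decoding times n_1 < ... < n_m is n_1 plus each gap weighted by the probability that the accumulated information density has not yet crossed the threshold gamma. The Gaussian tail model and all numeric values below are assumed stand-ins for the paper's tight approximations.

  ```python
  # Toy sketch: E[tau] = n_1 + sum_{i>=2} (n_i - n_{i-1}) * P(stop not yet at n_{i-1}),
  # with the stopping probability modeled by a Gaussian approximation to the
  # information-density tail (illustrative only).
  import math

  def avg_blocklength(times, capacity, dispersion, gamma):
      def p_not_stopped(n):
          z = (gamma - n * capacity) / math.sqrt(n * dispersion)
          return 0.5 * (1 + math.erf(z / math.sqrt(2)))
      avg = times[0]
      for prev, cur in zip(times, times[1:]):
          avg += (cur - prev) * p_not_stopped(prev)
      return avg

  times = [20, 40, 60, 80, 120]      # m = 5 decoding times (illustrative)
  print(avg_blocklength(times, capacity=0.5, dispersion=0.8, gamma=32.0))
  ```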
- Artificial Intelligence (AI) has permeated various domains but is limited by the bottlenecks imposed by the data transfer latency inherent in contemporary memory technologies. Matrix multiplication, crucial for neural network training and inference, can be significantly expedited with a complexity of O(1) using resistive RAM (RRAM) technology, instead of the conventional O(n²) complexity. This positions RRAM as a promising candidate for the efficient hardware implementation of machine learning and neural networks through in-memory computation. However, RRAM manufacturing technology remains in its infancy, rendering it susceptible to soft errors that can compromise neural network accuracy and reliability. In this paper, we propose a syndrome-based error correction scheme that employs selective weighted checksums to correct double adjacent column errors in RRAM. The error correction is performed on the output of the matrix multiplication, ensuring correct operation for any number of errors in two adjacent columns. The proposed codes have low redundancy and low decoding latency, making them suitable for high-throughput applications. The scheme uses a repeating weight-based structure that makes it scalable to large RRAM matrix sizes.
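  To illustrate the underlying idea, here is a minimal sketch of classic ABFT-style weighted checksums on the output of a matrix multiply, which the paper's selective weighted checksums build on to handle double adjacent column errors; this simpler single-column-error version and its weight choice w_j = j are my assumptions, not the paper's code construction.

  ```python
  # Minimal sketch: append an unweighted and a weighted checksum column to B;
  # by linearity, A @ encode(B) carries the same checksums on C = A @ B, so
  # syndromes on the output locate and correct one erroneous column.
  import numpy as np

  def encode(B):
      w = np.arange(1, B.shape[1] + 1)           # column weights 1..n
      return np.hstack([B, B.sum(1, keepdims=True), (B * w).sum(1, keepdims=True)])

  def correct(C_ext):
      C, c1, c2 = C_ext[:, :-2], C_ext[:, -2], C_ext[:, -1]
      w = np.arange(1, C.shape[1] + 1)
      s1 = c1 - C.sum(1)                         # unweighted syndrome per row
      s2 = c2 - C @ w                            # weighted syndrome per row
      rows = np.flatnonzero(s1)                  # rows touched by the error
      if rows.size == 0:
          return C                               # no error detected
      j = int(round(s2[rows[0]] / s1[rows[0]])) - 1   # erroneous column index
      C[:, j] += s1                              # cancel the per-row error
      return C

  A = np.random.default_rng(1).integers(0, 5, (4, 3)).astype(float)
  B = np.random.default_rng(2).integers(0, 5, (3, 4)).astype(float)
  C_ext = A @ encode(B)                          # multiply with encoded matrix
  C_ext[:, 2] += 7.0                             # inject a column error
  print(np.allclose(correct(C_ext), A @ B))      # -> True
  ```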
- The new 5G communications standard increases data rates and supports low-latency communication that places constraints on the computational complexity of channel decoders. 5G low-density parity-check (LDPC) codes have the so-called protograph-based raptor-like (PBRL) structure which offers inherent rate-compatibility and excellent performance. Practical LDPC decoder implementations use message-passing decoding with finite precision, which becomes coarse as complexity is more severely constrained. Performance degrades as the precision becomes more coarse. Recently, the information bottleneck (IB) method was used to design mutual-information-maximizing lookup tables that replace conventional finite-precision node computations. The IB approach exchanges messages represented by integers with very small bit width. This paper extends the IB principle to the flexible class of PBRL LDPC codes as standardized in 5G. The extensions include puncturing and rate-compatible IB decoder design. As an example of the new approach, a 4-bit information bottleneck decoder is evaluated for PBRL LDPC codes over a typical range of rates. Frame error rate simulations show that the proposed scheme outperforms offset min-sum decoding algorithms and operates very close to double-precision sum-product belief propagation decoding.
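  A toy sketch of the lookup-table flavor of such decoders (not an information bottleneck design): messages are 4-bit integers indexing representative LLR levels, and a check-node update becomes a precomputed 16×16 integer table, so decoding uses only table lookups on small integers. The level grid and degree-2 check node are my assumptions; a true IB decoder instead constructs the tables by maximizing mutual information.

  ```python
  # Toy sketch: replace check-node arithmetic with a precomputed 4-bit
  # lookup table built from the box-plus operation on representative LLRs.
  import numpy as np

  levels = np.linspace(-6, 6, 16)                 # representative LLRs (assumed)

  def boxplus(a, b):
      t = np.clip(np.tanh(a / 2) * np.tanh(b / 2), -0.9999, 0.9999)
      return 2 * np.arctanh(t)

  def quantize(llr):
      return int(np.abs(levels - llr).argmin())   # nearest 4-bit index

  # Precompute the check-node table once; decoding then uses only lookups.
  CN_TABLE = np.array([[quantize(boxplus(levels[i], levels[j]))
                        for j in range(16)] for i in range(16)], dtype=np.uint8)

  m1, m2 = quantize(3.1), quantize(-1.4)          # incoming quantized messages
  print(levels[CN_TABLE[m1, m2]])                 # outgoing message's LLR level
  ```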
- As neural networks have become increasingly prolific solutions to modern problems in science and engineering, there has been a congruent rise in the popularity of the numerical machine learning techniques used to design them. While numerical methods are highly generalizable, they also tend to produce unintuitive networks with inscrutable behavior. One solution to the problem of network interpretability is to use analytical design techniques, but these methods are relatively underdeveloped compared to their numerical alternatives. To increase the utilization of analytical techniques and eventually facilitate the symbiotic integration of both design strategies, it is necessary to improve the efficacy of analytical methods on fundamental function approximation tasks that can be used to perform more complex operations. Toward this end, this manuscript extends the design constraints of the addition and subtraction subnetworks of the functional subnetwork approach (FSA) to arbitrarily many inputs, and then derives new constraints for an alternative neural encoding/decoding scheme. This encoding/decoding scheme involves storing information in the activation ratio of a subnetwork's neurons, rather than directly in their membrane voltages. We show that our new "relative" encoding/decoding scheme has both qualitative and quantitative advantages compared to the existing "absolute" encoding/decoding scheme, including helping to mitigate saturation and improving approximation accuracy. Our relative encoding scheme will be extended to other functional subnetworks in future work to assess its advantages on more complex operations.
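  A minimal sketch of the encoding idea (my illustration, not the paper's FSA constraint derivation): "absolute" encoding stores a value in a single neuron's activation directly, while "relative" encoding stores it in the ratio a / (a + b) of two neurons' activations; the ratio is invariant to a common gain, which hints at why it helps mitigate saturation. The operating range R and the complementary-pair construction are assumptions for illustration.

  ```python
  # Toy sketch: absolute vs. relative (activation-ratio) encoding of a value x.
  R = 20e-3  # assumed neuron operating voltage range (20 mV)

  def encode_absolute(x, x_max):
      return x / x_max * R                 # activation proportional to x

  def encode_relative(x, x_max):
      a = x / x_max * R                    # "value" neuron
      b = (1 - x / x_max) * R              # complementary neuron
      return a, b

  def decode_relative(a, b, x_max):
      return a / (a + b) * x_max           # ratio recovers x; common gain cancels

  a, b = encode_relative(3.0, 10.0)
  gain = 0.7                               # common attenuation (e.g., saturation)
  print(decode_relative(gain * a, gain * b, 10.0))   # -> 3.0, unaffected by gain
  ```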