This paper introduces a neural polar decoder (NPD) for deletion channels with a constant deletion rate. Existing polar decoders for deletion channels have a high computational complexity of O(N^4), where N is the block length, which limits the application of polar codes over deletion channels to short-to-moderate block lengths. In this work, we demonstrate that employing NPDs for deletion channels can reduce this computational complexity. First, we extend the NPD architecture to support deletion channels. Specifically, the NPD consists of four neural networks (NNs), each replicating a fundamental successive cancellation (SC) decoder operation; to support deletion channels, we modify the architecture of only one of them. The computational complexity of the NPD is O(AN log N), where the parameter A represents a computational budget determined by the user and is independent of the channel. We evaluate the extended NPD on deletion channels with deletion rates δ ∈ {0.01, 0.1} and verify it against the ground truth given by the trellis decoder of Tal et al. We further show that, owing to the reduced complexity of the NPD, we can incorporate list decoding and further improve performance. We believe the extended NPD presented here could find applications in future technologies such as DNA storage.
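For orientation, below is a minimal sketch of the classical SC recursion (min-sum LLR form, memoryless channel), with the two core operations kept as standalone functions. This is not the NPD itself, and the names (f_op, g_op, sc_decode) are illustrative. In an NPD, these per-element operations, together with the embedding of the channel outputs, are replaced by small neural networks while the recursion itself is unchanged, so the O(N log N) count of operation calls is preserved and the factor A in O(AN log N) accounts for the per-call cost of the networks.

```python
import numpy as np

def f_op(a, b):
    # check-node combine (min-sum approximation), applied element-wise to LLR halves
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_op(a, b, u_hat):
    # variable-node combine, conditioned on the re-encoded bits of the left subtree
    return b + (1 - 2 * u_hat) * a

def sc_decode(llr, frozen):
    """Decode one block: llr has length N (a power of two), frozen marks frozen positions."""
    N = len(llr)
    if N == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)   # frozen bits are known to be 0
        return np.array([u]), np.array([u])
    a, b = llr[: N // 2], llr[N // 2:]
    u_left, x_left = sc_decode(f_op(a, b), frozen[: N // 2])
    u_right, x_right = sc_decode(g_op(a, b, x_left), frozen[N // 2:])
    return np.concatenate([u_left, u_right]), np.concatenate([x_left ^ x_right, x_right])

# toy usage: N = 8, first half frozen, all-zero codeword observed through Gaussian LLRs
rng = np.random.default_rng(0)
llr = rng.normal(loc=2.0, scale=1.0, size=8)       # positive LLRs favour bit 0
frozen = np.array([True] * 4 + [False] * 4)
u_hat, _ = sc_decode(llr, frozen)
print(u_hat)                                       # decoded input bits (frozen + information)
```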
Polar Codes for the Deletion Channel: Weak and Strong Polarization
This paper presents the first proof of polarization for the deletion channel with a constant deletion rate and a regular hidden-Markov input distribution. A key part of this work involves representing the deletion channel using a trellis and describing the plus and minus polar-decoding operations on this trellis. In particular, the plus and minus operations can be seen as combining adjacent trellis stages to yield a new trellis with half as many stages. Using this viewpoint, we prove a weak polarization theorem for standard polar codes on the deletion channel. To achieve strong polarization, we modify this scheme by adding guard bands of repeated zeros between various parts of the codeword. Using this approach, we obtain a scheme whose rate approaches the mutual information and whose probability of error decays exponentially in the cube-root of the block length.
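For reference, the minus and plus transforms referred to above are the standard Arıkan channel combining and splitting operations, which for a memoryless channel W read:

$$W^-(y_1, y_2 \mid u_1) = \frac{1}{2}\sum_{u_2 \in \{0,1\}} W(y_1 \mid u_1 \oplus u_2)\, W(y_2 \mid u_2),$$
$$W^+(y_1, y_2, u_1 \mid u_2) = \frac{1}{2}\, W(y_1 \mid u_1 \oplus u_2)\, W(y_2 \mid u_2).$$

On the deletion channel the received word cannot be split into well-defined halves y_1 and y_2, which is why the paper carries out these operations on a trellis: each minus/plus step combines adjacent trellis stages, yielding a new trellis with half as many stages.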
- PAR ID: 10173122
- Date Published:
- Journal Name: 2019 IEEE International Symposium on Information Theory (ISIT)
- Page Range / eLocation ID: 1362 to 1366
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The Consultative Committee for Space Data Systems (CCSDS) standard for high photon efficiency uses a serially concatenated (SC) code to encode pulse-position-modulated laser light. A convolutional encoder serves as the outer code and an accumulator serves as the inner code; the two component codes are connected through an interleaver. This coding scheme is called Serially Concatenated convolutionally coded Pulse Position Modulation (SCPPM), and it is used for NASA's Deep Space Optical Communications (DSOC) experiment. For traditional decoding, which traverses the trellis forwards and backwards according to the Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm, the latency is on the order of the length of the trellis, which has 10,080 stages for the rate-2/3 DSOC code. This paper presents a novel alternative approach that processes all trellis stages simultaneously, successively combining pairs of stages into meta-stages, so that the latency is on the order of the base-2 logarithm of the number of stages (a toy sketch of this pairwise stage combining appears after this list). The new decoder is implemented using the Compute Unified Device Architecture (CUDA) platform on an Nvidia Graphics Processing Unit (GPU). Compared to Field Programmable Gate Array (FPGA) implementations, the GPU implementation offers easier development, scalability, and portability across GPU models. It also provides a dramatic increase in speed that facilitates more thorough simulation and enables a shift from FPGA to GPU processors for DSOC ground stations.
-
In this work, a novel data-driven methodology for designing polar codes is proposed. The methodology is suitable for the case where the channel is given as a "black box": the designer has access to the channel for generating observations of its inputs and outputs, but does not have access to an explicit channel model. The methodology consists of two components: (1) a neural estimation of a sufficient statistic of the channel outputs using recent advances in Kullback-Leibler (KL) divergence estimation, and (2) a neural successive cancellation (NSC) decoder using three neural networks that replace the core elements of the successive cancellation (SC) decoder. The parameters of the neural networks are determined during a training phase in which the mutual information of the effective channels is estimated (a toy sketch of such a neural mutual-information estimator appears after this list). We demonstrate the performance of the algorithm on memoryless channels and on finite-state channels, and compare the results with the optimal decoding given by the SC and SC trellis decoders, respectively.
-
Trellis-based detection with pattern-dependent noise prediction (PDNP) has become standard practice in the HDD industry. In a typical single-track signal processing scheme, the received samples from the read head are first filtered by a linear equalizer with a 1D partial response (PR). The equalizer output flows into a trellis-based (e.g., BCJR) detector that employs a super-trellis based on the PR-mask ISI channel and a 1D pattern-dependent noise prediction (1D PDNP) algorithm. The effective channel model has a media noise term which models signal-dependent noise due to, e.g., magnetic grains intersected by bit boundaries; the media noise can influence two or more bit readback values. The trellis detector sends soft estimates of the coded bits to a channel decoder, which outputs estimates of the user information bits. Traditional PDNP has two problems. First, when the number of tracks N_t simultaneously processed is greater than one, e.g., in two-dimensional magnetic recording (TDMR), the number of trellis states can become impractically large; this is the state explosion problem. Second, the relatively simple autoregressive noise model and linear prediction used in PDNP is somewhat restrictive and may not accurately represent the media noise, especially at high storage densities; this is the modeling problem. To address the state explosion problem, we separate the ISI detection and media noise prediction into two separate detectors and use the turbo principle to exchange information between them, thus avoiding a super-trellis. To address the modeling problem, we design and train deep neural network (DNN) based media noise predictors (a toy sketch of such a learned noise predictor appears after this list). Because DNN models are much more general than autoregressive models, they model magnetic media noise more accurately than PDNP, and this more accurate modeling results in reduced detector BERs compared to PDNP.
-
Polarization is an unprecedented coding technique in that it not only achieves channel capacity, but also does so at a faster speed of convergence than any other technique. This speed is measured by the "scaling exponent", and its importance is threefold. First, estimating the scaling exponent is challenging and demands a deeper understanding of the dynamics of communication channels. Second, scaling exponents serve as a benchmark for different variants of polar codes and help us select the proper variant for real-life applications. Third, the need to optimize the scaling exponent sheds light on how to reinforce the design of polar codes. In this paper, we generalize the binary erasure channel (BEC), the simplest communication channel and the protagonist of many polar code studies, to the "tetrahedral erasure channel" (TEC). We then invoke Mori-Tanaka's 2 × 2 matrix over 𝔽_4 to construct polar codes over the TEC. Our main contribution is showing that the dynamics of the TEC converge to an almost-one-parameter family of channels, which leads to an upper bound of 3.328 on the scaling exponent. This is the first non-binary matrix whose scaling exponent has been upper-bounded; it also polarizes the BEC faster than all known binary matrices up to size 23 × 23. Our result indicates that expanding the alphabet is a more effective and practical alternative to enlarging the matrix in order to achieve faster polarization (a small numerical illustration of polarization speed on the BEC follows below).
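For the SCPPM/DSOC item above: a minimal sketch, in Python/NumPy rather than CUDA, of the pairwise stage-combining idea. It is not the actual DSOC decoder. Each trellis stage is summarized here by an arbitrary matrix of unnormalized transition weights between its left and right states; combining two adjacent stages is then a matrix product, and because that product is associative, all stages can be folded pairwise into meta-stages, so the overall combination takes only ceil(log2 S) rounds for S stages instead of a stage-by-stage forward sweep.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
num_states, num_stages = 4, 16          # toy sizes; the DSOC trellis has 10,080 stages
stages = [rng.random((num_states, num_states)) for _ in range(num_stages)]

def combine_round(mats):
    # fold adjacent pairs into meta-stages; on a GPU every pair is combined concurrently
    out = [mats[i] @ mats[i + 1] for i in range(0, len(mats) - 1, 2)]
    if len(mats) % 2:                   # odd stage left over, carry it forward
        out.append(mats[-1])
    return out

tree, rounds = list(stages), 0
while len(tree) > 1:
    tree = combine_round(tree)
    rounds += 1

sequential = reduce(np.matmul, stages)  # the latency-limited stage-by-stage sweep
print(rounds, np.allclose(tree[0], sequential))   # 4 rounds for 16 stages, results agree
```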
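For the data-driven polar design item above: a minimal sketch of a Donsker-Varadhan (MINE-style) neural mutual-information estimator, the kind of KL-based neural estimation the abstract refers to. It is not the paper's code; the toy data, network size, and training schedule are all assumptions, chosen so that the true I(X;Y) (a BSC with crossover 0.1 and uniform input) is known in closed form for a sanity check.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
n, p = 5000, 0.1
x = torch.randint(0, 2, (n, 1)).float()
y = (x + (torch.rand(n, 1) < p).float()) % 2        # BSC(0.1) output

T = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # critic network
opt = torch.optim.Adam(T.parameters(), lr=1e-3)

def dv_bound():
    joint = T(torch.cat([x, y], dim=1)).mean()
    y_perm = y[torch.randperm(n)]                    # shuffled pairs break the dependence
    marg = torch.logsumexp(T(torch.cat([x, y_perm], dim=1)).squeeze(1), dim=0) - math.log(n)
    return joint - marg                              # lower bound on I(X;Y) in nats

for step in range(2000):
    loss = -dv_bound()                               # maximize the Donsker-Varadhan bound
    opt.zero_grad(); loss.backward(); opt.step()

true_mi = math.log(2) + p * math.log(p) + (1 - p) * math.log(1 - p)
print(f"estimated I(X;Y) ≈ {dv_bound().item():.3f} nats vs true {true_mi:.3f} nats")
```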
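For the TDMR/PDNP item above: a minimal sketch of the modeling idea only, not the paper's detector. A small DNN maps a local bit pattern plus a few past noise samples to a prediction of the current media-noise sample; the synthetic noise below is both correlated and signal-dependent (noisier at bit transitions), the kind of structure a purely linear autoregressive predictor captures less well. All names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
L, W, P = 8000, 5, 3                       # sequence length, pattern window, past-noise taps
bits = torch.randint(0, 2, (L,)).float()
trans = torch.cat([torch.zeros(1), (bits[1:] != bits[:-1]).float()])
sigma = 0.05 + 0.25 * trans                # innovations are noisier at bit transitions

# correlated, signal-dependent "media noise": AR(1) with pattern-dependent innovations
noise = torch.zeros(L)
w = torch.randn(L)
for i in range(1, L):
    noise[i] = 0.8 * noise[i - 1] + sigma[i] * w[i]

# training pairs: (bit window, past noise samples) -> current noise sample
X = torch.stack([torch.cat([bits[i - W + 1: i + 1], noise[i - P: i]]) for i in range(W, L)])
t = noise[W:]

net = nn.Sequential(nn.Linear(W + P, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    loss = nn.functional.mse_loss(net(X).squeeze(1), t)
    opt.zero_grad(); loss.backward(); opt.step()

# in a real detector the prediction would refine the branch metrics of the trellis stages
print(f"raw noise power {noise.var().item():.4f} vs DNN prediction residual {loss.item():.4f}")
```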
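For the scaling-exponent item above: a small numerical illustration for the binary erasure channel with Arıkan's 2 × 2 kernel only, not the paper's 𝔽_4/TEC construction. One polarization step maps an erasure probability ε to the pair (2ε − ε², ε²), and the scaling exponent µ describes how fast the fraction of still-unpolarized synthetic channels shrinks with block length N = 2ⁿ, roughly like N^(−1/µ); for this binary kernel on the BEC, µ ≈ 3.627, which is the kind of benchmark the paper's 3.328 bound for the TEC kernel is measured against.

```python
import numpy as np

def polarize(eps, levels):
    # repeatedly apply the minus/plus erasure-probability maps of the 2x2 kernel
    e = np.array([eps])
    for _ in range(levels):
        e = np.concatenate([2 * e - e**2, e**2])
    return e

delta = 1e-3
for n in range(6, 18, 2):
    e = polarize(0.5, n)
    unpolarized = np.mean((e > delta) & (e < 1 - delta))   # neither good nor useless yet
    print(f"N = 2^{n:<2d}  fraction of unpolarized channels: {unpolarized:.4f}")
```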