

Title: Efficient Maximum-Likelihood Decoding for TBCC and CRC-TBCC Codes via Parallel List Viterbi
Maximum-likelihood (ML) decoding of tail-biting convolutional codes (TBCCs) with S = 2^v states traditionally requires a separate S-state trellis for each of the S possible starting/ending states, resulting in complexity proportional to S^2. Lower-complexity ML decoders for TBCCs have complexity proportional to S log S. This high complexity motivates the use of the wrap-around Viterbi algorithm, which sacrifices ML performance for complexity proportional to S. This paper presents an ML decoder for TBCCs that uses list decoding to achieve an average complexity proportional to S at operational signal-to-noise ratios where the expected list size is close to one. The new decoder uses parallel list Viterbi decoding with a progressively growing list size operating on a single S-state trellis. Decoding does not terminate until the most likely tail-biting codeword has been identified. This approach is extended to ML decoding of tail-biting convolutional codes concatenated with a cyclic redundancy check code, as explored recently by Yang et al. and King et al. Constraining the maximum list size further reduces complexity but sacrifices guaranteed ML performance, increasing errors and introducing erasures.
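The decoder described above can be illustrated with a small hard-decision sketch. The Python below is purely illustrative, not the paper's implementation: it uses a toy rate-1/2, 4-state (7,5) encoder, Hamming branch metrics in place of soft-decision metrics, and a conservative stopping rule (the best tail-biting survivor is provably ML once its metric is no worse than every state's L-th best final metric, since parallel list Viterbi keeps exactly the L best paths into each state). All names are hypothetical.

```python
import math

# Toy rate-1/2, memory nu = 2 feedforward encoder, generators (7, 5) octal.
# The trellis state is the nu most recent input bits, newest in the high bit.
G = (0b111, 0b101)
NU = 2
S = 1 << NU                               # S = 2^v trellis states

def encode_step(state, bit):
    """One trellis branch: returns (next_state, (out0, out1))."""
    reg = (bit << NU) | state             # register [u_t, u_{t-1}, u_{t-2}]
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return reg >> 1, out

def tb_encode(msg):
    """Tail-biting encoding: start in the state the message will end in."""
    state = (msg[-1] << 1) | msg[-2]
    out = []
    for b in msg:
        state, (o0, o1) = encode_step(state, b)
        out += [o0, o1]
    return out

def parallel_list_viterbi(rx, L):
    """Keep the L best (metric, start_state, inputs) survivors per state on a
    single S-state trellis, with every start state allowed."""
    lists = [[(0, s, ())] for s in range(S)]
    for t in range(len(rx) // 2):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        nxt = [[] for _ in range(S)]
        for s in range(S):
            for metric, start, bits in lists[s]:
                for bit in (0, 1):
                    ns, (o0, o1) = encode_step(s, bit)
                    d = (o0 != r0) + (o1 != r1)      # Hamming branch metric
                    nxt[ns].append((metric + d, start, bits + (bit,)))
        lists = [sorted(cands)[:L] for cands in nxt]
    return lists

def ml_tbcc_decode(rx, L_max=64):
    """Double the list size until the best tail-biting survivor (start state
    equals end state) is provably ML: nothing pruned can beat it once its
    metric is <= every state's L-th best final metric."""
    L = 1
    while L <= L_max:
        lists = parallel_list_viterbi(rx, L)
        tb = [e for s in range(S) for e in lists[s] if e[1] == s]
        full = [lst[-1][0] for lst in lists if len(lst) == L]
        bound = min(full) if full else math.inf
        if tb and min(tb)[0] <= bound:
            return min(tb)[2]             # ML tail-biting input sequence
        L *= 2                            # grow the list and try again
    return None                           # erasure: list-size cap reached

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = tb_encode(msg)
rx[3] ^= 1                                # one channel bit flip
assert list(ml_tbcc_decode(rx)) == msg
```

At high SNR the loop above usually exits at L = 1, which is how the average complexity stays proportional to S; the L_max cap mirrors the constrained-list-size trade-off the abstract mentions.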
Award ID(s):
2008918
NSF-PAR ID:
10510788
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-2611-6
Page Range / eLocation ID:
1 to 5
Subject(s) / Keyword(s):
Tail-Biting Convolutional Codes, Convolutional Codes, Cyclic Redundancy Check, List Viterbi Decoding, List Decoding
Format(s):
Medium: X
Location:
Brest, France
Sponsoring Org:
National Science Foundation
More Like this
  1. Recently, rate-1/ω zero-terminated and tail-biting convolutional codes (ZTCCs and TBCCs) with cyclic-redundancy-check (CRC)-aided list decoding have been shown to closely approach the random-coding union (RCU) bound for short blocklengths. This paper designs CRCs for rate-(ω−1)/ω CCs with short blocklengths, considering both the ZT and TB cases. The CRC design seeks to optimize the frame error rate (FER) performance of the code resulting from the concatenation of the CRC and the CC. Utilization of the dual trellis proposed by Yamada et al. lowers the complexity of CRC-aided serial list Viterbi decoding (SLVD) of ZTCCs and TBCCs. CRC-aided SLVD of the TBCCs closely approaches the RCU bound at a blocklength of 128.
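A hedged sketch of the CRC-aided serial list decoding loop described in the entry above. The CRC routines are standard GF(2) polynomial division; the degree-3 polynomial and the list cap are illustrative choices, and the `ranked` candidate stream stands in for the serial list Viterbi (or dual-trellis) machinery the paper actually uses.

```python
CRC_POLY = 0b1011        # toy degree-3 CRC polynomial, purely illustrative

def crc_append(msg, poly=CRC_POLY):
    """Append the GF(2) remainder of msg * x^deg so the whole word is
    divisible by poly."""
    deg = poly.bit_length() - 1
    reg = 0
    for b in list(msg) + [0] * deg:
        reg = (reg << 1) | b
        if reg >> deg:
            reg ^= poly
    return list(msg) + [(reg >> i) & 1 for i in range(deg - 1, -1, -1)]

def crc_passes(bits, poly=CRC_POLY):
    """True iff the bit sequence is divisible by the CRC polynomial."""
    deg = poly.bit_length() - 1
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg >> deg:
            reg ^= poly
    return reg == 0

def crc_aided_list_decode(ranked, max_list=256):
    """Serial list decoding loop: walk candidates from most likely to least
    likely and return the first whose decoded input bits (message + CRC)
    pass the check; declare an erasure if the list cap is hit first."""
    for rank, (metric, in_bits) in enumerate(ranked, start=1):
        if rank > max_list:
            break
        if crc_passes(in_bits):
            return in_bits, metric, rank  # rank = list size actually used
    return None

# In the paper, `ranked` comes from SLVD on the (dual) trellis; any iterable
# of (metric, input_bits) pairs sorted by metric works here, e.g. brute-force
# enumeration for tiny blocklengths.
assert crc_passes(crc_append([1, 0, 1, 1]))
```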
  2. With a sufficiently large list size, the serial list Viterbi algorithm (S-LVA) provides maximum-likelihood (ML) decoding of a concatenated convolutional code (CC) and an expurgating linear function (ELF), which is similar in function to a cyclic redundancy check (CRC) but does not require the code to be cyclic. However, S-LVA with a large list size incurs considerable complexity. This paper exploits linearity to reduce decoding complexity for tail-biting CCs (TBCCs) concatenated with ELFs.
  3. Tal, Ido (Ed.)
    Recently, rate-1/n zero-terminated (ZT) and tail-biting (TB) convolutional codes (CCs) with cyclic redundancy check (CRC)-aided list decoding have been shown to closely approach the random-coding union (RCU) bound for short blocklengths. This paper designs CRC polynomials for rate-(n-1)/n ZT and TB CCs with short blocklengths. This paper considers both standard rate-(n-1)/n CC polynomials and rate-(n-1)/n designs resulting from puncturing a rate-1/2 code. The CRC polynomials are chosen to maximize the minimum distance d_min and minimize the number of nearest neighbors A_d_min. For the standard rate-(n-1)/n codes, utilization of the dual trellis proposed by Yamada et al. lowers the complexity of CRC-aided serial list Viterbi decoding (SLVD). CRC-aided SLVD of the TBCCs closely approaches the RCU bound at a blocklength of 128. This paper compares the frame error rate (FER) performance (gap to the RCU bound) and complexity of the CRC-aided standard and punctured ZTCCs and TBCCs. This paper also explores the complexity-performance trade-off for three TBCC decoders: a single-trellis approach, a multi-trellis approach, and a modified single-trellis approach with pre-processing using the wrap-around Viterbi algorithm.
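A sketch of the CRC-search criterion in the entry above: brute-force the distance spectrum of each candidate concatenation and keep the polynomial with the largest d_min, breaking ties by the smallest A_d_min. The `conv_encode` argument is a caller-supplied (hypothetical) ZT or TB encoder; because the composite encoder is linear, the weight spectrum over nonzero messages equals the distance spectrum. This exhaustive search is only feasible at very short message lengths and is not the paper's actual search procedure.

```python
import itertools

def crc_append(msg, poly):
    """Append deg(poly) CRC bits: remainder of msg * x^deg modulo poly (GF(2))."""
    deg = poly.bit_length() - 1
    reg = 0
    for b in list(msg) + [0] * deg:
        reg = (reg << 1) | b
        if reg >> deg:
            reg ^= poly
    return list(msg) + [(reg >> i) & 1 for i in range(deg - 1, -1, -1)]

def weight_spectrum_head(encode, k):
    """(d_min, A_dmin) of the code {encode(u) : u != 0}, found by brute force."""
    d_min, a_dmin = None, 0
    for u in itertools.product((0, 1), repeat=k):
        if not any(u):
            continue
        w = sum(encode(list(u)))          # codeword Hamming weight
        if d_min is None or w < d_min:
            d_min, a_dmin = w, 1
        elif w == d_min:
            a_dmin += 1
    return d_min, a_dmin

def pick_crc(conv_encode, k, deg):
    """Search all degree-`deg` CRC polynomials (x^deg and x^0 taps set),
    maximizing d_min of the concatenated code and then minimizing A_dmin."""
    best_key, best_poly = None, None
    for mid in range(1 << (deg - 1)):
        poly = (1 << deg) | (mid << 1) | 1
        enc = lambda u, p=poly: conv_encode(crc_append(u, p))
        d, a = weight_spectrum_head(enc, k)
        key = (d, -a)                     # larger d_min first, then fewer neighbors
        if best_key is None or key > best_key:
            best_key, best_poly = key, poly
    return best_poly, best_key[0], -best_key[1]
```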
  4. Convolutional codes have been widely studied and used in many systems. As the number of memory elements increases, frame error rate (FER) improves but computational complexity increases exponentially. Recently, decoders that achieve reduced average complexity through list decoding have been demonstrated when the convolutional encoder polynomials share a common factor that can be understood as a CRC or, more generally, an expurgating linear function (ELF). However, classical convolutional codes avoid such common factors because they result in a catastrophic encoder. This paper provides a way to access the complexity reduction possible with list decoding even when the convolutional encoder polynomials do not share a common factor. Decomposing the original code into component encoders that fully exclude some polynomials can allow an ELF to be factored from each component. Dual list decoding of the component encoders can often find the ML codeword. Including a fallback to regular Viterbi decoding yields excellent FER performance while requiring less average complexity than always performing Viterbi on the original trellis. A best-effort dual list decoder that avoids Viterbi has performance similar to the ML decoder. Component encoders that have a shared polynomial allow for even greater complexity reduction.
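The decoding strategy in that last entry reduces to a cheap-first, expensive-fallback control flow. A minimal sketch, with both decoder arguments as caller-supplied stand-ins for the paper's dual list and Viterbi decoders (hypothetical interfaces):

```python
def decode_with_fallback(rx, dual_list_decode, viterbi_decode):
    """Try dual list decoding of the component encoders first; run full
    Viterbi on the original trellis only when it fails, so the expensive
    decoder is invoked rarely and average complexity stays low."""
    est = dual_list_decode(rx)            # returns a codeword estimate or None
    if est is not None:
        return est, "dual-list"
    return viterbi_decode(rx), "viterbi-fallback"

def decode_best_effort(rx, dual_list_decode):
    """Best-effort variant from the abstract: skip the Viterbi fallback and
    declare an erasure instead, bounding worst-case work at a small cost in
    error performance."""
    est = dual_list_decode(rx)
    return (est, "dual-list") if est is not None else (None, "erasure")
```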