

Search for: All records

Award ID contains: 2008918


  1. In the short blocklength regime, serial list decoding of tail-biting (TB) convolutional codes concatenated with an expurgating linear function (ELF) can approach the random coding union bound on frame error rate (FER) performance. Decoding complexity for a particular received word depends on how deep in the list the decoder must search to find a valid TB-ELF codeword. The average list size is close to one at low-FER operating points such as 10^−6, and serial list decoding provides a favorable average complexity compared to other decoders with similar performance for these cases. However, the average list size can be on the order of a hundred or a thousand at higher, but still practically important, FER operating points such as 10^−3. It is useful to study the tradeoff between how deep the decoder is willing to search and how closely it approaches the FER achieved by a maximum-likelihood (ML) decoder. Often, this tradeoff is framed in terms of a maximum list depth. However, this paper frames the tradeoff in terms of a maximum allowable metric between the received word and the trellis paths on the list. We consider metrics of Euclidean distance and angle. This new approach draws on the wealth of existing literature on bounded-metric decoding to characterize how the choice of maximum allowable metric controls the tradeoffs between FER performance and both decoding complexity and undetected error rate. These characterizations lead to an example of an ELF-TB convolutional code that outperforms recent results for polar codes in terms of the lowest SNR that simultaneously achieves both a total error rate less than T = 10^−3 and an undetected error rate below U = 10^−5. 
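The bounded-metric stopping rule described above can be sketched as a simple loop. The `next_best_path` generator and `is_valid_codeword` check below are hypothetical interfaces standing in for the serial list Viterbi machinery; this is an illustration of the stopping logic, not the paper's implementation:

```python
def bounded_metric_list_decode(next_best_path, is_valid_codeword, received,
                               max_distance):
    """Serial list decoding with a bounded-metric stopping rule (sketch).

    next_best_path(received) is assumed to yield (path, distance) pairs in
    order of increasing Euclidean distance from the received word.  Instead
    of capping the list depth, stop as soon as the next candidate's metric
    exceeds max_distance.  Returns (codeword-or-None, list size searched).
    """
    list_size = 0
    for path, distance in next_best_path(received):
        list_size += 1
        if distance > max_distance:
            return None, list_size      # erasure: detected decoding failure
        if is_valid_codeword(path):     # tail-biting and ELF constraints
            return path, list_size
    return None, list_size
```

Because candidates arrive in metric order, the first valid path found inside the sphere is also the most likely one, and any declared erasure is a detected (rather than undetected) error.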
  2. This paper proposes a syndrome sphere decoding (SSD) algorithm. SSD achieves the frame error rate (FER) of maximum likelihood (ML) decoding for a tail-biting convolutional code (TBCC) concatenated with an expurgating linear function (ELF) with significantly lower average and maximum decoding complexity than serial list Viterbi decoding (SLVD). SSD begins by using a Viterbi decoder to find the closest trellis path, which may not be tail-biting and may not satisfy the ELF. This trellis path has a syndrome composed of the difference between the received ELF and the ELF computed from the received message and the difference between the beginning and ending state of the trellis path. This syndrome is used to find all valid tail-biting codewords that satisfy the ELF constraint and lie within a specified distance from the closest trellis path. The proposed algorithm avoids the complexity of SLVD at the cost of a large table containing all the offsets needed for each syndrome. A hybrid decoder combines SSD and SLVD with a fixed maximum list size, balancing the maximum list size for SLVD against the size of the SSD offset table. 
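A minimal sketch of the SSD lookup step, assuming a hypothetical `offset_table` precomputed offline (mapping each syndrome to the binary offsets that turn the closest trellis path into a valid tail-biting ELF codeword inside the sphere) and a caller-supplied metric:

```python
def syndrome_sphere_decode(closest_path, syndrome, offset_table, metric):
    """One SSD lookup (sketch).

    closest_path: bits of the unconstrained Viterbi output.
    syndrome: key combining the ELF mismatch and the start/end-state
    mismatch of that path.
    offset_table: hypothetical precomputed map syndrome -> offsets.
    Returns the lowest-metric corrected codeword, or None (an erasure)
    when no offset exists for this syndrome.
    """
    best, best_cost = None, float("inf")
    for offset in offset_table.get(syndrome, ()):
        candidate = tuple(b ^ o for b, o in zip(closest_path, offset))
        cost = metric(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best
```

The table lookup replaces the serial search of SLVD: all work beyond the initial Viterbi pass is a scan over the (typically short) offset list for one syndrome.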
  3. Convolutional codes are widely used in many applications. The encoders can be implemented with a simple circuit. Decoding is often accomplished by the Viterbi algorithm or the maximum a-posteriori decoder of Bahl et al. These algorithms are sequential in nature, requiring a decoding time proportional to the message length. For low-latency applications, this latency can be problematic. This paper introduces a low-latency decoder for tail-biting convolutional codes (TBCCs) that processes multiple trellis stages in parallel. The new decoder is designed for hardware with parallel processing capabilities. The overall decoding latency is proportional to the log of the message length. The new decoding architecture is modified into a list decoder, and the list decoding performance can be enhanced by exploiting linearity to expand the search space. Certain modifications to standard TBCCs are supported by the new architecture and improve frame error rate performance. 
    Free, publicly-accessible full text available March 10, 2026
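The pairwise stage combination underlying this kind of log-latency decoding can be illustrated with a tropical (min-plus) matrix product over per-stage branch-metric matrices; the 2-state toy example and reduction loop below are illustrative, not the paper's hardware design:

```python
def min_plus(A, B):
    """Tropical (min-plus) product: best accumulated metric from state i
    to state j through any intermediate state of the two merged stages."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def combine_stages(stages):
    """Pairwise reduction of per-stage branch-metric matrices.  Each
    round halves the stage count, so with enough parallel hardware the
    latency is O(log2 N) rounds instead of O(N) sequential steps."""
    while len(stages) > 1:
        stages = [min_plus(stages[i], stages[i + 1])
                  if i + 1 < len(stages) else stages[i]
                  for i in range(0, len(stages), 2)]
    return stages[0]
```

For a TBCC, the final matrix gives end-to-end metrics for every (start state, end state) pair, so the best tail-biting metric is the minimum over the diagonal.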
  4. The Consultative Committee for Space Data Systems (CCSDS) standard for high photon efficiency uses a serially-concatenated (SC) code to encode pulse position modulated laser light. A convolutional encoder serves as the outer code and an accumulator serves as the inner code. These two component codes are connected through an interleaver. This coding scheme is called Serially Concatenated convolutionally coded Pulse Position Modulation (SCPPM) and it is used for NASA's Deep Space Optical Communications (DSOC) experiment. For traditional decoding that traverses the trellis forwards and backwards according to the Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm, the latency is on the order of the length of the trellis, which has 10,080 stages for the rate 2/3 DSOC code. This paper presents a novel alternative approach that simultaneously processes all trellis stages, successively combining pairs of stages into a meta-stage. This approach has latency that is on the order of the log base-2 of the number of stages. The new decoder is implemented using the Compute Unified Device Architecture (CUDA) platform on an Nvidia Graphics Processing Unit (GPU). Compared to Field Programmable Gate Array (FPGA) implementations, the GPU implementation offers easier development, scalability, and portability across GPU models. The GPU implementation provides a dramatic increase in speed that facilitates more thorough simulation as well as enables a shift from FPGA to GPU processors for DSOC ground stations. 
    Free, publicly-accessible full text available March 1, 2026
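For the BCJR-style recursion, merging two adjacent stages sums over the shared intermediate state, which is exactly an ordinary matrix product of per-stage transition-weight matrices; associativity is what permits the log-depth tree reduction. A toy probability-domain sketch (real decoders work in the log domain for numerical stability):

```python
from functools import reduce

def stage_product(A, B):
    """Merge two BCJR stages: summing over the shared intermediate state
    is a plain matrix product of state-transition weight matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tree_reduce(stages):
    """Associativity lets N stage matrices be merged pairwise in
    ceil(log2 N) rounds, each round fully parallel (sketch)."""
    while len(stages) > 1:
        stages = [stage_product(stages[i], stages[i + 1])
                  if i + 1 < len(stages) else stages[i]
                  for i in range(0, len(stages), 2)]
    return stages[0]
```

The tree reduction computes exactly the same product as the sequential forward recursion, which is why the reordering changes latency but not the decoder's output.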
  5. Convolutional codes have been widely studied and used in many systems. As the number of memory elements increases, frame error rate (FER) improves but computational complexity increases exponentially. Recently, decoders that achieve reduced average complexity through list decoding have been demonstrated when the convolutional encoder polynomials share a common factor that can be understood as a CRC or more generally an expurgating linear function (ELF). However, classical convolutional codes avoid such common factors because they result in a catastrophic encoder. This paper provides a way to access the complexity reduction possible with list decoding even when the convolutional encoder polynomials do not share a common factor. Decomposing the original code into component encoders that fully exclude some polynomials can allow an ELF to be factored from each component. Dual list decoding of the component encoders can often find the maximum-likelihood (ML) codeword. Including a fallback to regular Viterbi decoding yields excellent FER performance while requiring less average complexity than always performing Viterbi on the original trellis. A best-effort dual list decoder that avoids Viterbi has performance similar to the ML decoder. Component encoders that have a shared polynomial allow for even greater complexity reduction. 
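The common-factor condition that separates the two cases above can be checked by computing the GCD of the generator polynomials over GF(2). A small sketch, with polynomials encoded as integer bitmasks (bit i holds the coefficient of x^i); the octal 133/171 pair is the standard constraint-length-7 example, and its GCD of 1 is what makes that encoder non-catastrophic:

```python
def gf2_mod(a, b):
    """Remainder of GF(2) polynomial division (polynomials as bitmasks)."""
    while b and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    """Euclidean algorithm over GF(2)[x]."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a
```

A nontrivial GCD is the shared factor that can serve as the ELF; GCD equal to 1 is the case this paper addresses by decomposing the code into component encoders.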
  6. With a sufficiently large list size, the serial list Viterbi algorithm (S-LVA) provides maximum likelihood (ML) decoding of a concatenated convolutional code (CC) and an expurgating linear function (ELF), which is similar in function to a cyclic redundancy check (CRC), but doesn't enforce that the code be cyclic. However, S-LVA with a large list size requires considerable complexity. This paper exploits linearity to reduce decoding complexity for tail-biting CCs (TBCCs) concatenated with ELFs. 
  7. List Viterbi decoders are a very effective way to improve the performance of block codes in combination with an error detection outer code. In this work, we combine an efficient serial list Viterbi decoder design with an existing serially concatenated, convolutionally-encoded, pulse position modulated code (SCPPM) used in space communication that exhibits poor performance because of an error floor. The SCPPM code features a 32-bit CRC that provides powerful error detection capability and an outer four-state convolutional code that makes it suitable for a list Viterbi decoder. The system's code is very long, consisting of 15,120 bits, which renders a high-complexity decoder impractical, while the strong error detection allows for a list decoder with very low undetected error probability. We use a very efficient list Viterbi decoder algorithm that avoids most of the redundant operations, producing a low-complexity serial list Viterbi decoder. The combined system reduces the error floor, moderately for the original version of the system, and completely suppresses it when the code length is increased to four times longer. 
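The error-detection step that drives the serial list search is plain polynomial long division over GF(2). The sketch below uses a toy 3-bit CRC for illustration, not the code's actual 32-bit CRC:

```python
def crc_remainder(bits, poly_bits):
    """Divide a bit sequence (MSB first) by a CRC polynomial over GF(2)
    and return the remainder; an all-zero remainder means the candidate
    passes the check.  poly_bits lists the polynomial's coefficients
    from the highest-degree term down."""
    reg = list(bits)
    p = len(poly_bits) - 1          # degree of the CRC polynomial
    for i in range(len(reg) - p):
        if reg[i]:
            for j, pb in enumerate(poly_bits):
                reg[i + j] ^= pb
    return reg[-p:]
```

In a serial list Viterbi decoder, this check is applied to each candidate path in likelihood order, and the search stops at the first path whose remainder is zero.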
  8. An expurgating linear function (ELF) is an outer code that disallows low-weight codewords of the inner code. ELFs can be designed either to maximize the minimum distance or to minimize the codeword error rate (CER) of the expurgated code. A list-decoding sieve can efficiently identify ELFs that maximize the minimum distance of the expurgated code. For convolutional inner codes, this paper provides analytical distance spectrum union (DSU) bounds on the CER of the concatenated code. For short codeword lengths, ELFs transform a good inner code into a great concatenated code. For a constant message size of K = 64 bits or constant codeword blocklength of N = 152 bits, an ELF can reduce the gap at CER 10^−6 between the DSU and the random-coding union (RCU) bounds from over 1 dB for the inner code alone to 0.23 dB for the concatenated code. The DSU bounds can also characterize puncturing that mitigates the rate overhead of the ELF while maintaining the DSU-to-RCU gap. List Viterbi decoding guided by the ELF achieves maximum likelihood (ML) decoding of the concatenated code with a sufficiently large list size. The rate-K/(K+m) ELF outer code reduces rate and list decoding increases decoder complexity. As SNR increases, the average list size converges to 1 and average complexity is similar to Viterbi decoding on the trellis of the inner code. For rare large-magnitude noise events, which occur less often than the frame error rate (FER) of the inner code, a deep search in the list finds the ML codeword. 
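The flavor of a distance-spectrum union bound can be sketched for BPSK over AWGN: weight a Gaussian tail probability by the multiplicity A_d of each distance d. This generic bound only illustrates the idea, not the paper's exact DSU expression:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(spectrum, rate, ebno_db):
    """Union bound on codeword error rate from a distance spectrum
    {d: A_d} for rate-R BPSK signaling over AWGN:
    P_e <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(a_d * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, a_d in spectrum.items())
```

Expurgation acts directly on this sum: the ELF removes the low-d terms of the inner code's spectrum, which dominate the bound at high SNR.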
  9. Maximum-likelihood (ML) decoding of tail-biting convolutional codes (TBCCs) with S = 2^v states traditionally requires a separate S-state trellis for each of the S possible starting/ending states, resulting in complexity proportional to S^2. Lower-complexity ML decoders for TBCCs have complexity proportional to S log S. This high complexity motivates the use of the wrap-around Viterbi algorithm, which sacrifices ML performance for complexity proportional to S. This paper presents an ML decoder for TBCCs that uses list decoding to achieve an average complexity proportional to S at operational signal-to-noise ratios where the expected list size is close to one. The new decoder uses parallel list Viterbi decoding with a progressively growing list size operating on a single S-state trellis. Decoding does not terminate until the most likely tail-biting codeword has been identified. This approach is extended to ML decoding of tail-biting convolutional codes concatenated with a cyclic redundancy check code as explored recently by Yang et al. and King et al. Constraining the maximum list size further reduces complexity but sacrifices guaranteed ML performance, increasing errors and introducing erasures. 
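The progressively growing list can be sketched as geometric doubling. Here `list_decode(received, L)` stands in for a hypothetical L-best parallel list Viterbi call returning paths in likelihood order; a real implementation would reuse work between rounds rather than restart:

```python
def progressive_list_decode(list_decode, received, is_valid, max_size):
    """Double the list size until a valid tail-biting codeword appears
    (sketch).  Because candidates arrive in likelihood order, the first
    valid path found is the most likely one.  With a finite max_size,
    exhausting the budget yields an erasure instead of guaranteed ML."""
    size = 1
    while size <= max_size:
        for path in list_decode(received, size):
            if is_valid(path):
                return path
        size *= 2
    return None  # list limit reached: report an erasure
```

Doubling keeps the total work within a constant factor of the final list size, which is why average complexity stays near a single Viterbi pass when the expected list size is close to one.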
  10. This paper applies probabilistic amplitude shaping (PAS) to cyclic redundancy check (CRC)-aided tail-biting trellis-coded modulation (TCM). CRC-TCM-PAS produces practical codes for short block lengths on the additive white Gaussian noise (AWGN) channel. In the transmitter, equally likely message bits are encoded by a distribution matcher (DM) generating amplitude symbols with a desired distribution. A CRC is appended to the sequence of amplitude symbols, and this sequence is then encoded and modulated by TCM to produce real-valued channel input signals. This paper proves that the sign values produced by the TCM are asymptotically equally likely to be positive or negative. The CRC-TCM-PAS scheme can thus generate channel input symbols with a symmetric capacity-approaching probability mass function. The paper provides an analytical upper bound on the frame error rate (FER) of the CRC-TCM-PAS system over the AWGN channel. This FER upper bound is the objective function used for jointly optimizing the CRC and convolutional code. Additionally, this paper proposes a multi-composition DM, which is a collection of multiple constant-composition DMs. The optimized CRC-TCM-PAS systems achieve frame error rates below the random coding union (RCU) bound in AWGN and outperform the short-blocklength PAS systems with various other forward error correction codes studied in [2]. 
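The amplitude distribution a PAS distribution matcher typically targets is from the Maxwell-Boltzmann family, P(a) proportional to exp(-nu * a^2). A small sketch; the amplitude alphabet and shaping parameter `nu` below are illustrative choices, not values from the paper:

```python
import math

def maxwell_boltzmann(amplitudes, nu):
    """Maxwell-Boltzmann shaping distribution P(a) ~ exp(-nu * a^2)
    over an amplitude alphabet, normalized to sum to one.  Larger nu
    concentrates probability on small amplitudes, trading rate for
    reduced average transmit energy."""
    weights = [math.exp(-nu * a * a) for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]
```

A constant-composition DM approximates such a distribution with fixed symbol counts per block; the multi-composition DM proposed in the paper combines several compositions to approximate it more closely.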