

Search for: All records

Creators/Authors contains: "Wesel, Richard"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo period.


  1. This paper presents new achievability bounds on the maximal achievable rate of variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC) with a fixed message set of size M = 2^k. We provide bounds for two cases: the first considers VLSF codes with possibly infinite decoding times and zero error probability; the second limits the maximum (finite) number of decoding times and specifies a maximum tolerable probability of error. Both achievability bounds are proved by constructing a new VLSF code that employs systematic transmission of the first k message bits followed by random linear fountain parity bits decoded with a rank decoder. For VLSF codes with infinite decoding times, our new bound outperforms the state-of-the-art result for the BEC by Devassy et al. in 2016. We show that the backoff from capacity reduces to zero as the erasure probability decreases, giving a negative answer to the open question Devassy et al. posed on whether the 23.4% backoff from capacity at k = 3 is fundamental to all BECs. For VLSF codes with finite decoding times, numerical evaluations show that systematic transmission followed by random linear fountain coding achieves higher rates than random linear coding. (A small simulation sketch of the fountain-plus-rank-decoder idea follows this record.)
    Free, publicly-accessible full text available July 7, 2025
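The following minimal Python sketch illustrates the flavor of the construction described above: systematic BEC transmission of k message bits followed by random linear fountain parity bits, with the receiver signaling stop once the received equations reach rank k over GF(2). The parameters and the simple bitmask rank decoder are illustrative choices, not the paper's exact code or bounds.

```python
import random

random.seed(1)

def vlsf_fountain_trial(k, eps):
    """One VLSF trial over a BEC(eps): returns channel uses until the
    receiver's equations reach rank k over GF(2) (zero-error decoding)."""
    basis = {}   # leading-bit position -> reduced equation (bitmask over GF(2))
    n = 0

    def absorb(eq):
        while eq:
            lead = eq.bit_length() - 1
            if lead not in basis:
                basis[lead] = eq
                return
            eq ^= basis[lead]

    # Systematic phase: bit i carries the unit-vector equation e_i.
    for i in range(k):
        n += 1
        if random.random() >= eps:      # symbol survives the erasure channel
            absorb(1 << i)
        if len(basis) == k:
            return n
    # Fountain phase: uniformly random linear parity equations.
    while len(basis) < k:
        n += 1
        if random.random() >= eps:
            absorb(random.getrandbits(k))
    return n

k, eps = 16, 0.3
trials = [vlsf_fountain_trial(k, eps) for _ in range(5000)]
print(f"empirical rate {k * len(trials) / sum(trials):.3f} "
      f"vs BEC capacity {1 - eps:.3f}")
```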
  2. For a two-variance model of the Flash read channel that degrades as a function of the number of program/erase cycles, this paper demonstrates that selecting write voltages to maximize the minimum page mutual information (MI) can increase device lifetime. In multi-level cell (MLC) Flash memory, one of four voltage levels is written to each cell, according to the values of the most-significant-bit (MSB) page and the least-significant-bit (LSB) page. In our model, each voltage level is then distorted by signal-dependent additive Gaussian noise that approximates the Flash read channel. When performing an initial read of a page in MLC Flash, one (for LSB) or two (for MSB) bits of information are read for each cell of the page. If LDPC decoding fails after the initial read, an enhanced-precision read is performed. This paper shows that jointly designing write voltage levels and read thresholds to maximize the minimum MI between a page and its associated initial or enhanced-precision read bits can improve LDPC decoding performance. (A numerical sketch of the page-MI computation follows below.)
    Free, publicly-accessible full text available July 7, 2025
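As a rough illustration of the max-min objective in this record, the Python sketch below computes the initial-read mutual information of the MSB and LSB pages for a four-level, two-variance Gaussian model and compares two candidate write-level placements. The Gray labeling, noise standard deviations, read thresholds, and level positions are all assumed for illustration; they are not the paper's design values.

```python
import numpy as np
from scipy.stats import norm

def bin_probs(mu, sigma, thresholds):
    """P(read bin | written level): Gaussian mass between read thresholds."""
    edges = np.concatenate(([-np.inf], thresholds, [np.inf]))
    return np.diff(norm.cdf(edges, loc=mu, scale=sigma))

def page_mi(levels, sigmas, thresholds, bitmap):
    """I(page bit; quantized read) with four equiprobable levels;
    bitmap[l] gives the page bit written by level l."""
    pzb = np.zeros((2, len(thresholds) + 1))           # P(bin | bit)
    for mu, s, b in zip(levels, sigmas, bitmap):
        pzb[b] += bin_probs(mu, s, thresholds) / 2.0   # 2 levels per bit value
    pz = 0.5 * pzb.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        return 0.5 * np.nansum(pzb * np.log2(pzb / pz))

sigmas = [0.9, 0.5, 0.5, 0.5]        # two-variance model: erase state noisier
lsb_map = [1, 1, 0, 0]               # assumed Gray labeling: 1 LSB threshold
msb_map = [1, 0, 0, 1]               # and 2 MSB thresholds
for name, levels in [("uniform writes", [0.0, 2.0, 4.0, 6.0]),
                     ("shifted writes", [0.0, 2.4, 4.2, 6.0])]:
    mi_lsb = page_mi(levels, sigmas, [3.0], lsb_map)
    mi_msb = page_mi(levels, sigmas, [1.2, 5.0], msb_map)
    print(f"{name}: min page MI = {min(mi_lsb, mi_msb):.4f} bits")
```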
  3. With a sufficiently large list size, the serial list Viterbi algorithm (S-LVA) provides maximum-likelihood (ML) decoding of a convolutional code (CC) concatenated with an expurgating linear function (ELF), which serves the same role as a cyclic redundancy check (CRC) but does not require the code to be cyclic. However, S-LVA with a large list size requires considerable complexity. This paper exploits linearity to reduce decoding complexity for tail-biting CCs (TBCCs) concatenated with ELFs. (A small sketch of the ELF divisibility check appears below.)
    Free, publicly-accessible full text available July 7, 2025
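To make the ELF's role concrete, here is a small Python sketch of the divisibility check that an S-LVA candidate must pass: like a CRC, an ELF appends the remainder of a GF(2) polynomial division, and the list decoder keeps popping candidates until one passes. The degree-3 generator below is an arbitrary illustrative polynomial, not one of the paper's designed ELFs.

```python
def elf_remainder(bits, gen):
    """Remainder of bits(x) * x^(deg gen) divided by gen(x) over GF(2).
    bits: list of 0/1 message bits; gen: generator coefficients, MSB first."""
    deg = len(gen) - 1
    reg = list(bits) + [0] * deg
    for i in range(len(bits)):          # GF(2) long division
        if reg[i]:
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return reg[-deg:]

def elf_encode(bits, gen):
    return list(bits) + elf_remainder(bits, gen)

def elf_check(codeword, gen):
    """True iff codeword(x) is divisible by gen(x)."""
    k = len(codeword) - (len(gen) - 1)
    return elf_remainder(codeword[:k], gen) == codeword[k:]

gen = [1, 0, 1, 1]                      # x^3 + x + 1 (illustrative)
msg = [1, 1, 0, 1, 0, 0, 1, 0]
cw = elf_encode(msg, gen)
assert elf_check(cw, gen)
cw[3] ^= 1                              # a single bit error fails the check
assert not elf_check(cw, gen)
```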
  4. Free-space optical (FSO) links are sensitive to channel fading caused by atmospheric turbulence, varying weather conditions, and changes in the distance between the transmitter and receiver. To mitigate FSO fading, this paper applies linear and quadratic prediction to estimate fading channel conditions and dynamically select the appropriate low-density parity-check (LDPC) code rate. This adaptivity achieves reliable communication while efficiently utilizing the available channel mutual information. Protograph-based Raptor-like (PBRL) LDPC codes supporting a wide range of rates are designed, facilitating convenient rate switching. When channel state information (CSI) is known without delay, dynamically selecting the LDPC code rate maximizes throughput. This work explores how such prediction behaves as the feedback delay increases from zero to 4 ms for a channel with a coherence time of 10 ms. (A toy prediction-and-rate-selection loop is sketched below.)
    Free, publicly-accessible full text available June 9, 2025
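The sketch below gestures at the adaptation loop described above: fit a quadratic to recent delayed CSI samples, extrapolate across the feedback delay, and pick the highest-rate code whose SNR threshold the prediction clears. The Gauss-Markov fading stand-in, the rate family, and the per-rate thresholds are all assumptions for illustration, not the paper's PBRL designs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Gauss-Markov stand-in for slow FSO fading (coherence ~10 ms, 1 ms samples).
rho, n = 0.95, 500
fade = np.empty(n)
fade[0] = 0.0
for t in range(1, n):
    fade[t] = rho * fade[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
snr_db = 10 + 3 * fade                       # instantaneous SNR trace (dB)

rates = [0.2, 0.33, 0.5, 0.67, 0.8]          # assumed rate family
thresholds_db = [0, 3, 6, 9, 12]             # assumed min SNR per rate

def predict(history, delay, order):
    """Fit a degree-`order` polynomial to recent samples and extrapolate
    `delay` steps ahead (order=1: linear, order=2: quadratic prediction)."""
    window = 8
    x = np.arange(window)
    coeffs = np.polyfit(x, history[-window:], order)
    return np.polyval(coeffs, window - 1 + delay)

delay = 4                                    # feedback delay in samples (ms)
throughput = 0.0
for t in range(50, n):
    est = predict(snr_db[: t - delay + 1], delay, order=2)
    rate = max((r for r, th in zip(rates, thresholds_db) if est >= th),
               default=rates[0])
    if snr_db[t] >= thresholds_db[rates.index(rate)]:
        throughput += rate                   # frame assumed decodable, else lost
print(f"avg throughput with 4 ms delay: {throughput / (n - 50):.3f} bits/use")
```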
  5. Convolutional codes have been widely studied and used in many systems. As the number of memory elements increases, frame error rate (FER) improves, but computational complexity grows exponentially. Recently, decoders that achieve reduced average complexity through list decoding have been demonstrated when the convolutional encoder polynomials share a common factor that can be understood as a CRC or, more generally, an expurgating linear function (ELF). However, classical convolutional code designs avoid such common factors because they produce a catastrophic encoder. This paper provides a way to access the complexity reduction possible with list decoding even when the convolutional encoder polynomials do not share a common factor. Decomposing the original code into component encoders that fully exclude some polynomials can allow an ELF to be factored from each component. Dual list decoding of the component encoders can often find the ML codeword. Including a fallback to regular Viterbi decoding yields excellent FER performance while requiring less average complexity than always performing Viterbi decoding on the original trellis. A best-effort dual list decoder that avoids Viterbi decoding performs similarly to the ML decoder. Component encoders that share a polynomial allow even greater complexity reduction. (The shared-factor test at the heart of this discussion is sketched below.)
    Free, publicly-accessible full text available July 7, 2025
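The following Python fragment sketches the shared-factor test underlying this record: a rate-1/n convolutional encoder is catastrophic exactly when its generator polynomials have a nontrivial GCD over GF(2), and such a common factor is what can be treated as an ELF. Polynomials are integer bitmasks, and the example generators are constructed toys, not codes from the paper.

```python
def gf2_polymod(a, b):
    """a(x) mod b(x) over GF(2); polynomials as bitmasks (bit i = x^i)."""
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)
    return a

def gf2_polygcd(a, b):
    """Euclid's algorithm over GF(2)[x]."""
    while b:
        a, b = b, gf2_polymod(a, b)
    return a

g1 = 0b1001    # x^3 + 1             = (x + 1)(x^2 + x + 1)
g2 = 0b11101   # x^4 + x^3 + x^2 + 1 = (x + 1)(x^3 + x + 1)
g = gf2_polygcd(g1, g2)
print(f"shared factor: {g:#b}")      # 0b11, i.e., x + 1: a catastrophic pair
```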
  6. Recently, neural networks have improved MinSum message-passing decoders for low-density parity-check (LDPC) codes by multiplying or adding weights to the messages, where the weights are determined by a neural network. The neural network complexity required to determine distinct weights for each edge is high, often limiting the application to relatively short LDPC codes. Furthermore, storing separate weights for every edge and every iteration can be a burden for hardware implementations. To reduce neural network complexity and storage requirements, this paper proposes a family of weight-sharing schemes that use the same weight for edges that have the same check node degree and/or variable node degree. Our simulation results show that node-degree-based weight sharing can deliver the same performance as assigning distinct weights to each edge. This paper also combines these degree-specific neural weights with a reconstruction-computation-quantization (RCQ) decoder to produce a weighted RCQ (W-RCQ) decoder. The W-RCQ decoder with node-degree-based weight sharing has a reduced hardware requirement compared with the original RCQ decoder. As an additional contribution, this paper identifies and resolves a gradient explosion issue that can arise when training neural LDPC decoders. (A sketch of a degree-shared weighted MinSum check-node update follows below.)
    Free, publicly-accessible full text available April 1, 2025
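A hedged sketch of what node-degree-based weight sharing buys: the weighted MinSum check-node update below indexes one multiplicative weight by check-node degree instead of storing one weight per edge. The weight values are placeholders that a neural network would train, not learned values from the paper.

```python
import numpy as np

def check_update_minsum(llrs_in, weight):
    """Weighted MinSum: each outgoing message uses the minimum magnitude
    and sign product over the *other* incoming variable-to-check LLRs."""
    llrs_in = np.asarray(llrs_in, dtype=float)   # assumed nonzero LLRs
    signs = np.sign(llrs_in)
    mags = np.abs(llrs_in)
    total_sign = np.prod(signs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.empty_like(llrs_in)
    for i in range(len(llrs_in)):
        other_min = min2 if i == order[0] else min1
        out[i] = weight * (total_sign * signs[i]) * other_min
    return out

# One shared multiplicative weight per check-node degree (placeholders).
weight_by_check_degree = {3: 0.8, 6: 0.75, 7: 0.7}

msgs = [1.2, -0.4, 2.5, 0.9, -3.0, 0.6]          # a degree-6 check node
print(check_update_minsum(msgs, weight_by_check_degree[len(msgs)]))
```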
  7. Horstein, Burnashev, Shayevitz and Feder, Naghshvar et al., and others have studied sequential transmission of a k-bit message over the binary symmetric channel (BSC) with full, noiseless feedback using posterior matching. Yang et al. provide an improved lower bound on the achievable rate using martingale analysis that relies on the small-enough-difference (SED) partitioning introduced by Naghshvar et al. SED requires a relatively complex encoder and decoder. To reduce complexity, this paper replaces SED with relaxed constraints that admit the small-enough-absolute-difference (SEAD) partitioning rule. The main analytical results show that achievable-rate bounds higher than those found by Yang et al. [2] are possible even under the new constraints, which are less restrictive than SED. The new analysis does not use martingale theory for the confirmation phase and applies a surrogate channel technique to tighten the results. An initial systematic transmission further increases the achievable-rate bound. The simplified encoder associated with SEAD has complexity below order O(K^2) and allows simulations for message sizes of at least 1000 bits. For example, simulations achieve 99% of the channel's 0.50-bit capacity with an average block size of 200 bits for a target codeword error rate of 10^-3. (A toy posterior-matching loop is sketched below.)
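The toy Python loop below illustrates the posterior-matching skeleton this record builds on: partition the message posteriors into two sets of near-equal mass, transmit the set index over the BSC, and Bayes-update on both sides (feedback is noiseless, so encoder and decoder share the posterior). The greedy split only gestures at partitioning rules like SED/SEAD, and the paper's confirmation phase and analysis are not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.11                         # BSC crossover probability
k = 8                            # message bits
M = 2 ** k
true_msg = rng.integers(M)

post = np.full(M, 1.0 / M)       # shared posterior over all M messages
n = 0
while post.max() < 1 - 1e-3:
    # Greedy split by descending posterior into two near-equal-mass sets.
    order = np.argsort(-post)
    in_s0 = np.zeros(M, dtype=bool)
    mass0 = 0.0
    for idx in order:
        if mass0 < 0.5:
            in_s0[idx] = True
            mass0 += post[idx]
    x = 0 if in_s0[true_msg] else 1          # encoder sends the set index
    y = x ^ (rng.random() < p)               # BSC corrupts the bit
    n += 1
    like = np.where(in_s0 == (y == 0), 1 - p, p)   # P(y | message's set)
    post = like * post
    post /= post.sum()

cap = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)
print(f"decoded correctly: {int(np.argmax(post)) == true_msg}; "
      f"rate {k / n:.2f} vs capacity {cap:.2f}")
```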
  8. Maximum-likelihood (ML) decoding of tail-biting convolutional codes (TBCCs) with S = 2^v states traditionally requires a separate S-state trellis for each of the S possible starting/ending states, resulting in complexity proportional to S^2. Lower-complexity ML decoders for TBCCs have complexity proportional to S log S. This high complexity motivates the use of the wrap-around Viterbi algorithm, which sacrifices ML performance for complexity proportional to S. This paper presents an ML decoder for TBCCs that uses list decoding to achieve an average complexity proportional to S at operational signal-to-noise ratios where the expected list size is close to one. The new decoder uses parallel list Viterbi decoding with a progressively growing list size operating on a single S-state trellis. Decoding does not terminate until the most likely tail-biting codeword has been identified. This approach is extended to ML decoding of tail-biting convolutional codes concatenated with a cyclic redundancy check code, as explored recently by Yang et al. and King et al. Constraining the maximum list size further reduces complexity but sacrifices guaranteed ML performance, increasing errors and introducing erasures. (A toy single-trellis growing-list decoder is sketched below.)
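As a concrete, toy-sized illustration of the growing-list idea, the sketch below decodes a 4-state tail-biting convolutional code by popping complete trellis paths in nondecreasing metric order with a best-first search (a stand-in for parallel list Viterbi decoding on a single trellis) and stops at the first path whose start and end states match; that path is the ML tail-biting codeword. The (7,5) code, block length, and noise level are arbitrary choices.

```python
import heapq
import numpy as np

G = [0b111, 0b101]                   # (7,5) rate-1/2 generators, memory 2
NSTATES, N = 4, 8                    # 4 states, 8 trellis stages

def step(state, bit):
    """One encoder step: returns the new state and the two output bits."""
    reg = (bit << 2) | state         # register contents [newest, s1, s0]
    out = [bin(reg & g).count("1") & 1 for g in G]
    return reg >> 1, out

rng = np.random.default_rng(5)
msg = rng.integers(0, 2, N)
# Tail-biting: initialize the encoder state from the last input bits.
state0 = (int(msg[-1]) << 1) | int(msg[-2])
state, tx = state0, []
for b in msg:
    state, out = step(state, int(b))
    tx += out
y = np.array([1 - 2 * b for b in tx]) + rng.normal(0, 0.6, 2 * N)  # BPSK+AWGN

def list_viterbi_tb(y):
    """Pop complete paths in order of squared-error metric until one is
    tail-biting; also report how many candidates the 'list' needed."""
    heap = [(0.0, s0, 0, s0, ()) for s0 in range(NSTATES)]
    popped = 0
    while heap:
        metric, s0, t, s, bits = heapq.heappop(heap)
        if t == N:
            popped += 1
            if s == s0:              # tail-biting condition met: ML codeword
                return np.array(bits), popped
            continue
        for b in (0, 1):
            ns, out = step(s, b)
            r = y[2 * t: 2 * t + 2]
            bm = float(np.sum((r - (1 - 2 * np.array(out))) ** 2))
            heapq.heappush(heap, (metric + bm, s0, t + 1, ns, bits + (b,)))

decoded, list_size = list_viterbi_tb(y)
print("correct:", np.array_equal(decoded, msg), "| list size used:", list_size)
```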
  9. This paper uses a mutual-information maximization paradigm to optimize the voltage levels written to cells in a Flash memory. To enable low latency, each page of Flash memory stores only one coded bit in each Flash memory cell. For example, triple-level cell (TLC) Flash has three bit channels, one for each of three pages, that together determine which of eight voltage levels is written to each cell. Each Flash page is required to store the same number of data bits, but the various bits stored in the cell typically do not have to provide the same mutual information. A modified version of dynamic-assignment Blahut-Arimoto (DAB) moves the constellation points and adjusts the probability mass function for each bit channel to increase the mutual information of the worst bit channel, with the goal of each bit channel providing the same mutual information. The resulting constellation provides essentially the same mutual information to each page while negligibly reducing the mutual information of the overall constellation. The optimized constellations feature points that are neither equally spaced nor equally likely. However, modern shaping techniques such as probabilistic amplitude shaping can provide coded modulations that support such constellations. (A crude max-min sketch follows below.)
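The crude coordinate-ascent sketch below conveys the max-min objective that DAB pursues: compute each page's bit-channel mutual information for an eight-level AWGN model, then nudge the write levels to raise the worst page. Unlike DAB proper, it keeps the levels equiprobable (no probability-mass adjustment), and the Gray labeling and noise value are assumptions.

```python
import numpy as np

SIGMA = 0.35
GRAY = [0b111, 0b110, 0b100, 0b101, 0b001, 0b000, 0b010, 0b011]  # assumed map

def page_mi(levels, page):
    """I(page bit; Y) for equiprobable levels, computed on a fine grid."""
    y = np.linspace(min(levels) - 5 * SIGMA, max(levels) + 5 * SIGMA, 4000)
    pdf = np.array([np.exp(-(y - m) ** 2 / (2 * SIGMA ** 2)) for m in levels])
    pdf /= SIGMA * np.sqrt(2 * np.pi)
    bits = np.array([(g >> page) & 1 for g in GRAY])
    pyb = np.array([pdf[bits == b].mean(axis=0) for b in (0, 1)])  # p(y|bit)
    py = 0.5 * pyb.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = 0.5 * np.nansum(pyb * np.log2(pyb / py), axis=0)
    return float(np.sum(integrand) * (y[1] - y[0]))   # trapezoid-style sum

levels = np.linspace(0.0, 3.5, 8)          # start from equally spaced writes
for _ in range(30):                         # greedy max-min coordinate ascent
    mis = [page_mi(levels, p) for p in range(3)]
    worst = int(np.argmin(mis))
    best_gain, best_levels = 0.0, None
    for i in range(1, 8):                   # level 0 anchors the constellation
        for d in (-0.02, 0.02):
            trial = levels.copy()
            trial[i] += d
            gain = page_mi(trial, worst) - mis[worst]
            if gain > best_gain:
                best_gain, best_levels = gain, trial
    if best_levels is None:                 # no nudge helps the worst page
        break
    levels = best_levels

print("levels:", np.round(levels, 3))
print("page MIs:", [round(page_mi(levels, p), 4) for p in range(3)])
```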
  10. We extend earlier work on the design of convolutional-code-specific CRC codes to Q-ary alphabets, with an eye toward Q-ary orthogonal signaling. Starting with distance-spectrum-optimal, zero-terminated, Q-ary convolutional codes, we design Q-ary CRC codes so that the CRC/convolutional concatenation is distance-spectrum optimal. The Q-ary code symbols are mapped to a Q-ary orthogonal signal set and sent over an AWGN channel with noncoherent reception. We focus on Q = 4, rate-1/2 convolutional codes in our designs. The random coding union (RCU) bound and normal approximation are used in earlier works as benchmarks for the performance of distance-spectrum-optimal convolutional codes. We derive a saddlepoint approximation of the RCU bound for the coded noncoherent signaling channel, as well as a normal approximation for this channel, and compare the performance of our codes to these limits. Our best design is within 0.6 dB of the RCU bound at a frame error rate of 10^-4. (A small noncoherent-detection simulation is sketched below.)
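For context on the modulation this record assumes, the short simulation below estimates the uncoded symbol error rate of Q = 4 orthogonal signaling with noncoherent (envelope) detection: since the carrier phase is unknown, the receiver picks the complex dimension with the largest magnitude. The Es/N0 point and trial count are arbitrary; the coded system and the RCU/normal-approximation benchmarks are beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(11)
Q, EsN0_dB, ntrials = 4, 8.0, 200_000
es_n0 = 10 ** (EsN0_dB / 10)

sym = rng.integers(Q, size=ntrials)
phase = np.exp(1j * rng.uniform(0, 2 * np.pi, ntrials))  # unknown carrier phase
# Orthogonal signaling: unit signal energy lands in one of Q complex
# dimensions; complex AWGN has variance N0/2 per real component.
y = (rng.normal(size=(ntrials, Q)) + 1j * rng.normal(size=(ntrials, Q))) \
    * np.sqrt(1 / (2 * es_n0))
y[np.arange(ntrials), sym] += phase
det = np.abs(y).argmax(axis=1)           # noncoherent: largest envelope wins
print(f"symbol error rate at Es/N0 = {EsN0_dB} dB: {(det != sym).mean():.4f}")
```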