
Award ID contains: 1955660


  1. This paper presents new achievability bounds on the maximal achievable rate of variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC) at a fixed message size M = 2^k. We provide bounds for two cases: the first considers VLSF codes with possibly infinite decoding times and zero error probability; the second limits the maximum (finite) number of decoding times and specifies a maximum tolerable probability of error. Both achievability bounds are proved by constructing a new VLSF code that employs systematic transmission of the first k message bits followed by random linear fountain parity bits decoded with a rank decoder. For VLSF codes with infinite decoding times, our new bound outperforms the state-of-the-art result for the BEC by Devassy et al. (2016). We show that the backoff from capacity reduces to zero as the erasure probability decreases, giving a negative answer to the open question posed by Devassy et al. of whether the 23.4% backoff from capacity at k = 3 is fundamental to all BECs. For VLSF codes with finite decoding times, numerical evaluations show that systematic transmission followed by random linear fountain coding achieves higher rates than random linear coding.
    Free, publicly-accessible full text available July 7, 2025
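The scheme in the abstract above lends itself to a short simulation. The sketch below is a minimal illustration, not the paper's construction: it transmits k systematic bits over a BEC, then random linear fountain parities, and stops when the unerased equations reach rank k over GF(2). All parameter values and function names (`gf2_rank`, `vlsf_bec_blocklength`, k, eps) are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def vlsf_bec_blocklength(k, eps):
    """Simulate one VLSF transmission: k systematic bits, then random
    fountain parities, stopping once the received (unerased) equations
    reach rank k, at which point decoding is error-free."""
    rows = []  # one GF(2) equation per unerased received symbol
    n = 0
    # Systematic phase: bit i corresponds to the unit row e_i.
    for i in range(k):
        n += 1
        if rng.random() >= eps:          # symbol survived the erasure channel
            rows.append(np.eye(k, dtype=np.uint8)[i])
    # Fountain phase: random linear combinations of the k message bits.
    while not rows or gf2_rank(np.array(rows)) < k:
        n += 1
        if rng.random() >= eps:
            rows.append(rng.integers(0, 2, k, dtype=np.uint8))
    return n  # decoding time; stop feedback halts the encoder here

lengths = [vlsf_bec_blocklength(k=10, eps=0.2) for _ in range(200)]
print(sum(lengths) / len(lengths))  # average blocklength; rate = k / E[n]
```

The average blocklength hovers slightly above k/(1 - eps), the erasure-channel lower limit, with the gap coming from fountain rows that fail to increase the rank.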
  2. For a two-variance model of the Flash read channel that degrades as a function of the number of program/erase cycles, this paper demonstrates that selecting write voltages to maximize the minimum page mutual information (MI) can increase device lifetime. In multi-level cell (MLC) Flash memory, one of four voltage levels is written to each cell, according to the values of the most-significant bit (MSB) page and the least-significant bit (LSB) page. In our model, each voltage level is then distorted by signal-dependent additive Gaussian noise that approximates the Flash read channel. When performing an initial read of a page in MLC Flash, one (for LSB) or two (for MSB) bits of information are read for each cell of the page. If LDPC decoding fails after the initial read, then an enhanced-precision read is performed. This paper shows that jointly designing write voltage levels and read thresholds to maximize the minimum MI between a page and its associated initial or enhanced-precision read bits can improve LDPC decoding performance.
    Free, publicly-accessible full text available July 7, 2025
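As a concrete illustration of the max-min page-MI objective described above, here is a rough numerical sketch. The four levels, the signal-dependent sigmas, and the Gray mapping are all illustrative assumptions of mine, not the paper's two-variance model or its actual voltages; the computation just evaluates I(page bit; quantized read) for hard read thresholds.

```python
import numpy as np
from math import erf, sqrt, log2

def q(x):
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * (1 - erf(x / sqrt(2)))

# Hypothetical 4-level MLC model (values illustrative, not from the paper):
# signal-dependent noise sigma per level, Gray-mapped (MSB, LSB) per level.
levels = [0.0, 1.0, 2.0, 3.0]
sigmas = [0.35, 0.25, 0.25, 0.30]
gray = [(1, 1), (0, 1), (0, 0), (1, 0)]  # LSB needs 1 threshold, MSB needs 2

def page_mi(bit_idx, thresholds):
    """I(bit; quantized read) in bits, for equiprobable levels and hard
    thresholds that partition the read voltage axis into bins."""
    edges = [-np.inf] + sorted(thresholds) + [np.inf]
    nbins = len(edges) - 1
    joint = np.zeros((2, nbins))          # P(bit = b, bin = j)
    for mu, s, bits in zip(levels, sigmas, gray):
        b = bits[bit_idx]
        for j in range(nbins):
            p_bin = q((edges[j] - mu) / s) - q((edges[j + 1] - mu) / s)
            joint[b, j] += 0.25 * p_bin
    pb = joint.sum(axis=1)
    py = joint.sum(axis=0)
    mi = 0.0
    for b in range(2):
        for j in range(nbins):
            if joint[b, j] > 0:
                mi += joint[b, j] * log2(joint[b, j] / (pb[b] * py[j]))
    return mi

# LSB initial read uses one mid threshold; MSB read uses two thresholds.
mi_lsb = page_mi(1, [1.5])
mi_msb = page_mi(0, [0.5, 2.5])
print(min(mi_lsb, mi_msb))  # max-min design maximizes this over levels/thresholds
```

A max-min search would now perturb `levels` and `thresholds` to push the smaller of the two page MIs upward until both pages are equally reliable.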
  3. Free-space optical (FSO) links are sensitive to channel fading caused by atmospheric turbulence, varying weather conditions, and changes in the distance between the transmitter and receiver. To mitigate FSO fading, this paper applies linear and quadratic prediction to estimate fading channel conditions and dynamically select the appropriate low-density parity-check (LDPC) code rate. This adaptivity achieves reliable communication while efficiently utilizing the available channel mutual information. Protograph-based Raptor-like (PBRL) LDPC codes supporting a wide range of rates are designed, facilitating convenient rate switching. When channel state information (CSI) is known without delay, dynamically selecting the LDPC code rate maximizes throughput. This work explores how such prediction behaves as the feedback delay is increased from no delay to a delay of 4 ms for a channel with a coherence time of 10 ms.
    Free, publicly-accessible full text available June 9, 2025
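The predict-then-select loop described above can be sketched in a few lines. Everything here is an assumption for illustration: the rate ladder, the safety margin, the use of `numpy.polyfit` for the linear/quadratic predictor, and the sample values are mine, not the paper's PBRL design or its channel data.

```python
import numpy as np

# Hypothetical rate ladder for a rate-compatible PBRL LDPC family.
RATES = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]

def predict_fade(history, t_next, order=2):
    """Fit a degree-`order` polynomial (1 = linear, 2 = quadratic) to recent
    channel-quality samples and extrapolate to time t_next."""
    t = np.arange(len(history))
    coeffs = np.polyfit(t, history, order)
    return float(np.polyval(coeffs, t_next))

def select_rate(predicted_capacity, margin=0.05):
    """Pick the highest code rate that stays below the predicted channel
    mutual information minus a safety margin; fall back to the lowest rate."""
    feasible = [r for r in RATES if r <= predicted_capacity - margin]
    return max(feasible) if feasible else RATES[0]

# Example: channel MI samples during a fade, one per millisecond.
samples = [0.80, 0.74, 0.66, 0.57, 0.47]
cap_hat = predict_fade(samples, t_next=len(samples) + 3)  # ~4 ms feedback delay
print(select_rate(cap_hat))
```

With zero delay, `t_next = len(samples)` and the predictor tracks the fade closely; as the delay grows toward the coherence time, the extrapolation error forces more conservative rate choices.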
  4. Horstein, Burnashev, Shayevitz and Feder, Naghshvar et al., and others have studied sequential transmission of a k-bit message over the binary symmetric channel (BSC) with full, noiseless feedback using posterior matching. Yang et al. provide an improved lower bound on the achievable rate using martingale analysis that relies on the small-enough difference (SED) partitioning introduced by Naghshvar et al. SED requires a relatively complex encoder and decoder. To reduce complexity, this paper replaces SED with relaxed constraints that admit the small enough absolute difference (SEAD) partitioning rule. The main analytical results show that achievable-rate bounds higher than those found by Yang et al. [2] are possible even under the new constraints, which are less restrictive than SED. The new analysis does not use martingale theory for the confirmation phase and applies a surrogate channel technique to tighten the results. An initial systematic transmission further increases the achievable-rate bound. The simplified encoder associated with SEAD has complexity below O(K^2) and allows simulations for message sizes of at least 1000 bits. For example, simulations achieve 99% of the channel's 0.50-bit capacity with an average block size of 200 bits for a target codeword error rate of 10^-3.
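The partitioning idea behind posterior matching can be shown with a toy loop. The greedy split below only illustrates the flavor of SED/SEAD-style set selection (split the posterior into two sets with mass near 1/2); it is not the paper's exact SEAD rule, and the message count, crossover probability, and stopping threshold are invented.

```python
import numpy as np

p = 0.11                       # BSC crossover probability (illustrative)
rng = np.random.default_rng(1)

def partition(posterior):
    """Greedily split messages into S0/S1 so that the posterior mass of S0
    is close to 1/2 (an illustration of SED/SEAD-style partitioning)."""
    order = np.argsort(posterior)[::-1]
    s0, mass = [], 0.0
    for i in order:
        if mass + posterior[i] <= 0.5 or not s0:
            s0.append(i)
            mass += posterior[i]
    return set(s0)

def posterior_update(posterior, s0, y):
    """Bayes update after receiving y, when the encoder sends 0 iff the
    true message lies in s0 (known to the decoder via feedback)."""
    in_s0 = np.array([i in s0 for i in range(len(posterior))])
    x = np.where(in_s0, 0, 1)            # symbol each message would send
    like = np.where(x == y, 1 - p, p)    # BSC likelihood of the observation
    post = posterior * like
    return post / post.sum()

# Sequential transmission of one of M messages with noiseless feedback.
M, true_msg = 16, 5
post = np.full(M, 1 / M)
n = 0
while post.max() < 1 - 1e-3:             # stop at high decoder confidence
    s0 = partition(post)
    x = 0 if true_msg in s0 else 1
    y = x ^ int(rng.random() < p)        # pass through the BSC
    post = posterior_update(post, s0, y)
    n += 1
print(int(np.argmax(post)), n)
```

The O(K^2) claim in the abstract concerns the real encoder's bookkeeping; this toy version recomputes the full posterior each step and is only meant to show the partition/update cycle.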
  5. This paper introduces a mutual information (MI) maximization paradigm that adapts the locations and probabilities of write levels to iteratively increase the mutual information of the weakest bit channel and hence improve the reliability of its corresponding page. In this way, we seek a constellation of write levels that delivers the same amount of mutual information to the bit channel for each page, so that all pages are equally reliable. For simplicity, we consider the example of TLC Flash with an additive white Gaussian noise (AWGN) channel model, but the principle may be applied to denser cells and more realistic channel models. 
  6. This paper presents a method for computing a finite-blocklength converse for the rate of fixed-length codes with feedback used on discrete memoryless channels (DMCs). The new converse is expressed in terms of a stochastic control problem whose solution can be efficiently computed using dynamic programming and Fourier methods. For channels such as the binary symmetric channel (BSC) and binary erasure channel (BEC), the accuracy of the proposed converse is similar to that of existing special-purpose converse bounds, but the new converse technique can be applied to arbitrary DMCs. We provide example applications of the new converse technique to the binary asymmetric channel (BAC) and the quantized amplitude-constrained AWGN channel. 
  7. In this paper, we are interested in the performance of a variable-length stop-feedback (VLSF) code with m optimal decoding times for the binary-input additive white Gaussian noise channel. We first develop tight approximations to the tail probability of the length-n cumulative information density. Building on the work of Yavas et al., for a given information density threshold, we formulate the integer program of minimizing the upper bound on average blocklength over all decoding times subject to the average error probability, minimum-gap, and integer constraints. Minimizing the locally optimal upper bounds over all thresholds then yields the globally minimum upper bound; we call this method two-step minimization. Relaxing to allow positive real-valued decoding times activates the gap constraint. We develop a gap-constrained sequential differential optimization (SDO) procedure to find the optimal, gap-constrained, real-valued decoding times. In the error regime of practical interest, Polyanskiy's scheme of stopping at zero does not help. In this regime, the achievability bounds estimated by two-step minimization and gap-constrained SDO show that Polyanskiy's achievability bound for VLSF codes can be approached with a small number of decoding times.
  8. This paper compares the accuracy and complexity of Raghavan and Baum’s Reliability Output Viterbi Algorithm (ROVA), Polyanskiy’s accumulated information density (AID), and Fricke and Hoeher’s lower complexity approximation of ROVA. It turns out that AID is far less accurate than ROVA in practice. This paper proposes codeword information density (CID), which modifies AID to improve its accuracy and leads to a lower-complexity implementation of ROVA. The paper includes an analytical expression for the random variable describing the correct decoding probability computed by ROVA and uses this expression to characterize how the probabilities of correct decoding, undetected error, and negative acknowledgement behave as a function of the selected threshold for reliable decoding. This paper examines both the complexity and the simulation time of ROVA, CID, AID, and the Fricke and Hoeher approximation to ROVA. This paper also derives an expression for the union bound on the frame error rate for zero-terminated trellis codes with punctured symbols and uses it to optimize the order of symbol transmission in an incremental retransmission scheme. This paper concludes by comparing the performance of an incremental retransmission scheme using ROVA as a stopping condition to one that uses a CRC as a stopping condition. 
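Of the reliability metrics compared above, accumulated information density is the simplest to state. Here is a minimal sketch for a BSC with equiprobable inputs, where P(y) = 1/2 and the density accumulates log2(P(y_i | x_i) / P(y_i)) per symbol; the blocklength, crossover probability, and the stop-when-above-threshold idea are illustrative assumptions, and a real receiver would evaluate the density on the decoded codeword rather than the transmitted one.

```python
import numpy as np

p = 0.05                       # BSC crossover probability (illustrative)
rng = np.random.default_rng(2)

def accumulated_info_density(x, y, p):
    """Accumulated information density for a BSC with equiprobable inputs:
    sum_i log2( P(y_i | x_i) / P(y_i) ), with P(y_i) = 1/2.
    Agreeing symbols add log2(2(1-p)); disagreeing symbols add log2(2p)."""
    agree = (x == y)
    return float(np.sum(np.where(agree, np.log2(2 * (1 - p)), np.log2(2 * p))))

# Sketch: a receiver would ACK once the density of the decoded path
# crosses a reliability threshold, and NACK (request more symbols) otherwise.
n = 64
x = rng.integers(0, 2, n)
y = x ^ (rng.random(n) < p)    # pass through the BSC
print(accumulated_info_density(x, y, p))
```

Each channel error costs log2(2p), a large negative increment at small p, which is why thresholding this sum gives a crude confidence measure; the paper's point is that ROVA (and the proposed CID) compute the actual correct-decoding probability, which this quantity only approximates.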
  9. Traditional communication systems transmit a codeword only after all message bits are available at the transmitter. This paper joins Guo & Kostina and Lalitha et al. in developing approaches for causal encoding, where the transmitter may begin transmitting codeword symbols as soon as the first message bit arrives. Building on the posterior matching encoders of Horstein, Shayevitz & Feder, and Naghshvar et al., this paper extends our computationally efficient systematic encoder to progressively encode using only the message bits that are causally available. Systematic codes work well with posterior matching on a channel with feedback, and they provide an immediate benefit when causal encoding is employed instead of traditional encoding. Our algorithm captures additional gains in the interesting region where the transmission rate μ is higher than the source rate λ at which message bits become available. In this region, we improve performance further through the transmission of additional, non-systematic symbols before a traditional encoder would have even begun transmission.