This paper presents new achievability bounds on the maximal achievable rate of variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC) at a fixed message size M = 2^k. We provide bounds for two cases: the first considers VLSF codes with possibly infinite decoding times and zero error probability; the second limits the maximum (finite) number of decoding times and specifies a maximum tolerable probability of error. Both new achievability bounds are proved by constructing a new VLSF code that employs systematic transmission of the first k message bits followed by random linear fountain parity bits decoded with a rank decoder. For VLSF codes with infinite decoding times, our new bound outperforms the state-of-the-art result for the BEC by Devassy et al. (2016). We show that the backoff from capacity vanishes as the erasure probability decreases, giving a negative answer to the open question posed by Devassy et al. on whether the 23.4% backoff from capacity at k = 3 is fundamental to all BECs. For VLSF codes with finite decoding times, numerical evaluations show that systematic transmission followed by random linear fountain coding achieves higher rates than random linear coding.
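As a rough illustration of the code construction described above, the following sketch simulates one stop-feedback transmission over a BEC: the k message bits are sent systematically, then random linear fountain parities are sent until the unerased coefficient vectors reach rank k, at which point Gaussian elimination (the rank decoder) recovers the message. The erasure probability, trial count, and all function names are illustrative assumptions; this is a minimal sketch, not the paper's evaluation code.

```python
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a list of binary coefficient vectors."""
    if not rows:
        return 0
    rows = [r.copy() for r in rows]
    k, rank = len(rows[0]), 0
    for col in range(k):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = (rows[i] + rows[rank]) % 2
        rank += 1
    return rank

def vlsf_stopping_time(k=3, eps=0.1, seed=0):
    """Channel uses until the rank decoder recovers the k message bits:
    systematic bits first, then random linear fountain parities."""
    rng = np.random.default_rng(seed)
    rows, n = [], 0
    for i in range(k):                      # systematic phase: unit coefficient vectors
        n += 1
        if rng.random() > eps:              # symbol not erased by the BEC
            rows.append(np.eye(k, dtype=int)[i])
    while gf2_rank(rows) < k:               # fountain phase: random coefficient vectors
        n += 1
        if rng.random() > eps:
            rows.append(rng.integers(0, 2, k))
    return n

# Average stopping time over many trials estimates the expected blocklength.
times = [vlsf_stopping_time(k=3, eps=0.1, seed=s) for s in range(2000)]
print(sum(times) / len(times))
```

Dividing k by the average stopping time gives an empirical rate for this sketch, which is the quantity the achievability bounds in the paper control.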
Source Coding with Unreliable Side Information in the Finite Blocklength Regime
This paper studies a special case of the problem of source coding with side information. A single transmitter describes a source to a receiver that has access to a side information observation that is unavailable at the transmitter. While the source and true side information sequences are dependent, stationary, memoryless random processes, the side information observation at the decoder is unreliable, which here means that it may or may not equal the intended side information and therefore may or may not be useful for decoding the source description. The probability of side information observation failure, caused, for example, by a faulty sensor or a source decoding error, is non-vanishing but is bounded by a fixed constant independent of the blocklength. This paper proposes a coding system that uses unreliable side information to obtain an efficient source representation subject to a fixed bound on the error probability. Results include achievability and converse bounds under two different models of the joint distribution of the source, the intended side information, and the side information observation.
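For intuition only, here is a toy sampler of the kind of side-information model described above; the Bernoulli source, the correlation parameter rho, and the failure mechanism are assumptions chosen for this sketch rather than the paper's two models. It illustrates the key point that the decoder's observation equals the intended side information except on a failure event of fixed, blocklength-independent probability, on which the observation carries no information about the source.

```python
import numpy as np

def sample_model(n, rho=0.9, p_fail=0.1, seed=0):
    """Toy joint distribution: X_i ~ Bernoulli(1/2); the intended side
    information Y_i equals X_i with probability rho; the decoder observes
    Z_i, which equals Y_i except on a failure event of fixed probability
    p_fail, in which case Z_i is an independent coin flip and therefore
    useless for decoding X_i."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n)
    y = np.where(rng.random(n) < rho, x, 1 - x)      # intended side information
    fail = rng.random(n) < p_fail                    # non-vanishing failure probability
    z = np.where(fail, rng.integers(0, 2, n), y)     # unreliable observation at the decoder
    return x, y, z

x, y, z = sample_model(10_000)
print((x == z).mean(), (x == y).mean())   # Z agrees with X less often than Y does
```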
- Award ID(s): 1817241
- PAR ID: 10354715
- Date Published:
- Journal Name: 2022 IEEE International Symposium on Information Theory (ISIT)
- Page Range / eLocation ID: 222 to 227
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In the quantum compression scheme proposed by Schumacher, Alice compresses a message that Bob decompresses. In that approach, there is some probability of failure and, even when successful, some distortion of the state. For sufficiently large blocklengths, both of these imperfections can be made arbitrarily small while achieving a compression rate that asymptotically approaches the source coding bound. However, direct implementation of Schumacher compression suffers from poor circuit complexity. In this paper, we consider a slightly different approach based on classical syndrome source coding. The idea is to use a linear error-correcting code and treat the state to be compressed as a superposition of error patterns. Then, Alice can use quantum gates to apply the parity-check matrix to her message state. This will convert it into a superposition of syndromes. If the original superposition was supported on correctable errors (e.g., coset leaders), then this process can be reversed by decoding. An implementation of this based on polar codes is described and simulated. As in classical source coding based on polar codes, Alice maps the information into the “frozen” qubits that constitute the syndrome. To decompress, Bob utilizes a quantum version of successive cancellation coding. (A sketch of the classical syndrome-coding core appears after this list.)
- This article considers the massive MIMO unsourced random access problem on a quasi-static Rayleigh fading channel. Given a fixed message length and a prescribed number of channel uses, the objective is to construct a coding scheme that minimizes the energy-per-bit subject to a fixed probability of error. The proposed scheme differs from other state-of-the-art schemes in that it blends activity detection, single-user coding, pilot-aided and temporary-decision-aided iterative channel estimation and decoding, minimum mean-squared error (MMSE) estimation, and successive interference cancellation (SIC). We show that an appropriate combination of these ideas can substantially outperform state-of-the-art coding schemes when the number of active users is more than 100, making this the best performing scheme known for this regime. (A heavily simplified sketch of the pilot-aided SIC loop appears after this list.)
- As sensing and instrumentation play an increasingly important role in systems controlled over wired and wireless networks, the need to better understand delay-sensitive communication becomes a prime issue. Along these lines, this article studies the operation of data links that employ incremental redundancy as a practical means to protect information from the effects of unreliable channels. Specifically, this work extends a powerful methodology termed sequential differential optimization to choose near-optimal block sizes for hybrid ARQ over erasure channels. Furthermore, results show that the impact of the coding strategy adopted and the propensity of the channel to erase symbols naturally decouple when analyzing throughput. Overall, block size selection is motivated by normal approximations on the probability of decoding success at every stage of the incremental transmission process. This novel perspective, which rigorously bridges hybrid ARQ and coding, offers a pragmatic means to select code rates and blocklengths for incremental redundancy. (A sketch of the normal-approximation calculation appears after this list.)
- In practical quantum error correction implementations, the measurement of syndrome information is an unreliable step, typically modeled as a binary measurement outcome flipped with some probability. However, the measured syndrome is in fact a discretized value of the continuous voltage or current values obtained in the physical implementation of the syndrome extraction. In this paper, we use this “soft” or analog information to benefit iterative decoders for decoding quantum low-density parity-check (QLDPC) codes. Syndrome-based iterative belief propagation decoders are modified to utilize the soft syndrome to correct both data and syndrome errors simultaneously. We demonstrate the advantages of the proposed scheme not only in terms of comparison of thresholds and logical error rates for quasi-cyclic lifted-product QLDPC code families but also with faster convergence of iterative decoders. Additionally, we derive hardware (FPGA) architectures of these soft syndrome decoders and obtain similar performance in terms of error correction to the ideal models even with reduced precision in the soft information. The total latency of the hardware architectures is about 600 ns (for the QLDPC codes considered) in a 20 nm CMOS process FPGA device, and the area overhead is almost constant, less than 50% compared to min-sum decoders with noisy syndromes. (A minimal sketch of the soft-syndrome check-node update appears after this list.)
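For the syndrome-based compression item above, the classical core can be shown directly: compression maps an error pattern to its syndrome under a parity-check matrix, and decompression inverts that map on coset leaders. The tiny parity-check matrix and the brute-force coset-leader table below are illustrative assumptions standing in for the polar code and successive-cancellation decoding used in that paper.

```python
import numpy as np

# Toy (n - k) x n parity-check matrix; both H and the brute-force table
# below are illustrative assumptions for a 5-bit example.
H = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1]])
n = H.shape[1]

# Precompute one coset leader (lowest-weight error pattern) per syndrome.
leaders = {}
for value in range(2 ** n):
    e = np.array(list(np.binary_repr(value, n)), dtype=int)
    s = tuple(H @ e % 2)
    if s not in leaders or e.sum() < leaders[s].sum():
        leaders[s] = e

def compress(e):
    """Alice: map an error pattern to its (n - k)-bit syndrome."""
    return tuple(H @ e % 2)

def decompress(s):
    """Bob: map the syndrome back to its coset leader. Recovery is exact
    only when the compressed pattern was itself a coset leader."""
    return leaders[s]

e = np.array([0, 0, 1, 0, 0])                  # a correctable (coset-leader) pattern
assert np.array_equal(decompress(compress(e)), e)
print(compress(e))                             # 2-bit description of a 5-bit pattern
```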
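For the unsourced random access item above, the following is a heavily simplified sketch of the pilot-aided estimate/detect/subtract loop on a quasi-static Rayleigh fading channel. Orthogonal pilots, BPSK data, least-squares channel estimation, and maximum-ratio combining are simplifying assumptions made here; the actual scheme interleaves activity detection, single-user decoding, MMSE estimation, and SIC.

```python
import numpy as np

def toy_pilot_sic(K=4, M=32, T=64, snr_db=10, seed=0):
    """Estimate channels from orthogonal pilots, then detect users strongest
    first with maximum-ratio combining and subtract their contribution (SIC)."""
    rng = np.random.default_rng(seed)
    noise_var = 10 ** (-snr_db / 10)
    cgauss = lambda *shape: (rng.standard_normal(shape) +
                             1j * rng.standard_normal(shape)) / np.sqrt(2)
    H = cgauss(M, K)                                  # quasi-static Rayleigh fading
    bits = rng.integers(0, 2, (K, T))
    X = 1.0 - 2.0 * bits                              # BPSK data symbols
    P = np.sqrt(K) * np.eye(K)                        # orthogonal pilot block
    Yp = H @ P + np.sqrt(noise_var) * cgauss(M, K)    # pilot-phase observation
    Yd = H @ X + np.sqrt(noise_var) * cgauss(M, T)    # data-phase observation
    H_hat = Yp @ np.linalg.pinv(P)                    # least-squares channel estimate
    resid, detected = Yd.copy(), {}
    for k in np.argsort(-np.linalg.norm(H_hat, axis=0)):   # strongest user first
        mrc = (H_hat[:, k].conj() @ resid).real       # maximum-ratio combining
        x_hat = np.where(mrc >= 0, 1.0, -1.0)
        detected[int(k)] = (x_hat < 0).astype(int)
        resid -= np.outer(H_hat[:, k], x_hat)         # cancel the detected user
    return np.mean([detected[k] != bits[k] for k in range(K)])  # bit error rate

print(toy_pilot_sic())
```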
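For the incremental-redundancy item above, the normal approximation on the probability of decoding success can be sketched as follows: over a channel that erases each symbol independently with probability eps, the number of unerased symbols in n channel uses is Binomial(n, 1 - eps), and an idealized rateless code decodes once at least k symbols arrive. The greedy search and the target schedule below are illustrative assumptions; choosing near-optimal increments is what sequential differential optimization does in that article.

```python
from math import sqrt
from statistics import NormalDist

def p_decode_success(n, k, eps):
    """Normal approximation to P(at least k of n symbols survive a channel
    that erases each symbol independently with probability eps)."""
    mu, sigma = n * (1 - eps), sqrt(n * eps * (1 - eps))
    return 1.0 - NormalDist(mu, sigma).cdf(k - 0.5)   # continuity correction

def next_block_length(n_prev, k, eps, target):
    """Greedy sketch: extend the transmission until the approximate
    probability of decoding success reaches the next target level."""
    n = n_prev + 1
    while p_decode_success(n, k, eps) < target:
        n += 1
    return n

# Example: k = 128 information symbols, erasure probability 0.1, and an
# (assumed) schedule of success targets at the end of each HARQ increment.
n, increments = 0, []
for target in (0.5, 0.9, 0.99):
    n_next = next_block_length(n, 128, 0.1, target)
    increments.append(n_next - n)
    n = n_next
print(increments)   # block sizes of the successive increments
```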
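For the soft-syndrome item above, the key decoder change can be sketched in a few lines: in a min-sum check-node update, the soft syndrome value enters as one additional incoming message, so an unreliable syndrome no longer has to be trusted as a hard bit. The LLR values and function shape are illustrative; this is not that paper's full lifted-product decoder or FPGA architecture.

```python
import numpy as np

def check_update_soft_syndrome(var_llrs, synd_llr):
    """Min-sum check-node update in which the soft (analog) syndrome value
    is treated as one more incoming message, so the decoder can question
    the syndrome instead of trusting a possibly flipped hard bit."""
    incoming = np.append(np.asarray(var_llrs, dtype=float), synd_llr)
    signs = np.where(incoming >= 0, 1.0, -1.0)
    mags = np.abs(incoming)
    out = np.empty(len(var_llrs))
    for i in range(len(var_llrs)):
        keep = np.ones(len(incoming), dtype=bool)
        keep[i] = False                       # exclude the target edge
        out[i] = np.prod(signs[keep]) * np.min(mags[keep])
    return out

# Three variable-to-check messages and a weakly reliable syndrome value:
# the low-magnitude syndrome limits how confident the check messages get.
print(check_update_soft_syndrome([2.5, -1.0, 3.0], synd_llr=0.4))
```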