For space-based laser communications, when the mean photon number per received optical pulse is much smaller than one, there is a large gap between the communications capacity achievable with a receiver that performs individual pulse-by-pulse detection and the quantum-optimal “joint-detection receiver” that acts collectively on long codeword blocks of modulated pulses; an effect often termed “superadditive capacity”. In this paper, we consider the simplest scenario where a large superadditive capacity is known: a pure-loss channel with a coherent-state binary phase-shift keyed (BPSK) modulation. The two BPSK states can be mapped conceptually to two non-orthogonal states of a qubit, described by an inner product that is a function of the mean photon number per pulse. Using this map, we derive an explicit construction of the quantum circuit of a joint-detection receiver based on a recent idea of “belief propagation with quantum messages” (BPQM). We quantify its performance improvement over the Dolinar receiver, which performs optimal pulse-by-pulse detection and represents the best “classical” approach. We analyze the scheme rigorously and show that it achieves the quantum limit of minimum average error probability in discriminating 8 (BPSK) codewords of a length-5 binary linear code with a tree factor graph. Our result suggests that a BPQM …
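The qubit mapping above rests on the overlap between the two BPSK coherent states, ⟨−α|α⟩ = e^(−2|α|²), where |α|² is the mean photon number per pulse. As a rough illustration of the per-pulse baseline (not the paper's joint-detection circuit), the sketch below computes this overlap and the Helstrom limit on discriminating the two states, which the Dolinar receiver attains pulse by pulse; the function names are ours.

```python
import math

def bpsk_overlap(n_bar):
    """Inner product <-alpha|alpha> = exp(-2|alpha|^2) for mean photon number n_bar = |alpha|^2."""
    return math.exp(-2.0 * n_bar)

def helstrom_error(s):
    """Helstrom bound: minimum error probability for discriminating two
    equiprobable pure states with real overlap s."""
    return 0.5 * (1.0 - math.sqrt(1.0 - s * s))

n_bar = 0.1                      # deep low-photon regime
s = bpsk_overlap(n_bar)
p_dolinar = helstrom_error(s)    # per-pulse optimum, attained by the Dolinar receiver
```

At n_bar = 0.1 the overlap is large (≈0.82), so even the optimal per-pulse decision errs often; this is the regime where joint detection pays off.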

Abstract 
Successive cancellation list decoding of polar codes provides very good performance for short to moderate block lengths. However, the list size required to approach the performance of maximum-likelihood decoding is still not well understood theoretically. This work identifies information-theoretic quantities that are closely related to this required list size. It also provides a natural approximation for these quantities that can be computed efficiently even for very long codes. Simulation results are provided for the binary erasure channel as well as the binary-input additive white Gaussian noise channel.
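As background for the BEC results, recall the standard polarization recursion on the binary erasure channel: one level of the polar transform turns a BEC(ε) into a worse channel BEC(2ε − ε²) and a better channel BEC(ε²). A minimal sketch of this textbook recursion (not the paper's list-size quantities):

```python
def polarize_bec(eps, n):
    """Erasure probabilities of the 2^n synthesized channels after n levels of
    the polar transform on BEC(eps): eps- = 2e - e^2 (worse), eps+ = e^2 (better)."""
    channels = [eps]
    for _ in range(n):
        channels = [f(e) for e in channels
                         for f in (lambda e: 2 * e - e * e,   # minus transform
                                   lambda e: e * e)]          # plus transform
    return channels
```

Since (2ε − ε²) + ε² = 2ε, the average erasure probability is conserved at every level, while the individual channels drift toward 0 or 1.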

The recursive projection-aggregation (RPA) decoding algorithm for Reed-Muller (RM) codes was recently introduced by Ye and Abbe. We show that the RPA algorithm is closely related to (weighted) belief-propagation (BP) decoding by interpreting it as a message-passing algorithm on a factor graph with redundant code constraints. We use this observation to introduce a novel decoder tailored to high-rate RM codes. The new algorithm relies on puncturing rather than projections and is called recursive puncturing-aggregation (RXA). We also investigate collapsed (i.e., non-recursive) versions of RPA and RXA and show some examples where they achieve similar performance with lower decoding complexity.
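The projection step of RPA combines the channel LLRs of symbol pairs whose XOR determines a bit of the projected (shorter) code; in the LLR domain this combination is the familiar box-plus rule, which is also the check-node update in BP. A hedged sketch: `project` is a hypothetical helper whose index pairing stands in for the actual coset structure of the RM projection.

```python
import math

def box_plus(l1, l2):
    """LLR of the XOR of two bits with LLRs l1, l2 (check-node / projection rule)."""
    return 2.0 * math.atanh(math.tanh(l1 / 2.0) * math.tanh(l2 / 2.0))

def project(llrs, pairs):
    """Project a channel-LLR vector onto a subcode by pairwise box-plus combining.
    The pairing `pairs` is a placeholder for the chosen coset structure."""
    return [box_plus(llrs[i], llrs[j]) for (i, j) in pairs]
```

Note the characteristic behavior: a completely uninformative observation (LLR 0) wipes out its partner, and the output sign is the product of the input signs.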

We consider the problem of estimating a $p$-dimensional vector $\beta$ from $n$ observations $Y=X\beta+W$, where $\beta_{j}\stackrel{\mathrm{i.i.d.}}{\sim}\pi$ for a real-valued distribution $\pi$ with zero mean and unit variance, $X_{ij}\stackrel{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,1)$, and $W_{i}\stackrel{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,\sigma^{2})$. In the asymptotic regime where $n/p\rightarrow\delta$ and $p/\sigma^{2}\rightarrow\mathsf{snr}$ for two fixed constants $\delta,\mathsf{snr}\in(0,\infty)$ as $p\rightarrow\infty$, the limiting (normalized) minimum mean-squared error (MMSE) has been characterized by a single-letter (additive Gaussian scalar) channel. In this paper, we show that if the MMSE function of the single-letter channel converges to a step function, then the limiting MMSE of estimating $\beta$ converges to a step function which jumps from 1 to 0 at a critical threshold. Moreover, we establish that the limiting mean-squared error of the (MSE-optimal) approximate message passing algorithm also converges to a step function with a larger threshold, providing evidence for the presence of a computational-statistical gap between the two thresholds.
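The AMP threshold in this setting is located by the state-evolution recursion $\tau_{t+1}^2 = \sigma^2 + \frac{1}{\delta}\,\mathrm{mmse}(1/\tau_t^2)$, where $\mathrm{mmse}(\cdot)$ is the scalar MMSE of the single-letter channel under the prior $\pi$. A minimal sketch, assuming a standard Gaussian prior purely for illustration (for that prior the MMSE is smooth, so no jump or gap appears):

```python
def mmse_gaussian(snr):
    """Scalar MMSE of a standard Gaussian signal observed in Gaussian noise at `snr`."""
    return 1.0 / (1.0 + snr)

def state_evolution(delta, sigma2, iters=100):
    """Iterate tau_{t+1}^2 = sigma^2 + (1/delta) * mmse(1/tau_t^2) to its fixed point.
    Initialized at the prior variance (= 1 here)."""
    tau2 = sigma2 + 1.0 / delta
    for _ in range(iters):
        tau2 = sigma2 + (1.0 / delta) * mmse_gaussian(1.0 / tau2)
    return tau2
```

For a prior whose scalar MMSE approaches a step function, the same recursion can get stuck at a bad fixed point above the information-theoretic threshold, which is the computational-statistical gap described in the abstract.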

We consider the weighted belief-propagation (WBP) decoder recently proposed by Nachmani et al., where different weights are introduced for each Tanner graph edge and optimized using machine-learning techniques. Our focus is on simple-scaling models that use the same weights across certain edges to reduce the storage and computational burden. The main contribution is to show that simple scaling with few parameters often achieves the same gain as the full parameterization. Moreover, several training improvements for WBP are proposed. For example, it is shown that minimizing average binary cross-entropy is suboptimal in general in terms of bit error rate (BER), and a new "soft-BER" loss is proposed which can lead to better performance. We also investigate parameter adapter networks (PANs) that learn the relation between the signal-to-noise ratio and the WBP parameters. As an example, for the (32, 16) Reed-Muller code with a highly redundant parity-check matrix, training a PAN with soft-BER loss gives near-maximum-likelihood performance assuming simple scaling with only three parameters.
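One plausible form of such a soft-BER surrogate replaces each hard bit decision with a sigmoid of the output LLR, so the loss is a differentiable estimate of the bit error rate rather than a cross-entropy. The exact loss used for WBP training may differ; the sign convention below (positive LLR favors bit 0) is our assumption.

```python
import math

def soft_ber_loss(llrs, bits):
    """Differentiable BER surrogate: sigmoid of each LLR, flipped by the true bit.
    `bits` are in {0, 1}; positive LLR is taken to favor bit 0 (an assumed convention)."""
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    # sigmoid(-l) is the soft probability of error when the true bit is 0,
    # sigmoid(l) when it is 1; average over all bit positions.
    return sum(sigmoid(-l) if b == 0 else sigmoid(l)
               for l, b in zip(llrs, bits)) / len(bits)
```

Unlike cross-entropy, this loss saturates for confidently wrong decisions, so it weights positions near the decision boundary, which is where BER is actually determined.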

This paper presents the first proof of polarization for the deletion channel with a constant deletion rate and a regular hidden-Markov input distribution. A key part of this work involves representing the deletion channel using a trellis and describing the plus and minus polar-decoding operations on this trellis. In particular, the plus and minus operations can be seen as combining adjacent trellis stages to yield a new trellis with half as many stages. Using this viewpoint, we prove a weak polarization theorem for standard polar codes on the deletion channel. To achieve strong polarization, we modify this scheme by adding guard bands of repeated zeros between various parts of the codeword. Using this approach, we obtain a scheme whose rate approaches the mutual information and whose probability of error decays exponentially in the cube root of the block length.

This paper focuses on the mutual information and minimum mean-squared error (MMSE) as a function of a matrix-valued signal-to-noise ratio (SNR) for a linear Gaussian channel with an arbitrary input distribution. As shown by Lamarca, the mutual information is a concave function of a positive semidefinite matrix, which we call the matrix SNR. This implies that the mapping from the matrix SNR to the MMSE matrix is monotone decreasing. Building upon these functional properties, we begin to construct a unifying framework that provides a bridge between classical information-theoretic inequalities, such as the entropy power inequality, and interpolation techniques used in statistical physics and random matrix theory. This framework provides new insight into the structure of phase transitions in coding theory and compressed sensing. In particular, it is shown that the parallel combination of linear channels with freely independent matrices can be characterized succinctly via free convolution.
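The monotone relationship between mutual information and MMSE echoes the scalar I-MMSE identity $\frac{dI}{d\,\mathrm{snr}} = \frac{1}{2}\,\mathrm{mmse}(\mathrm{snr})$ of Guo, Shamai, and Verdú. A quick numerical sanity check for the Gaussian-input special case (a scalar instance only, not the matrix-valued setting of the paper):

```python
import math

def mutual_info_gaussian(snr):
    """Mutual information (in nats) of a scalar Gaussian channel with Gaussian input."""
    return 0.5 * math.log(1.0 + snr)

def mmse_gaussian(snr):
    """Scalar MMSE of a standard Gaussian input at the given SNR."""
    return 1.0 / (1.0 + snr)

# I-MMSE check: dI/dsnr should equal mmse(snr)/2; estimate the derivative
# by a central finite difference at snr = 2.
snr, h = 2.0, 1e-6
deriv = (mutual_info_gaussian(snr + h) - mutual_info_gaussian(snr - h)) / (2.0 * h)
```

In the matrix-SNR setting of the abstract, the same identity generalizes to a gradient relation between the mutual information and the MMSE matrix, which is what makes the concavity and monotonicity statements equivalent.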