-
The recursive projection-aggregation (RPA) decoding algorithm for Reed-Muller (RM) codes was recently introduced by Ye and Abbe. We show that the RPA algorithm is closely related to (weighted) belief-propagation (BP) decoding by interpreting it as a message-passing algorithm on a factor graph with redundant code constraints. We use this observation to introduce a novel decoder tailored to high-rate RM codes. The new algorithm relies on puncturing rather than projections and is called recursive puncturing-aggregation (RXA). We also investigate collapsed (i.e., non-recursive) versions of RPA and RXA and show some examples where they achieve similar performance with lower decoding complexity.
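A minimal sketch of the connection described above: the RPA projection of channel log-likelihood ratios (LLRs) onto cosets of a one-dimensional subspace {0, b} combines the two LLRs of each coset with the box-plus operation, which is exactly the check-node update of BP decoding. Function names here are illustrative, not taken from the paper.

```python
import math

def boxplus(l1, l2):
    # Check-node (box-plus) LLR combination used in BP decoding.
    return 2.0 * math.atanh(math.tanh(l1 / 2.0) * math.tanh(l2 / 2.0))

def project_llrs(llr, b):
    """One projection step in the style of RPA decoding of RM codes.

    llr: list of 2**m channel LLRs, indexed by the points z of F_2^m.
    b:   nonzero integer in [1, 2**m), naming the subspace {0, b}.
    Returns one LLR per coset {z, z ^ b}; combining each pair with
    box-plus is precisely a BP check-node update over that pair.
    """
    seen = set()
    out = []
    for z in range(len(llr)):
        if z in seen:
            continue
        seen.update({z, z ^ b})
        out.append(boxplus(llr[z], llr[z ^ b]))
    return out
```

In the hard-decision view the same step reduces to XOR-ing the two coset bits, which is the familiar description of RM projection onto the quotient space.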
-
Discriminating between quantum states is a fundamental task in quantum information theory. Given two quantum states, ρ+ and ρ−, the Helstrom measurement distinguishes between them with minimal probability of error. However, finding and experimentally implementing the Helstrom measurement can be challenging for quantum states on many qubits. Due to this difficulty, there is great interest in identifying local measurement schemes that are close to optimal. In the first part of this work, we generalize previous work by Acin et al. (Phys. Rev. A 71, 032338) and show that a locally greedy (LG) scheme using Bayesian updating can optimally distinguish between any two states that can be written as a tensor product of arbitrary pure states. We then show that the same algorithm cannot distinguish tensor products of mixed states with vanishing error probability (even in a large subsystem limit), and introduce a modified locally-greedy (MLG) scheme with strictly better performance. In the second part of this work, we compare these simple local schemes with a general dynamic programming (DP) approach. The DP approach finds the optimal series of local measurements and optimal order of subsystem measurement to distinguish between the two tensor-product states.
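The Bayesian-updating part of a locally greedy scheme can be sketched as follows. This toy example measures each qubit of two candidate tensor products of pure qubit states in a fixed computational basis and updates the prior on hypothesis "+" by Bayes' rule after each outcome; the actual LG scheme additionally chooses the measurement basis greedily at each step, which this sketch omits. All names and the parametrization (qubit k of each hypothesis as cos(θ)|0> + sin(θ)|1>) are illustrative assumptions.

```python
import math

def lg_discriminate(thetas_plus, thetas_minus, outcomes, prior=0.5):
    """Bayesian-updating sketch for distinguishing two tensor products
    of pure qubit states, one qubit at a time.

    thetas_plus / thetas_minus: per-qubit angles for hypotheses +/-.
    outcomes: measurement results (0 or 1) per qubit, computational basis.
    Returns the hypothesis with the larger posterior probability.
    """
    p = prior
    for tp, tm, x in zip(thetas_plus, thetas_minus, outcomes):
        # Born rule: P(0) = cos^2(theta), P(1) = sin^2(theta).
        lik_p = math.cos(tp) ** 2 if x == 0 else math.sin(tp) ** 2
        lik_m = math.cos(tm) ** 2 if x == 0 else math.sin(tm) ** 2
        # Bayes update of the prior on hypothesis "+".
        p = p * lik_p / (p * lik_p + (1 - p) * lik_m)
    return "+" if p >= 0.5 else "-"
```

The dynamic-programming approach mentioned in the abstract would, in contrast, optimize over both the measurement bases and the order in which subsystems are measured.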
-
We consider the weighted belief-propagation (WBP) decoder recently proposed by Nachmani et al., where different weights are introduced for each Tanner graph edge and optimized using machine learning techniques. Our focus is on simple-scaling models that use the same weights across certain edges to reduce the storage and computational burden. The main contribution is to show that simple scaling with few parameters often achieves the same gain as the full parameterization. Moreover, several training improvements for WBP are proposed. For example, it is shown that minimizing average binary cross-entropy is suboptimal in general in terms of bit error rate (BER), and a new "soft-BER" loss is proposed which can lead to better performance. We also investigate parameter adapter networks (PANs) that learn the relation between the signal-to-noise ratio and the WBP parameters. As an example, for the (32, 16) Reed-Muller code with a highly redundant parity-check matrix, training a PAN with soft-BER loss gives near-maximum-likelihood performance assuming simple scaling with only three parameters.
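To make the "simple scaling" idea concrete, here is a sketch of normalized min-sum decoding, a classical special case in which a single shared weight alpha takes the place of per-edge weights on the check-to-variable messages. This is an illustration of the weight-sharing principle, not the paper's trained decoder; the flooding schedule and the single-alpha model are simplifying assumptions.

```python
def minsum_decode(H, llr, alpha=0.8, iters=10):
    """Normalized min-sum BP: one shared scaling weight alpha on all
    check-to-variable messages (a 'simple scaling' WBP variant).

    H: parity-check matrix as a list of 0/1 rows; llr: channel LLRs.
    Returns the hard-decision codeword estimate.
    """
    m, n = len(H), len(llr)
    c2v = [[0.0] * n for _ in range(m)]  # check-to-variable messages
    for _ in range(iters):
        # Total LLR per variable node (channel + all incoming messages).
        total = [llr[j] + sum(c2v[i][j] for i in range(m) if H[i][j])
                 for j in range(n)]
        for i in range(m):
            cols = [j for j in range(n) if H[i][j]]
            # Variable-to-check: total minus the message on this edge.
            v2c = {j: total[j] - c2v[i][j] for j in cols}
            for j in cols:
                others = [v2c[k] for k in cols if k != j]
                sign = 1.0
                for o in others:
                    sign = -sign if o < 0 else sign
                # Min-sum check update, scaled by the shared weight.
                c2v[i][j] = alpha * sign * min(abs(o) for o in others)
    total = [llr[j] + sum(c2v[i][j] for i in range(m) if H[i][j])
             for j in range(n)]
    return [0 if t >= 0 else 1 for t in total]
```

In the WBP setting described above, alpha (or a handful of such shared parameters) would be learned, e.g. by gradient descent on the proposed soft-BER loss, rather than fixed by hand.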