Title: An Enhanced Decoding Algorithm for Coded Compressed Sensing with Applications to Unsourced Random Access
Unsourced random access (URA) has emerged as a pragmatic framework for next-generation distributed sensor networks. Within URA, concatenated coding structures are often employed to ensure that the central base station can accurately recover the set of transmitted codewords during a given transmission period. Many URA algorithms employ independent inner and outer decoders, which can help reduce computational complexity at the expense of a loss in performance. In this article, an enhanced decoding algorithm is presented for a concatenated coding structure that combines any of a wide range of inner codes with an outer tree-based code. It is shown that this algorithmic enhancement has the potential to simultaneously improve error performance and decrease the computational complexity of the decoder. The enhanced decoding algorithm is applied to two existing URA algorithms, and its performance benefits are characterized. Findings are supported by numerical simulations.
Authors:
Award ID(s): 2131106, 1619085
Publication Date:
NSF-PAR ID: 10340671
Journal Name: Sensors
Volume: 22
Issue: 2
Page Range or eLocation-ID: 676
ISSN: 1424-8220
Sponsoring Org: National Science Foundation
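The concatenated structure described in the abstract splits each user's message into fragments, protects the fragments with an inner code, and ties them together across slots with an outer tree code whose parity bits let the receiver stitch fragments back into messages. The following is a minimal sketch of such a tree-based outer code only; the slot count, fragment sizes, and random parity matrices are assumed toy values, and it does not implement the article's enhanced (integrated inner/outer) decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

SLOTS = 4          # number of sub-blocks / slots (assumed toy value)
DATA_BITS = 6      # information bits per fragment
PARITY_BITS = 6    # parity bits per fragment (slot 0 carries none here)

# Random binary parity-generator matrices linking each slot to all earlier slots.
G = [rng.integers(0, 2, size=(DATA_BITS * s, PARITY_BITS)) for s in range(SLOTS)]

def encode(msg_bits):
    """Split a message into fragments and append tree-code parity bits."""
    frags, history = [], []
    for s in range(SLOTS):
        data = msg_bits[s * DATA_BITS:(s + 1) * DATA_BITS]
        parity = (np.concatenate(history) @ G[s]) % 2 if s else np.zeros(0, int)
        frags.append(np.concatenate([data, parity]).astype(int))
        history.append(data)
    return frags

def decode(slot_lists):
    """Stitch per-slot fragment lists into messages using parity consistency."""
    paths = [[f] for f in slot_lists[0]]
    for s in range(1, SLOTS):
        new_paths = []
        for path in paths:
            history = np.concatenate([f[:DATA_BITS] for f in path])
            expected = (history @ G[s]) % 2
            for frag in slot_lists[s]:
                if np.array_equal(frag[DATA_BITS:], expected):   # parity check
                    new_paths.append(path + [frag])
        paths = new_paths
    return [np.concatenate([f[:DATA_BITS] for f in p]) for p in paths]

# Two toy users transmit; the outer decoder stitches their fragments back together.
msgs = [rng.integers(0, 2, SLOTS * DATA_BITS) for _ in range(2)]
slot_lists = [list(frags) for frags in zip(*[encode(msg) for msg in msgs])]
recovered = decode(slot_lists)
print(len(recovered), "candidate messages recovered")
```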
More Like this
  1. We consider the problem of serving real-time flows over a multi-hop wireless network. Each flow is composed of packets that have strict deadlines, and the goal is to maximize the weighted timely throughput of the system. Consistent with recent developments using mm-wave communications, we assume that the links are directional, but are lossy, and have unknown probabilities of successful packet transmission. An average link utilization budget (similar to a power constraint) constrains the system. We pose the problem in the form of a Constrained Markov Decision Process (CMDP) with an unknown transition kernel. We use a duality approach to decompose the problem into an inner unconstrained MDP with link usage costs, and an outer link-cost update step. For the inner MDP, we develop model-based reinforcement learning algorithms that sample links by sending packets to learn the link statistics. While the first algorithm type samples links at will at the beginning and constructs the model, the second type is an online approach that can only use packets from flows to sample links that they traverse. The approach to the outer problem follows gradient descent. We characterize the sample complexity (number of packets transmitted) to obtain near-optimal policies, and show that while the basic online approach has a poorer sample complexity bound, it can be modified to obtain an online algorithm that has excellent empirical performance.
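The inner/outer decomposition described in item 1 can be illustrated with a deliberately simplified single-link toy model: the inner problem charges a price lambda per transmission and, with one-slot deadlines, is solved myopically, while the outer loop adjusts lambda by subgradient steps on the average-utilization constraint. Everything below (link success probability, budget, packet weights, probe count, step size) is an assumed toy choice, not the paper's multi-hop algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

p_success = 0.7      # true link success probability (unknown to the controller)
rho = 0.4            # average link-utilization budget
weights = rng.uniform(0.2, 1.0, size=10_000)   # per-slot packet weights (assumed)

# Model-based step: estimate the link statistic by sending probe packets.
probes = rng.random(200) < p_success
p_hat = probes.mean()

lam, eta = 0.0, 0.05
for _ in range(200):
    # Inner (penalized) problem: with a one-slot deadline the optimal policy is
    # myopic -- transmit iff the expected timely reward exceeds the link price.
    transmit = weights * p_hat > lam
    utilization = transmit.mean()
    # Outer dual step: raise the price if the budget is exceeded, lower it otherwise.
    lam = max(0.0, lam + eta * (utilization - rho))

timely_throughput = (weights * p_hat * transmit).mean()
print(f"lambda={lam:.3f}  utilization={utilization:.3f}  throughput={timely_throughput:.3f}")
```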
  2. In this paper, a method for joint source-channel coding (JSCC) based on concatenated spatially coupled low-density parity-check (SC-LDPC) codes is investigated. A construction consisting of two SC-LDPC codes is proposed: one for source coding and the other for channel coding, with a joint belief propagation-based decoder. Also, a novel windowed decoding (WD) scheme is presented with significantly reduced latency and complexity requirements. The asymptotic behavior for various graph node degrees is analyzed using a protograph-based Extrinsic Information Transfer (EXIT) chart analysis for both LDPC block codes with block decoding and for SC-LDPC codes with the WD scheme, showing robust performance for concatenated SC-LDPC codes. Simulation results show a notable performance improvement compared to existing state-of-the-art JSCC schemes based on LDPC codes with comparable latency and complexity constraints.
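A compact way to see both the spatial-coupling gain and the reduced-latency windowed decoding schedule mentioned in item 2 is the coupled density-evolution recursion for a regular (dv, dc, w) SC-LDPC ensemble on the binary erasure channel. The sketch below runs that recursion with a sliding decoding window; the ensemble parameters, erasure rate, window size, and iteration count are assumed toy values, and the recursion is a stand-in for the paper's protograph-based EXIT analysis rather than a reproduction of it.

```python
import numpy as np

dv, dc, w = 3, 6, 3      # variable degree, check degree, coupling width (toy choice)
L, eps = 50, 0.45        # chain length and BEC erasure probability (assumed)
W, iters = 10, 100       # decoding-window size and iterations per window position

x = np.full(L, eps)      # erasure probability at each spatial position

def de_update(x, pos):
    """One coupled density-evolution update at spatial position pos (BEC)."""
    def get(i):          # positions outside the chain are perfectly known (termination)
        return x[i] if 0 <= i < L else 0.0
    acc = 0.0
    for j in range(w):
        inner = np.mean([get(pos + j - k) for k in range(w)])
        acc += (1.0 - inner) ** (dc - 1)
    return eps * (1.0 - acc / w) ** (dv - 1)

# Windowed decoding schedule: only positions inside the current window are
# updated; once the window slides past a position its value is committed.
# For erasure rates below the coupled-ensemble threshold the decoding wave
# should drive the residual erasure probabilities toward zero.
for target in range(L):
    for _ in range(iters):
        for pos in range(target, min(target + W, L)):
            x[pos] = de_update(x, pos)

print("max residual erasure probability:", x.max())
```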
  3. Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered the individual code qubits being encoded with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code, and then imposed an outer discrete-variable code such as the surface code on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands a lot of resources with growing distance. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms. We first show the noise thresholds for two lifted product QLDPC code families, and then show the improvements of noise thresholds when the iterative decoder, a hardware-friendly min-sum algorithm (MSA), utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder in escaping harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions that arise from this work on channel capacity under GKP analog information, and on improving decoder design and analysis.
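Item 3's key ingredient, feeding GKP analog information to an iterative decoder, can be sketched as follows: the residual displacement of each GKP qubit is converted into a per-qubit log-likelihood ratio (using the usual two-peak truncation of the GKP lattice sum), and a syndrome-based min-sum decoder then uses those LLRs as priors instead of a uniform channel value. The parity-check matrix, noise level, and truncation below are assumed toy choices, not the paper's lifted-product codes or its sequential-schedule MSA.

```python
import numpy as np

rng = np.random.default_rng(2)
sqrt_pi = np.sqrt(np.pi)
sigma = 0.45                           # Gaussian displacement noise std (toy value)

# Toy binary parity-check matrix standing in for one half of a CSS QLDPC code.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)
m, n = H.shape

# GKP layer: displacement noise, hard decision, and analog residual per qubit.
disp = rng.normal(0.0, sigma, n)
nearest = np.rint(disp / sqrt_pi).astype(int)
error = nearest % 2                    # qubit-level flip after GKP correction
z = disp - nearest * sqrt_pi           # analog residual in [-sqrt(pi)/2, sqrt(pi)/2]

# Analog information: log P(no flip | z) / P(flip | z), truncating the GKP
# lattice sum to the two nearest peaks.
llr = ((sqrt_pi - np.abs(z)) ** 2 - z ** 2) / (2 * sigma ** 2)

syndrome = H @ error % 2

# Standard syndrome-based min-sum decoding using the GKP LLRs as priors.
msg = H * llr                          # variable-to-check messages
for _ in range(50):
    ext = np.zeros_like(msg)
    for c in range(m):
        vs = np.flatnonzero(H[c])
        for v in vs:
            others = msg[c, [u for u in vs if u != v]]
            sign = (-1) ** syndrome[c] * np.prod(np.sign(others))
            ext[c, v] = sign * np.min(np.abs(others))
    posterior = llr + ext.sum(axis=0)
    e_hat = (posterior < 0).astype(int)
    if np.array_equal(H @ e_hat % 2, syndrome):
        break
    msg = H * (llr + ext.sum(axis=0)) - ext   # exclude each check's own contribution

print("true error:", error, " decoded:", e_hat)
```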
  4. We consider autocorrelation-based low-complexity decoders for identifying Binary Chirp codewords from noisy signals in N = 2^m dimensions. The underlying algebraic structure enables dimensionality reduction from N complex to m binary dimensions, which can be used to reduce decoding complexity when decoding is performed successively over the m binary dimensions. Existing low-complexity decoders suffer from poor performance in scenarios with strong noise. This is problematic especially in a vector quantization scenario, where quantization noise power cannot be controlled in the system. We construct two improvements to existing algorithms: a geometrically inspired algorithm based on successive projections, and an algorithm based on adaptive decoding order selection. When combined with a breadth-first list decoder, these algorithms make it possible to approach the performance of exhaustive search with low complexity.
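To illustrate the ideas in item 4, per-dimension autocorrelation metrics, adaptive decoding order, and breadth-first list decoding, the sketch below restricts attention to the Walsh-Hadamard subfamily (zero chirp matrix), where shifting the index by e_i and multiplying exposes (-1)^(b_i). All parameters are assumed toy values, and this is a simplified stand-in rather than the paper's full binary-chirp reconstruction algorithm.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
m = 6
N = 2 ** m
LIST_SIZE = 3                 # how many least-reliable dimensions are listed over

def codeword(b):
    """Walsh-Hadamard codeword x_t = (-1)^(b.t) / sqrt(N) (zero chirp matrix)."""
    t = np.arange(N)
    bits = (t[:, None] >> np.arange(m)) & 1     # binary expansion of t
    return (-1.0) ** (bits @ b) / np.sqrt(N)

b_true = rng.integers(0, 2, m)
y = codeword(b_true) + rng.normal(0, 0.2, N)    # noisy observation

# Autocorrelation metric for each binary dimension: shifting the index by e_i
# and multiplying exposes (-1)^(b_i) on average.
metrics = np.array([np.sum(y * y[np.arange(N) ^ (1 << i)]) for i in range(m)])

# Adaptive order selection: trust the most reliable dimensions, and run a
# breadth-first list over the least reliable ones.
order = np.argsort(-np.abs(metrics))
b_hat = (metrics < 0).astype(int)               # hard decisions per dimension
unreliable = order[-LIST_SIZE:]

best, best_score = None, -np.inf
for flips in product([0, 1], repeat=LIST_SIZE):
    cand = b_hat.copy()
    cand[unreliable] ^= np.array(flips)
    score = np.dot(y, codeword(cand))           # re-score full candidates
    if score > best_score:
        best, best_score = cand, score

print("true bits:", b_true, " decoded:", best)
```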
  5. We consider the problem of coded distributed computing where a large linear computational job, such as a matrix multiplication, is divided into $k$ smaller tasks, encoded using an $(n,k)$ linear code, and performed over $n$ distributed nodes. The goal is to reduce the average execution time of the computational job. We provide a connection between the problem of characterizing the average execution time of a coded distributed computing system and the problem of analyzing the error probability of codes of length $n$ used over erasure channels. Accordingly, we present closed-form expressions for the execution time using binary random linear codes and for the best execution time any linear-coded distributed computing system can achieve. It is also shown that there exist good binary linear codes that attain, asymptotically, the best performance any linear code, not necessarily binary, can achieve. We also investigate the performance of coded distributed computing systems using polar and Reed-Muller (RM) codes, which benefit from low-complexity decoding and superior performance, respectively, as well as from explicit constructions. The proposed framework in this paper can enable efficient designs of distributed computing systems given the rich literature in channel coding theory.
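The connection stated in item 5 between average execution time and erasure-channel decoding failure can be checked numerically in a toy setting: with shifted-exponential worker times and an MDS $(n,k)$ code, E[T] equals the integral over time of the probability that the set of still-unfinished tasks forms an undecodable erasure pattern. The worker-time model and the MDS code choice below are assumptions for illustration, not the paper's binary random linear codes.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)
n, k = 10, 6            # (n, k) MDS code over n worker nodes (assumed toy values)
shift, rate = 1.0, 0.5  # shifted-exponential task-time parameters (assumed)

# Monte Carlo: with an MDS code the job finishes once the k fastest workers are done.
samples = shift + rng.exponential(1.0 / rate, size=(100_000, n))
t_mc = np.sort(samples, axis=1)[:, k - 1].mean()

# Erasure-channel view: at time t each task is still "erased" with probability
# P(task time > t); an MDS code fails iff fewer than k coded tasks have arrived,
# and E[T] is the integral over t of this decoding-failure probability.
def p_fail(t):
    p_erase = 1.0 if t < shift else float(np.exp(-rate * (t - shift)))
    q = 1.0 - p_erase   # probability that a given task has finished by time t
    return sum(comb(n, j) * q**j * p_erase**(n - j) for j in range(k))

ts = np.linspace(0.0, 60.0, 60001)
t_integral = sum(p_fail(t) for t in ts) * (ts[1] - ts[0])

print(f"Monte Carlo mean time: {t_mc:.3f}   erasure-channel integral: {t_integral:.3f}")
```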