Title: Finite-Support Capacity-Approaching Distributions for AWGN Channels
Previously, dynamic-assignment Blahut-Arimoto (DAB) was used to find capacity-achieving probability mass functions (PMFs) for binomial channels and molecular channels. As it turns out, DAB can efficiently identify capacity-achieving PMFs for a wide variety of channels. This paper applies DAB to power-constrained (PC) additive white Gaussian noise (AWGN) channels and amplitude-constrained (AC) AWGN channels. This paper modifies DAB to include a power constraint and finds low-cardinality PMFs that approach capacity on PC-AWGN channels. While a continuous Gaussian PDF is well known to be capacity-achieving on the PC-AWGN channel, DAB identifies low-cardinality PMFs within 0.01 bits of the mutual information provided by a Gaussian PDF. Recall the result of Ozarow and Wyner requiring a constellation cardinality of ⌈2^(C+1)⌉ to approach capacity C to within the asymptotic shaping loss of 1.53 dB at high SNR. PMFs found by DAB approach capacity with essentially no shaping loss and with cardinality less than 2^(C+1.2). As expected, DAB's numerical approach identifies PMFs with better mutual-information-vs.-SNR performance than the analytical approaches to finite-support constellations examined by Wu and Verdu. This paper also uses DAB to find capacity-achieving PMFs with small-cardinality support sets for AC-AWGN channels. The resulting evolution of capacity-achieving PMFs as a function of SNR is consistent with the approximate cardinality transition points of Sharma and Shamai.
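To make the setup concrete, the following sketch estimates the mutual information achieved by an arbitrary finite-support PMF on a unit-noise PC-AWGN channel and compares it with the Gaussian capacity 0.5*log2(1+SNR). It only illustrates the quantity DAB optimizes, not the DAB algorithm itself; the equally spaced, equiprobable 4-point constellation is a hypothetical example.

    # Minimal sketch: Monte Carlo estimate of the mutual information of a
    # finite-support PMF on a unit-noise PC-AWGN channel, compared against
    # the Gaussian capacity 0.5*log2(1+SNR). Illustrative only -- this is
    # not the authors' DAB algorithm, and the constellation is hypothetical.
    import numpy as np

    def awgn_mi(points, pmf, n_samples=200_000, seed=0):
        """I(X;Y) in bits for Y = X + N, N ~ N(0,1), X discrete on `points`."""
        rng = np.random.default_rng(seed)
        x = rng.choice(points, size=n_samples, p=pmf)
        y = x + rng.standard_normal(n_samples)
        # Mixture density f_Y(y) = sum_i pmf[i] * phi(y - points[i])
        dens = np.zeros(n_samples)
        for xi, pi in zip(points, pmf):
            dens += pi * np.exp(-0.5 * (y - xi) ** 2) / np.sqrt(2 * np.pi)
        h_y = -np.mean(np.log2(dens))              # differential entropy of Y
        h_noise = 0.5 * np.log2(2 * np.pi * np.e)  # h(Y|X) for unit-variance noise
        return h_y - h_noise

    snr_db = 6.0
    snr = 10 ** (snr_db / 10)
    # Hypothetical equally spaced 4-point constellation scaled to power = snr
    pts = np.array([-3.0, -1.0, 1.0, 3.0])
    pts *= np.sqrt(snr / np.mean(pts ** 2))
    pmf = np.full(4, 0.25)
    print("discrete input :", awgn_mi(pts, pmf))
    print("Gaussian cap.  :", 0.5 * np.log2(1 + snr))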
Award ID(s):
1911166
PAR ID:
10214910
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
2020 IEEE Information Theory Workshop (ITW)
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Low-capacity scenarios have become increasingly important in the technology of the Internet of Things (IoT) and the next generation of wireless networks. Such scenarios require efficient and reliable transmission over channels with an extremely small capacity. Within these constraints, the state-of-the-art coding techniques may not be directly applicable. Moreover, the prior work on the finite-length analysis of optimal channel coding provides inaccurate predictions of the limits in the low-capacity regime. In this paper, we study channel coding at low capacity from two perspectives: fundamental limits at finite length and code constructions. We first specify what a low-capacity regime means. We then characterize finite-length fundamental limits of channel coding in the low-capacity regime for various types of channels, including binary erasure channels (BECs), binary symmetric channels (BSCs), and additive white Gaussian noise (AWGN) channels. From the code construction perspective, we characterize the optimal number of repetitions for transmission over binary memoryless symmetric (BMS) channels, in terms of the code blocklength and the underlying channel capacity, such that the capacity loss due to the repetition is negligible. Furthermore, it is shown that capacity-achieving polar codes naturally adopt the aforementioned optimal number of repetitions.
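As a rough illustration of the repetition trade-off described above, the sketch below evaluates the per-channel-use capacity of a BEC after r-fold repetition, (1 - eps^r)/r, in a low-capacity regime. It is only a back-of-the-envelope check, not the paper's finite-length analysis, and the erasure probability is a hypothetical example.

    # Minimal sketch: per-channel-use capacity of a BEC after r-fold repetition.
    # Repeating each symbol r times turns BEC(eps) into BEC(eps**r) at 1/r the
    # rate, so the effective capacity is (1 - eps**r) / r. Illustrative only;
    # not the paper's finite-length analysis.
    def bec_repetition_capacity(eps, r):
        return (1.0 - eps ** r) / r

    eps = 0.99   # low-capacity regime: C = 1 - eps = 0.01 bit per channel use
    for r in (1, 2, 5, 10, 20, 50):
        print(r, bec_repetition_capacity(eps, r))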
  2. This paper focuses on the mutual information and minimum mean-squared error (MMSE) as a function of a matrix-valued signal-to-noise ratio (SNR) for a linear Gaussian channel with arbitrary input distribution. As shown by Lamarca, the mutual information is a concave function of a positive semi-definite matrix, which we call the matrix SNR. This implies that the mapping from the matrix SNR to the MMSE matrix is monotone decreasing. Building upon these functional properties, we start to construct a unifying framework that provides a bridge between classical information-theoretic inequalities, such as the entropy power inequality, and interpolation techniques used in statistical physics and random matrix theory. This framework provides new insight into the structure of phase transitions in coding theory and compressed sensing. In particular, it is shown that the parallel combination of linear channels with freely-independent matrices can be characterized succinctly via free convolution.
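The functional link between mutual information and MMSE that this framework generalizes can be checked numerically in the scalar case. The sketch below is a Monte Carlo verification of the classical scalar I-MMSE relation dI/dsnr = mmse(snr)/2 (in nats) for a BPSK input; it is illustrative only, uses a hypothetical SNR point, and does not involve the matrix-SNR machinery of the paper.

    # Minimal sketch: Monte Carlo check of the scalar I-MMSE relation
    # dI/dsnr = mmse(snr)/2 (nats) for a BPSK input on a Gaussian channel.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400_000
    x = rng.choice([-1.0, 1.0], size=n)
    z = rng.standard_normal(n)

    def mi_nats(snr):
        y = np.sqrt(snr) * x + z
        # I(X;Y) = log 2 - E[log(1 + exp(-2*sqrt(snr)*y*x))]  (BPSK, unit noise)
        return np.log(2) - np.mean(np.log1p(np.exp(-2 * np.sqrt(snr) * y * x)))

    def mmse(snr):
        y = np.sqrt(snr) * x + z
        return np.mean((x - np.tanh(np.sqrt(snr) * y)) ** 2)  # E[X|Y] = tanh(sqrt(snr)*Y)

    snr, d = 1.0, 1e-3
    lhs = (mi_nats(snr + d) - mi_nats(snr - d)) / (2 * d)   # numerical dI/dsnr
    print(lhs, mmse(snr) / 2)                                # should be close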
  3. Despite the substantial success of deep learning for modulation classification, models trained on a specific transmitter configuration and channel model often fail to generalize well to other scenarios with different transmitter configurations, wireless fading channels, or receiver impairments such as clock offset. This paper proposes Contrastive Learning with Self-Reconstruction, called CLSR-AMC, to learn good representations of signals resilient to channel changes. While the contrastive loss focuses on the differences between individual modulations, the reconstruction loss captures representative features of the signal. Additionally, we develop three data augmentation operators to emulate the impact of channel and hardware impairments without exhaustive modeling of different channel profiles. We perform extensive experimentation with commonly used datasets. We show that CLSR-AMC outperforms its counterpart based on contrastive learning for the same amount of labeled data by significant average accuracy gains of 24.29%, 17.01%, and 15.97% in additive white Gaussian noise (AWGN), Rayleigh+AWGN, and Rician+AWGN channels, respectively.
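The abstract does not spell out the three augmentation operators; the sketch below shows plausible channel/hardware-style augmentations for complex IQ frames (extra AWGN, a random carrier phase, and a small normalized frequency offset). These are illustrative guesses, not necessarily the operators used by CLSR-AMC.

    # Minimal sketch of channel/hardware-style augmentations for complex IQ
    # frames. The three operators here are illustrative guesses, not
    # necessarily those used by CLSR-AMC.
    import numpy as np

    rng = np.random.default_rng(0)

    def add_awgn(iq, snr_db):
        p = np.mean(np.abs(iq) ** 2)
        n0 = p / (10 ** (snr_db / 10))
        noise = np.sqrt(n0 / 2) * (rng.standard_normal(iq.shape)
                                   + 1j * rng.standard_normal(iq.shape))
        return iq + noise

    def random_phase(iq):
        return iq * np.exp(1j * rng.uniform(0, 2 * np.pi))

    def freq_offset(iq, max_cfo=1e-3):
        n = np.arange(iq.size)
        cfo = rng.uniform(-max_cfo, max_cfo)   # normalized frequency offset
        return iq * np.exp(2j * np.pi * cfo * n)

    frame = (rng.standard_normal(1024) + 1j * rng.standard_normal(1024)) / np.sqrt(2)
    aug = freq_offset(random_phase(add_awgn(frame, snr_db=10)))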
  4. This paper uses a mutual-information maximization paradigm to optimize the voltage levels written to cells in a Flash memory. To enable low latency, each page of Flash memory stores only one coded bit in each Flash memory cell. For example, triple-level cell (TLC) Flash has three bit channels, one for each of three pages, that together determine which of eight voltage levels is written to each cell. Each Flash page is required to store the same number of data bits, but the various bits stored in the cell typically do not have to provide the same mutual information. A modified version of dynamic-assignment Blahut-Arimoto (DAB) moves the constellation points and adjusts the probability mass function for each bit channel to increase the mutual information of the worst bit channel, with the goal of each bit channel providing the same mutual information. The resulting constellation provides essentially the same mutual information to each page while negligibly reducing the mutual information of the overall constellation. The optimized constellations feature points that are neither equally spaced nor equally likely. However, modern shaping techniques such as probabilistic amplitude shaping can provide coded modulations that support such constellations.
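A minimal sketch of the per-page quantity such an optimizer balances: the bit-channel mutual information of an 8-level cell under Gaussian read noise, computed separately for each of the three pages. The levels, labeling, and noise standard deviation below are hypothetical placeholders, not the paper's optimized constellation.

    # Minimal sketch: per-page (per-bit-channel) mutual information of an
    # 8-level cell under Gaussian read noise. Levels, labeling, and sigma are
    # hypothetical placeholders, not the paper's optimized constellation.
    import numpy as np

    levels = np.linspace(0.0, 7.0, 8)          # hypothetical write voltages
    labels = np.array([[int(b) for b in format(i, '03b')] for i in range(8)])
    sigma = 0.6
    rng = np.random.default_rng(0)
    n = 200_000

    idx = rng.integers(0, 8, size=n)
    y = levels[idx] + sigma * rng.standard_normal(n)
    # likelihoods p(y | level j), equiprobable levels
    lik = np.exp(-0.5 * ((y[:, None] - levels[None, :]) / sigma) ** 2)
    post = lik / lik.sum(axis=1, keepdims=True)    # P(level j | y)

    def h2(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    for k in range(3):                             # one bit channel per page
        p1 = post[:, labels[:, k] == 1].sum(axis=1)   # P(bit k = 1 | y)
        print("page", k, "MI ~", 1.0 - np.mean(h2(p1)))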
  5. An algorithm is proposed to encode low-density parity-check (LDPC) codes into codewords with a non-uniform distribution. This enables power-efficient signalling for asymmetric channels. We show gains of 0.9 dB for additive white Gaussian noise (AWGN) channels with on-off keying modulation using 5G LDPC codes. 
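A small sketch of why a non-uniform input distribution helps here: it estimates the mutual information of on-off keying on an AWGN channel as the 'on' probability varies under a fixed average-power constraint. The numbers are illustrative and do not reproduce the paper's 0.9 dB result.

    # Minimal sketch: mutual information of on-off keying on an AWGN channel
    # when the 'on' probability is non-uniform, under a fixed average-power
    # constraint. Illustrative only; not the paper's coding scheme or result.
    import numpy as np

    def ook_mi(p_on, avg_power, n=300_000, seed=0):
        rng = np.random.default_rng(seed)
        amp = np.sqrt(avg_power / p_on)     # 'on' amplitude meeting the power constraint
        x = np.where(rng.random(n) < p_on, amp, 0.0)
        y = x + rng.standard_normal(n)
        # mixture density of Y and conditional density given the sent symbol
        f_y = (1 - p_on) * np.exp(-0.5 * y ** 2) + p_on * np.exp(-0.5 * (y - amp) ** 2)
        f_y_given_x = np.exp(-0.5 * (y - x) ** 2)
        return np.mean(np.log2(f_y_given_x / f_y))

    for p in (0.5, 0.3, 0.2, 0.1):
        print(p, ook_mi(p, avg_power=1.0))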