Title: Capacity Analysis of Opportunistic Channel Bonding Over Multi-Channel WLANs Under Unsaturated Traffic
Award ID(s):
1816908
NSF-PAR ID:
10189208
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Communications
Volume:
68
Issue:
3
ISSN:
0090-6778
Page Range / eLocation ID:
1552 to 1566
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The fading broadcast channel (BC) with additive white Gaussian noise (AWGN), channel output feedback (COF), and channel state information (CSI) is considered. Perfect CSI is available at the receivers, and unit-delayed CSI, along with COF, is available at the transmitter. Under the assumption of memoryless fading, a posterior matching scheme that incorporates the additional CSI feedback into the coding scheme is presented. With COF, the achievable rates depend on the joint distribution of the fading process. Numerical examples show that COF enlarges the capacity region of the two-user fading AWGN BC. The coding scheme is, however, suboptimal, since parts of its achievable rate region are outperformed by superposition coding without COF.
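    For reference, the superposition-coding baseline mentioned in this abstract is the standard capacity-achieving scheme for the two-user degraded AWGN BC without feedback. Its rate region, with total power $P$, receiver noise variances $N_1 \le N_2$, and power split $\alpha \in [0,1]$, is a textbook result (not specific to this paper):

    ```latex
    R_1 \le \tfrac{1}{2}\log_2\!\left(1 + \frac{\alpha P}{N_1}\right),
    \qquad
    R_2 \le \tfrac{1}{2}\log_2\!\left(1 + \frac{(1-\alpha)P}{\alpha P + N_2}\right)
    ```

    The claim above is that COF can push some rate pairs beyond this region, while other parts of the posterior-matching region fall inside it.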
  2. Feature spaces in the deep layers of convolutional neural networks (CNNs) are often very high-dimensional and difficult to interpret. However, convolutional layers consist of multiple channels that are activated by different types of inputs, which suggests that more insight may be gained by studying the channels and how they relate to each other. In this paper, we first theoretically analyze channel-wise non-negative kernel (CW-NNK) regression graphs, which allow us to quantify the overlap between channels and, indirectly, the intrinsic dimension of the data representation manifold. We find that redundancy between channels is significant and varies with the layer depth and the level of regularization during training. Additionally, we observe a correlation between channel overlap in the last convolutional layer and generalization performance. Our experimental results demonstrate that these techniques can lead to a better understanding of deep representations.
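    The channel-overlap idea in this abstract can be illustrated with a much simpler proxy: pairwise cosine similarity between a layer's channel activation maps. This sketch is an assumption-laden simplification for intuition only, not the CW-NNK graph construction the paper actually uses:

    ```python
    import numpy as np

    def channel_overlap(activations):
        """Pairwise cosine similarity between channel activation maps.

        activations: array of shape (channels, height, width) taken from
        one convolutional layer for a single input. Returns a
        (channels, channels) similarity matrix; large off-diagonal
        entries suggest redundant channels. This is a simplified proxy
        for channel redundancy, not the paper's CW-NNK regression graph.
        """
        c = activations.shape[0]
        flat = activations.reshape(c, -1).astype(float)
        norms = np.linalg.norm(flat, axis=1, keepdims=True)
        norms[norms == 0] = 1.0  # guard against all-zero ("dead") channels
        unit = flat / norms
        return unit @ unit.T

    # Example: two identical channels plus one independent channel.
    rng = np.random.default_rng(0)
    a = rng.standard_normal((4, 4))
    acts = np.stack([a, a, rng.standard_normal((4, 4))])
    S = channel_overlap(acts)
    # S[0, 1] equals 1.0 (duplicated channel); S[0, 2] is typically far smaller.
    ```

    Averaging such a matrix over many inputs gives a crude per-layer redundancy score, whereas the CW-NNK approach builds sparse neighborhood graphs per channel to get a more principled estimate.
    
    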