Search for: All records

Creators/Authors contains: "Zhang, Jianzhong"


  1. In this paper, we introduce a neural network (NN)-based symbol detection scheme for Wi-Fi systems and its associated hardware implementation on software radios. Specifically, reservoir computing (RC), a special type of recurrent neural network (RNN), is adopted to conduct symbol detection for Wi-Fi receivers. Rather than introducing extra training overhead or an extra training set to facilitate RC-based symbol detection, a new training framework is introduced that exploits the signal structure of existing Wi-Fi protocols (e.g., the IEEE 802.11 standards): the RC-based symbol detector uses the inherent long/short training sequences and structured pilots sent by the Wi-Fi transmitter to learn the transmitted symbols online. In other words, our NN-based symbol detector requires no additional training sets beyond what existing Wi-Fi systems already provide. The introduced RC-based Wi-Fi symbol detector is implemented on a software-defined radio (SDR) platform to provide realistic and meaningful performance comparisons against the traditional Wi-Fi receiver. Over-the-air experiment results show that the introduced RC-based Wi-Fi symbol detector outperforms conventional Wi-Fi symbol detection methods in various environments, indicating the significance and relevance of our work.
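     The core RC idea the abstract describes — train only a linear readout of a fixed recurrent reservoir on known preamble/pilot symbols, then detect the payload — can be sketched with a minimal echo state network. Everything below (the toy two-tap channel, BPSK symbols, reservoir size, ridge regularization) is an illustrative assumption, not the paper's actual Wi-Fi implementation:

     ```python
     import numpy as np

     rng = np.random.default_rng(0)

     N_RES = 50              # reservoir size (hypothetical)
     SPECTRAL_RADIUS = 0.8   # keeps reservoir dynamics stable

     # Fixed random reservoir: only the readout is trained.
     W_in = rng.uniform(-0.5, 0.5, (N_RES, 1))
     W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
     W *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W)))

     def run_reservoir(inputs):
         # Collect reservoir states over time (rows = time steps).
         x = np.zeros(N_RES)
         states = []
         for u in inputs:
             x = np.tanh(W_in @ u + W @ x)
             states.append(x)
         return np.array(states)

     def channel(symbols):
         # Toy 2-tap ISI channel plus noise (stand-in for the real link).
         taps = np.array([1.0, 0.4])
         rx = np.convolve(symbols, taps)[: len(symbols)]
         return rx + 0.05 * rng.standard_normal(len(symbols))

     # Known training sequence (stand-in for 802.11 LTS/pilots).
     train_syms = rng.choice([-1.0, 1.0], size=200)
     S = run_reservoir(channel(train_syms)[:, None])

     # Ridge-regression readout fitted online from the known preamble.
     lam = 1e-2
     W_out = np.linalg.solve(S.T @ S + lam * np.eye(N_RES), S.T @ train_syms)

     # Detect unknown payload symbols with the trained readout.
     payload = rng.choice([-1.0, 1.0], size=500)
     S_pay = run_reservoir(channel(payload)[:, None])
     detected = np.sign(S_pay @ W_out)
     ber = np.mean(detected != payload)
     ```

     The key property this sketch shares with the abstract's scheme is that training uses only symbols the receiver already knows from the protocol, so no extra training overhead is added to the air interface.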
  2. Current studies that apply reinforcement learning (RL) to dynamic spectrum access (DSA) problems in wireless communications systems mainly focus on model-free RL. In practice, however, model-free RL requires a large number of samples to achieve good performance, making it impractical in real-time applications such as DSA. Combining model-free and model-based RL can potentially reduce the sample complexity while achieving a similar level of performance to model-free RL, as long as the learned model is accurate enough. However, in complex environments the learned model is never perfect. In this paper we combine model-free and model-based reinforcement learning and introduce an algorithm that can work with an imperfectly learned model to accelerate model-free RL. Results show that our algorithm achieves higher sample efficiency than a standard model-free RL algorithm and the Dyna algorithm (a standard algorithm that integrates model-based and model-free RL), with much lower computational complexity than Dyna. In the extreme case where the learned model is highly inaccurate, the Dyna algorithm performs even worse than the model-free RL algorithm, while our algorithm still outperforms the model-free baseline.
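     The model-free/model-based integration that the abstract compares against can be illustrated with a minimal tabular Dyna-Q sketch: each real step does a direct Q-learning update, records the transition in a learned model, and then replays simulated transitions from that model as planning. The toy chain MDP and all hyperparameters here are illustrative assumptions, not the paper's DSA environment or its proposed algorithm (which additionally copes with an imperfect model):

     ```python
     import random

     random.seed(0)

     # Toy deterministic chain MDP: states 0..4, action 0 = left,
     # action 1 = right, reward 1 on reaching terminal state 4.
     N_STATES, GOAL = 5, 4
     ALPHA, GAMMA, EPS, PLAN_STEPS = 0.1, 0.95, 0.1, 10

     def step(s, a):
         s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
         return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

     Q = {(s, a): 0.0 for s in range(N_STATES) for a in (0, 1)}
     model = {}  # learned deterministic model: (s, a) -> (s', r)

     def greedy(s):
         q0, q1 = Q[(s, 0)], Q[(s, 1)]
         return random.choice((0, 1)) if q0 == q1 else int(q1 > q0)

     def q_update(s, a, r, s2, done):
         target = r if done else r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])
         Q[(s, a)] += ALPHA * (target - Q[(s, a)])

     for _ in range(50):
         s, done = 0, False
         while not done:
             a = random.choice((0, 1)) if random.random() < EPS else greedy(s)
             s2, r, done = step(s, a)
             q_update(s, a, r, s2, done)   # direct (model-free) RL update
             model[(s, a)] = (s2, r)       # model learning
             for _ in range(PLAN_STEPS):   # planning on simulated experience
                 ps, pa = random.choice(list(model))
                 ps2, pr = model[(ps, pa)]
                 q_update(ps, pa, pr, ps2, ps2 == GOAL)
             s = s2

     # Greedy policy after training: move right toward the goal everywhere.
     policy = [greedy(s) for s in range(GOAL)]
     ```

     The planning loop is what buys sample efficiency — each real environment step yields many extra Q-updates from the learned model — and it is also the step that degrades when that model is inaccurate, which is the failure mode the abstract says its algorithm avoids.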