%ALi, Lianjun
%ALiu, Lingjia
%ABai, Jianan
%AChang, Hao-Hsuan
%AChen, Hao
%AAshdown, Jonathan
%AZhang, Jianzhong
%AYi, Yang
%D2020
%JIEEE Internet of Things Journal
%MOSTI ID: 10161733
%TAccelerating Model Free Reinforcement Learning with Imperfect Model Knowledge in Dynamic Spectrum Access
%XCurrent studies that apply reinforcement learning (RL) to dynamic spectrum access (DSA) problems in wireless communication systems focus mainly on model-free RL. In practice, however, model-free RL requires a large number of samples to achieve good performance, making it impractical for real-time applications such as DSA. Combining model-free and model-based RL can potentially reduce the sample complexity while achieving a similar level of performance as model-free RL, as long as the learned model is accurate enough. In complex environments, however, the learned model is never perfect. In this paper, we combine model-free and model-based reinforcement learning and introduce an algorithm that can work with an imperfectly learned model to accelerate model-free reinforcement learning. Results show that our algorithm achieves higher sample efficiency than a standard model-free RL algorithm and the Dyna algorithm (a standard algorithm integrating model-based and model-free RL), with much lower computational complexity than the Dyna algorithm. In the extreme case where the learned model is highly inaccurate, the Dyna algorithm performs even worse than the model-free RL algorithm, while our algorithm still outperforms the model-free RL algorithm.
%0Journal Article