Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay
improves sample efficiency by reusing past experiences, and convolutional
neural networks (CNNs) process high-dimensional inputs effectively. However,
such techniques demand high memory and computational bandwidth. In this paper,
we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a
simple modification of existing off-policy RL methods, to address these computational and memory requirements. To reduce the computational overhead of gradient
updates in CNNs, we freeze the lower layers of CNN encoders early in training
due to early convergence of their parameters. Additionally, we reduce memory
requirements by storing the low-dimensional latent vectors for experience replay
instead of high-dimensional images, enabling an adaptive increase in the replay
buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that SEER does not degrade the performance of RL agents while
significantly saving computation and memory across a diverse set of DeepMind
Control environments and Atari games. Finally, we show that SEER is useful for
computation-efficient transfer learning in RL because lower layers of CNNs extract
generalizable features, which can be used for different tasks and domains.
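As a rough illustration of the two ideas described above, the sketch below freezes the convolutional layers of an encoder and stores low-dimensional latent vectors in the replay buffer instead of raw images. The architecture, input size (84x84 frames), latent dimension, and class names are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal PyTorch-style sketch: freeze lower CNN layers, replay latents.
# All hyperparameters below are assumptions for illustration only.
import numpy as np
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """CNN encoder whose lower (convolutional) layers can be frozen
    once they have converged early in training."""

    def __init__(self, latent_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
        )
        # 32 * 39 * 39 is the flattened conv output for an 84x84 input.
        self.fc = nn.Linear(32 * 39 * 39, latent_dim)

    def forward(self, obs):
        h = self.conv(obs / 255.0)
        return self.fc(h.flatten(start_dim=1))

    def freeze_conv(self):
        # Stop gradient updates (and their compute cost) for the lower layers.
        for p in self.conv.parameters():
            p.requires_grad = False


class LatentReplayBuffer:
    """After freezing, transitions are stored as low-dimensional latent
    vectors rather than images, so the buffer capacity can be increased
    under the same memory budget."""

    def __init__(self, capacity, latent_dim):
        self.latents = np.empty((capacity, latent_dim), dtype=np.float32)
        self.capacity, self.idx, self.size = capacity, 0, 0

    def add(self, z):
        self.latents[self.idx] = z
        self.idx = (self.idx + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)


# Example usage: encode an observation and store only its latent vector.
encoder = Encoder()
encoder.freeze_conv()
buffer = LatentReplayBuffer(capacity=1_000_000, latent_dim=50)
obs = torch.zeros(1, 3, 84, 84)
with torch.no_grad():
    buffer.add(encoder(obs).squeeze(0).numpy())
```

In this sketch, storing a 50-dimensional float vector in place of an 84x84x3 image is what allows the buffer capacity to grow within a fixed memory budget, mirroring the adaptive capacity increase described in the abstract.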