Most prior approaches to offline reinforcement learning (RL) have taken an iterative
actor-critic approach involving off-policy evaluation. In this paper we show that
simply doing one step of constrained/regularized policy improvement using an
on-policy Q estimate of the behavior policy performs surprisingly well. This
one-step algorithm beats the previously reported results of iterative algorithms on
a large portion of the D4RL benchmark. The simple one-step baseline achieves
this strong performance without many of the tricks used by previously proposed
iterative algorithms and is more robust to hyperparameters. We argue that the
relatively poor performance of iterative approaches stems from the high variance
inherent in off-policy evaluation, which is magnified by repeatedly optimizing
policies against those high-variance estimates. In addition, we hypothesize
that the strong performance of the one-step algorithm is due to a combination of
favorable structure in the environment and behavior policy.
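As a concrete illustration of the one-step recipe described above, the following is a minimal tabular sketch, assuming a dataset of (s, a, r, s', a') tuples logged by an unknown behavior policy; all names and hyperparameters (one_step_offline_rl, alpha, q_iters, etc.) are illustrative and not from the paper. It estimates the behavior policy by counting actions, fits the behavior Q-function with SARSA-style on-policy TD updates, and then takes a single regularized improvement step.

```python
# Minimal tabular sketch of one-step offline RL (illustrative, not the
# paper's reference implementation).
import numpy as np

def one_step_offline_rl(dataset, n_states, n_actions,
                        gamma=0.99, alpha=1.0, q_iters=1000, lr=0.5):
    # 1) Estimate the behavior policy beta(a|s) from empirical action counts
    #    (smoothed so unseen states default to a uniform policy).
    counts = np.zeros((n_states, n_actions))
    for s, a, r, s2, a2 in dataset:
        counts[s, a] += 1
    beta = (counts + 1e-8) / (counts + 1e-8).sum(axis=1, keepdims=True)

    # 2) Estimate Q^beta with SARSA-style TD updates: the bootstrap target
    #    uses the action a' actually taken in the data, so the evaluation is
    #    on-policy and no off-policy correction is needed.
    Q = np.zeros((n_states, n_actions))
    for _ in range(q_iters):
        for s, a, r, s2, a2 in dataset:
            target = r + gamma * Q[s2, a2]
            Q[s, a] += lr * (target - Q[s, a])

    # 3) One step of regularized policy improvement: an exponentially
    #    weighted policy pi(a|s) proportional to beta(a|s) * exp(Q(s,a)/alpha).
    logits = np.log(beta) + Q / alpha
    logits -= logits.max(axis=1, keepdims=True)  # for numerical stability
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)
    return pi, Q, beta
```

The exponentially weighted policy in step 3 is the closed-form solution to the reverse-KL-regularized objective max_pi E_pi[Q^beta] - alpha * KL(pi || beta), with alpha trading off improvement against staying close to the behavior policy; other constrained or regularized improvement operators could be substituted at that step.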