Title: Farsighter: Efficient Multi-Step Exploration for Deep Reinforcement Learning
Award ID(s):
1901218
PAR ID:
10485259
Author(s) / Creator(s):
Publisher / Repository:
SCITEPRESS - Science and Technology Publications
Date Published:
ISBN:
978-989-758-623-1
Page Range / eLocation ID:
380 to 391
Format(s):
Medium: X
Location:
Lisbon, Portugal
Sponsoring Org:
National Science Foundation
More Like this
  1. Thompson sampling (TS) is one of the most popular exploration techniques in reinforcement learning (RL). However, most TS algorithms with theoretical guarantees are difficult to implement and do not generalize to deep RL. While the emerging approximate sampling-based exploration schemes are promising, most existing algorithms are specific to linear Markov decision processes (MDPs) with suboptimal regret bounds, or use only the most basic samplers such as Langevin Monte Carlo. In this work, we propose an algorithmic framework that incorporates different approximate sampling methods with the recently proposed Feel-Good Thompson Sampling (FGTS) approach (Zhang, 2022; Dann et al., 2021), which was previously known to be computationally intractable in general. When applied to linear MDPs, our regret analysis yields the best known dependency of regret on dimensionality, surpassing existing randomized algorithms. Additionally, we provide explicit sampling complexity for each employed sampler. Empirically, we show that in tasks where deep exploration is necessary, our proposed algorithms that combine FGTS and approximate sampling perform significantly better than other strong baselines. On several challenging games from the Atari 57 suite, our algorithms achieve performance that is either better than or on par with strong baselines from the deep RL literature.
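To make the "approximate sampling for exploration" idea in the abstract above concrete, here is a minimal sketch of Langevin Monte Carlo (LMC) used as an approximate Thompson sampler on a linear-Gaussian bandit. This is only an illustration under assumed settings, not the paper's algorithm: the Feel-Good (FGTS) bonus term is omitted, and every name and hyperparameter below (features, lmc_sample, step sizes, etc.) is an illustrative assumption.

# Sketch: approximate Thompson sampling via Langevin Monte Carlo on a
# linear-Gaussian bandit. Illustrative only; not the paper's FGTS algorithm.
import numpy as np

rng = np.random.default_rng(0)
d, n_actions, n_rounds = 5, 10, 100
features = rng.normal(size=(n_actions, d))   # fixed action features (assumed setup)
theta_true = rng.normal(size=d)              # unknown reward parameter
noise_std, prior_var = 0.5, 1.0

X, y = [], []                                # observed (feature, reward) pairs

def grad_log_posterior(theta):
    # Gradient of the Gaussian log-prior plus the Gaussian log-likelihood.
    g = -theta / prior_var
    if X:
        Xa, ya = np.asarray(X), np.asarray(y)
        g = g + Xa.T @ (ya - Xa @ theta) / noise_std**2
    return g

def lmc_sample(theta, n_steps=50):
    # LMC: noisy gradient ascent on the log-posterior yields an approximate
    # posterior sample; the injected Gaussian noise is what drives exploration.
    step = 0.1 / (1.0 / prior_var + len(X) / noise_std**2)
    for _ in range(n_steps):
        theta = (theta + step * grad_log_posterior(theta)
                 + np.sqrt(2.0 * step) * rng.normal(size=d))
    return theta

theta_sample = np.zeros(d)
for t in range(n_rounds):
    theta_sample = lmc_sample(theta_sample)          # approximate posterior sample
    a = int(np.argmax(features @ theta_sample))      # act greedily w.r.t. the sample
    r = features[a] @ theta_true + noise_std * rng.normal()
    X.append(features[a]); y.append(r)

print("best action (sampled vs. true):",
      int(np.argmax(features @ theta_sample)), int(np.argmax(features @ theta_true)))

In the framework described in the abstract, FGTS would additionally tilt the sampled posterior toward optimistic value estimates, and the plain LMC sampler above could be swapped for other approximate samplers; this sketch corresponds only to the basic sampling-based exploration baseline.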