%0 Journal Article
%A Zhu, Yi
%A Dong, Jing
%A Lam, Henry
%T Uncertainty Quantification and Exploration for Reinforcement Learning
%J Operations Research
%D 2023
%M OSTI ID: 10401412
%X We investigate statistical uncertainty quantification for reinforcement learning (RL) and its implications for exploration policy. Despite the ever-growing literature on RL applications, fundamental questions about inference and error quantification, such as large-sample behavior, appear to remain largely open. In this paper, we fill this gap in the literature by studying the central limit theorem behavior of estimated Q-values and value functions under various RL settings. In particular, we explicitly identify closed-form expressions for the asymptotic variances, which allow us to efficiently construct asymptotically valid confidence regions for key RL quantities. Furthermore, we use these asymptotic expressions to design an effective exploration strategy, which we call Q-value-based Optimal Computing Budget Allocation (Q-OCBA). The policy relies on maximizing the relative discrepancies among the Q-value estimates. Numerical experiments show that our exploration strategy outperforms benchmark policies. Funding: This work was supported by the National Science Foundation (1720433).
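As a rough illustration of the kind of interval the abstract describes (this is a generic textbook sketch, not the paper's own derivation; the symbols \hat{Q}_n(s,a), \sigma^2(s,a), and the sample size n are placeholders, with \hat{\sigma}^2(s,a) standing in for the closed-form asymptotic variance the paper identifies), a central limit theorem for an estimated Q-value yields an asymptotically valid confidence interval of the standard form:

% Illustrative sketch only: \hat{\sigma}^2(s,a) is a placeholder for the
% paper's closed-form asymptotic variance; n is the number of samples.
\[
  \sqrt{n}\,\bigl(\hat{Q}_n(s,a) - Q(s,a)\bigr) \;\Rightarrow\; \mathcal{N}\bigl(0,\ \sigma^2(s,a)\bigr),
  \qquad
  \hat{Q}_n(s,a) \pm z_{1-\alpha/2}\,\frac{\hat{\sigma}(s,a)}{\sqrt{n}}.
\]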