Title: Nonasymptotic Analysis of Monte Carlo Tree Search
In this work, we consider the popular tree-based search strategy within the framework of reinforcement learning, the Monte Carlo tree search (MCTS), in the context of the infinite-horizon discounted-cost Markov decision process (MDP). Although MCTS is believed to provide an approximate value function for a given state with enough simulations, the claimed proof of this property is incomplete. This is because the variant of MCTS, the upper confidence bound for trees (UCT), analyzed in prior works, uses a “logarithmic” bonus term for balancing exploration and exploitation within the tree-based search, following the insights from the stochastic multiarmed bandit (MAB) literature. In effect, such an approach assumes that the regret of the underlying recursively dependent nonstationary MABs concentrates around its mean exponentially in the number of steps, which is unlikely to hold, even for stationary MABs. As the key contribution of this work, we establish a polynomial concentration property of regret for a class of nonstationary MABs. This in turn establishes that MCTS with an appropriate polynomial, rather than logarithmic, bonus term in UCB has the claimed property. Interestingly enough, empirically successful approaches use a similar polynomial form of MCTS, as suggested by our result. Using this as a building block, we argue that MCTS, combined with nearest neighbor supervised learning, acts as a “policy improvement” operator; that is, because of the combination with supervised learning, it iteratively improves the value function approximation for all states, despite evaluating at only finitely many states. In effect, we establish that to learn an ε-approximation of the value function with respect to the $\ell_\infty$ norm, MCTS combined with nearest neighbor requires a sample size scaling as $\widetilde{O}\big(\varepsilon^{-(d+4)}\big)$, where d is the dimension of the state space. This is nearly optimal because of a minimax lower bound of $\widetilde{\Omega}\big(\varepsilon^{-(d+2)}\big)$, suggesting the strength of the variant of MCTS we propose here and our resulting analysis.
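The polynomial bonus term at the heart of this result is easy to illustrate. Below is a minimal Python sketch of UCT-style action selection with such a bonus; the function name, the constants beta and alpha, and the exact shape of the bonus are illustrative assumptions, not the paper's exact expressions, which tie these quantities to the discount factor and the node's depth in the search tree.

    import math

    def select_action(stats, beta=1.0, alpha=0.5):
        """stats maps each action to a (visit_count, total_value) pair.

        Constants beta and alpha are illustrative placeholders; the paper
        derives specific values from the MDP's discount factor and depth.
        """
        total_visits = sum(n for n, _ in stats.values())
        best_action, best_score = None, -math.inf
        for action, (n, total_value) in stats.items():
            if n == 0:
                return action  # play every action once before comparing scores
            mean = total_value / n
            # Polynomial exploration bonus: total_visits**alpha / n, in place
            # of classical UCT's logarithmic sqrt(2 * log(total_visits) / n).
            score = mean + beta * total_visits**alpha / n
            if score > best_score:
                best_action, best_score = action, score
        return best_action

    # Example: two actions, one visited three times, one once.
    print(select_action({0: (3, 2.1), 1: (1, 0.9)}))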
Award ID(s):
1955997
PAR ID:
10381247
Author(s) / Creator(s):
Shah, Devavrat; Xie, Qiaomin; Xu, Zhi
Date Published:
Journal Name:
Operations Research
ISSN:
0030-364X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Monte-Carlo planning, as exemplified by Monte-Carlo Tree Search (MCTS), has demonstrated remarkable performance in applications with finite spaces. In this paper, we consider Monte-Carlo planning in an environment with continuous state-action spaces, a much less understood problem with important applications in control and robotics. We introduce POLY-HOOT, an algorithm that augments MCTS with a continuous-armed bandit strategy named Hierarchical Optimistic Optimization (HOO) (Bubeck et al., 2011). Specifically, we enhance HOO by using an appropriate polynomial, rather than logarithmic, bonus term in the upper confidence bounds (a sketch of this modification follows this item). Such a polynomial bonus is motivated by its empirical successes in AlphaGo Zero (Silver et al., 2017b), as well as its significant role in achieving theoretical guarantees of finite-space MCTS (Shah et al., 2019). We investigate, for the first time, the regret of the enhanced HOO algorithm in non-stationary bandit problems. Using this result as a building block, we establish non-asymptotic convergence guarantees for POLY-HOOT: the value estimate converges to an arbitrarily small neighborhood of the optimal value function at a polynomial rate. We further provide experimental results that corroborate our theoretical findings.
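    As a companion to the polynomial-bonus sketch above, here is a hypothetical Python sketch of how a HOO-style node score might swap the logarithmic confidence term for a polynomial one. The HooNode type, the constants alpha, nu, and rho, and the exact bonus shape are assumptions for illustration, not POLY-HOOT's actual expressions.

        import math
        from dataclasses import dataclass

        @dataclass
        class HooNode:
            depth: int          # depth of the node's cell in the hierarchical partition
            pulls: int          # times an arm inside this cell has been played
            reward_sum: float   # sum of rewards observed for those plays

        def upper_bound(node, total_pulls, alpha=0.5, nu=1.0, rho=0.5):
            """Optimistic score of a cell: empirical mean + polynomial bonus
            + diameter term nu * rho**depth (all constants illustrative)."""
            if node.pulls == 0:
                return math.inf  # unexplored cells are maximally optimistic
            mean = node.reward_sum / node.pulls
            bonus = total_pulls**alpha / node.pulls   # polynomial, not logarithmic
            diameter = nu * rho**node.depth           # payoff variation within cell
            return mean + bonus + diameter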
  2. We consider the following general network design problem. The input is an asymmetric metric (V, c), a root $r \in V$, a monotone submodular function $f: 2^V \to \mathbb{R}_{\ge 0}$, and a budget B. The goal is to find an r-rooted arborescence T of cost at most B that maximizes f(T). Our main result is a simple quasi-polynomial time $O\!\left(\frac{\log k}{\log\log k}\right)$-approximation algorithm for this problem, in which $k$ is the number of vertices in an optimal solution. As a consequence, we obtain an $O\!\left(\frac{\log^2 k}{\log\log k}\right)$-approximation algorithm for directed (polymatroid) Steiner tree in quasi-polynomial time. We also extend our main result to a setting with additional length bounds at vertices, which leads to improved [Formula: see text]-approximation algorithms for the single-source buy-at-bulk and priority Steiner tree problems. For the usual directed Steiner tree problem, our result matches the best previous approximation ratio but improves significantly on the running time. For polymatroid Steiner tree and single-source buy-at-bulk, our result improves prior approximation ratios by a logarithmic factor. For directed priority Steiner tree, our result seems to be the first nontrivial approximation ratio. Under certain complexity assumptions, our approximation ratios are the best possible (up to constant factors).
  3. We study linear Fisher markets with satiation. In these markets, sellers have earning limits, and buyers have utility limits. Beyond applications in economics, they arise in the context of maximizing Nash social welfare when allocating indivisible items to agents. In contrast to markets with either earning or utility limits, markets with both limits have not been studied before. They turn out to have fundamentally different properties. In general, the existence of competitive equilibria is not guaranteed. We identify a natural property of markets (termed money clearing) that implies existence. We show that the set of equilibria is not always convex, answering a question posed in the literature. We design an FPTAS to compute an approximate equilibrium and prove that the problem of computing an exact equilibrium lies in the complexity class continuous local search (CLS; i.e., the intersection of polynomial local search (PLS) and polynomial parity arguments on directed graphs (PPAD)). For a constant number of buyers or goods, we give a polynomial-time algorithm to compute an exact equilibrium. We show how (approximate) equilibria can be rounded and provide the first constant-factor approximation algorithm (with a factor of 2.404) for maximizing Nash social welfare when agents have capped linear (also known as budget-additive) valuations. Finally, we significantly improve the approximation hardness for additive valuations to $\sqrt{8/7} > 1.069$. Funding: J. Garg was supported by the National Science Foundation [Grant CCF-1942321 (CAREER)]. M. Hoefer was supported by Deutsche Forschungsgemeinschaft [Grants Ho 3831/5-1, Ho 3831/6-1, and Ho 3831/7-1].
  4. In this study, we simulated the algorithmic performance of a small neutral atom quantum computer and compared its performance when operating with all-to-all versus nearest-neighbor connectivity. This comparison was made using a suite of algorithmic benchmarks developed by the Quantum Economic Development Consortium. Circuits were simulated with a noise model consistent with experimental data from [Nature 604, 457 (2022)]. We find that all-to-all connectivity improves simulated circuit fidelity by [Formula: see text]–[Formula: see text], compared to nearest-neighbor connectivity. 
  5.
    We study the dynamic assortment planning problem, where for each arriving customer, the seller offers an assortment of substitutable products and the customer makes the purchase among offered products according to an uncapacitated multinomial logit (MNL) model. Because all the utility parameters of the MNL model are unknown, the seller needs to simultaneously learn customers’ choice behavior and make dynamic decisions on assortments based on the current knowledge. The goal of the seller is to maximize the expected revenue, or, equivalently, to minimize the expected regret. Although the dynamic assortment planning problem has received increasing attention in revenue management, most existing policies require the estimation of the mean utility for each product, and the final regret usually involves the number of products $N$. The optimal regret of the dynamic assortment planning problem under the most basic and popular choice model, the MNL model, is still open. By carefully analyzing a revenue potential function, we develop a trisection-based policy combined with adaptive confidence bound construction, which achieves an item-independent regret bound of $\widetilde{O}(\sqrt{T})$, where $T$ is the length of the selling horizon (a generic sketch of trisection search follows this item). We further establish the matching lower bound result to show the optimality of our policy. There are two major advantages of the proposed policy. First, the regret of all our policies has no dependence on $N$. Second, our policies are almost assumption-free: there is no assumption on mean utility nor any “separability” condition on the expected revenues for different assortments. We also extend our trisection search algorithm to capacitated MNL models and obtain the optimal regret $\widetilde{O}(\sqrt{T})$ (up to logarithmic factors) without any assumption on the mean utility parameters of items.
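    The trisection idea in item 5 can be sketched generically. The toy Python version below maximizes a unimodal scalar function using exact evaluations; the paper's policy instead applies the same interval-narrowing step to a revenue potential function known only through adaptive confidence bounds, so this is a sketch of the search skeleton, not the actual policy.

        def trisect_max(f, lo, hi, tol=1e-6):
            """Locate the maximizer of a unimodal function f on [lo, hi]
            by repeatedly discarding the worse outer third of the interval."""
            while hi - lo > tol:
                m1 = lo + (hi - lo) / 3.0
                m2 = hi - (hi - lo) / 3.0
                if f(m1) < f(m2):
                    lo = m1   # by unimodality, the maximizer is not in [lo, m1]
                else:
                    hi = m2   # by unimodality, the maximizer is not in [m2, hi]
            return (lo + hi) / 2.0

        # Example: a concave revenue proxy on [0, 1] peaks at 0.5.
        print(round(trisect_max(lambda x: x * (1.0 - x), 0.0, 1.0), 3))  # 0.5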