Model-based Bayesian Reinforcement Learning (BRL) provides a principled solution to the exploration-exploitation trade-off, but such methods typically assume a fully observable environment. The few Bayesian RL methods that are applicable in partially observable domains, such as the Bayes-Adaptive POMDP (BA-POMDP), scale poorly. To address this issue, we introduce the Factored BA-POMDP model (FBA-POMDP), a framework that learns a compact model of the dynamics by exploiting the underlying structure of a POMDP. The FBA-POMDP framework casts the problem as a planning task, for which we adapt the Monte-Carlo Tree Search planning algorithm and develop a belief tracking method to approximate the joint posterior over the state and model variables. Our empirical results show that this method outperforms a number of BRL baselines, learns efficiently when the factorization is known, and can learn both the factorization and the model parameters simultaneously.
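To make the belief-tracking idea concrete, here is a minimal Python sketch (not the paper's implementation) of a particle filter over joint (state, model) hypotheses, the kind of posterior a Bayes-adaptive POMDP agent must approximate. The two-state toy dynamics, the parameter names, and the omission of actions are all simplifying assumptions for illustration.

```python
import copy
import random

class Particle:
    """One joint hypothesis: a hidden state plus model parameters."""
    def __init__(self, state, p_stay, p_correct_obs):
        self.state = state                  # hidden-state hypothesis
        self.p_stay = p_stay                # model parameter: P(state unchanged)
        self.p_correct_obs = p_correct_obs  # model parameter: P(obs matches state)

def transition(particle):
    """Sample a next state from the particle's own model hypothesis."""
    if random.random() > particle.p_stay:
        particle.state = 1 - particle.state  # toy two-state flip dynamics

def obs_likelihood(particle, obs):
    """Probability of the observation under the particle's model."""
    return particle.p_correct_obs if obs == particle.state else 1.0 - particle.p_correct_obs

def update_belief(particles, obs):
    """One belief update: propagate every particle, weight it by the
    observation likelihood under its own model, then resample."""
    weights = []
    for p in particles:
        transition(p)
        weights.append(obs_likelihood(p, obs))
    if sum(weights) == 0.0:  # degenerate case: fall back to uniform weights
        weights = [1.0] * len(particles)
    resampled = random.choices(particles, weights=weights, k=len(particles))
    return [copy.deepcopy(p) for p in resampled]  # break aliasing between copies

# Usage: draw particles from a prior over states and model parameters,
# then condition on an incoming observation.
belief = [Particle(state=random.randint(0, 1),
                   p_stay=random.uniform(0.5, 1.0),
                   p_correct_obs=random.uniform(0.5, 1.0))
          for _ in range(1000)]
belief = update_belief(belief, obs=1)
```

In the factored setting described by the paper, each model hypothesis would additionally carry a candidate factorization structure, so that resampling concentrates the posterior on both the right structure and the right parameters.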
Planning to plan: a Bayesian model for optimizing the depth of decision tree search
Planning, the process of evaluating the future consequences of actions, is typically formalized as search over a decision tree. This procedure increases expected rewards but is computationally expensive. Past attempts to understand how people mitigate the costs of planning have been guided by heuristics or the accumulation of prior experience, both of which are intractable in novel, high-complexity tasks. In this work, we propose a normative framework for optimizing the depth of tree search. Specifically, we model a metacognitive process via Bayesian inference to compute optimal planning depth. We show that our model makes sensible predictions over a range of parameters without relying on retrospection. Integrating past experiences into the model produces results consistent with the transition from goal-directed to habitual behavior over time, and with the uncertainty associated with prospective and retrospective estimates. Finally, we derive an online variant of our model that replicates these results.
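As a rough illustration of the cost-benefit calculation such a framework formalizes, the Python sketch below picks a search depth by trading off an assumed diminishing expected gain from deeper search against a linear planning cost. The geometric-gain form and the constants are illustrative assumptions, not the paper's actual Bayesian model.

```python
def expected_gain(depth, p_improve=0.6, reward=1.0):
    """Assumed diminishing returns: each extra ply of search improves the
    chosen action with probability p_improve."""
    return reward * (1.0 - (1.0 - p_improve) ** depth)

def optimal_depth(max_depth=20, cost_per_ply=0.05):
    """Choose the depth maximizing expected gain minus planning cost."""
    best_d, best_v = 0, float("-inf")
    for d in range(max_depth + 1):
        v = expected_gain(d) - cost_per_ply * d
        if v > best_v:
            best_d, best_v = d, v
    return best_d

print(optimal_depth())  # stops deepening once the marginal gain drops below the cost
```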
- Award ID(s): 2008331
- PAR ID: 10377457
- Date Published:
- Journal Name: Proceedings of the Annual Meeting of the Cognitive Science Society
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Using sampling to estimate the connectivity of high-dimensional configuration spaces has been the theoretical underpinning of effective sampling-based motion planners. Typical strategies build either a roadmap or a tree as the underlying search structure that connects sampled configurations, with a focus on guaranteeing completeness and optimality as the number of samples tends to infinity. Roadmap-based planners allow preprocessing the space and can solve multiple kinematic motion planning problems, but they need a steering function to connect pairs of states. Such steering functions are difficult to define for kinodynamic systems, which limits the applicability of roadmaps to motion planning problems with dynamical systems. Recent advances in the analysis of single-query tree-based planners have shown that forward search trees built from random propagations are asymptotically optimal. The current work leverages these results and proposes a multi-query framework for kinodynamic planning. Bundles of kinodynamic edges can be sampled to cover the state space before the query arrives. Then, given a motion planning query, the connectivity of the state space reachable from the start can be recovered by a forward search tree that reasons about a local neighborhood of the edge bundle from each tree node. The work demonstrates theoretically that considering any constant radial neighborhood during this process is sufficient to guarantee asymptotic optimality. Experimental validation on five- and twelve-dimensional simulated systems also highlights the ability of the proposed edge bundles to express high-quality kinodynamic solutions. The approach consistently finds higher-quality solutions than SST and RRT, often with faster initial solution times, and the strategy of sampling kinodynamic edges is demonstrated to be a promising new paradigm.
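The offline/online split can be sketched in a few lines of Python. This is a toy stand-in for the approach, not the authors' implementation: the 1-D double-integrator dynamics, the bounds, and the radius are assumed values chosen for illustration.

```python
import math
import random

def propagate(state, control, dt=0.1):
    """Toy kinodynamic propagation: a 1-D point mass (position, velocity)."""
    x, v = state
    return (x + v * dt, v + control * dt)

def sample_edge_bundle(n_edges, bounds=(-1.0, 1.0)):
    """Offline phase: sample (start_state, control, end_state) edges to
    cover the state space before any query arrives."""
    bundle = []
    for _ in range(n_edges):
        start = (random.uniform(*bounds), random.uniform(*bounds))
        u = random.uniform(-1.0, 1.0)
        bundle.append((start, u, propagate(start, u)))
    return bundle

def grow_tree(start, bundle, radius=0.2, iters=500):
    """Online phase: grow a forward search tree, reusing the controls of
    bundle edges whose start lies in a constant radial neighborhood."""
    tree = {start: None}  # maps node -> parent
    for _ in range(iters):
        node = random.choice(list(tree))
        near = [e for e in bundle if math.dist(e[0], node) < radius]
        if near:
            _, u, _ = random.choice(near)
            child = propagate(node, u)  # re-propagate from the actual node
            tree.setdefault(child, node)
    return tree

tree = grow_tree((0.0, 0.0), sample_edge_bundle(2000))
print(len(tree), "nodes reached")
```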
This paper presents a hybrid online Partially Observable Markov Decision Process (POMDP) planning system that addresses the problem of autonomous navigation in the presence of multi-modal uncertainty introduced by other agents in the environment. As a particular example, we consider autonomous navigation in dense crowds of pedestrians and among obstacles. Popular approaches to this problem first generate a path using a complete planner (e.g., Hybrid A*) with ad hoc assumptions about uncertainty, then use online tree-based POMDP solvers to reason about uncertainty while controlling only a limited aspect of the problem (i.e., speed along the path). We present a more capable and responsive real-time approach that enables the POMDP planner to control more degrees of freedom (e.g., both speed and heading) to achieve more flexible and efficient solutions. This modification greatly extends the region of the state space that the POMDP planner must reason over, significantly increasing the importance of finding effective roll-out policies within the limited computational budget that real-time control affords. Our key insight is to use multi-query motion planning techniques (e.g., Probabilistic Roadmaps or the Fast Marching Method) as priors for rapidly generating efficient roll-out policies for every state that the POMDP planning tree might reach during its limited-horizon search. The proposed approach generates trajectories that are safe and significantly more efficient than the previous approach, even in densely crowded dynamic environments with long planning horizons.
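The roll-out idea lends itself to a small illustration. The sketch below is an assumption-laden stand-in, not the authors' system: it precomputes a cost-to-go field on a toy occupancy grid with BFS (standing in for a Probabilistic Roadmap or the Fast Marching Method) and uses greedy descent on that field as a cheap roll-out policy.

```python
from collections import deque

def cost_to_go(grid, goal):
    """BFS distances from the goal over free cells (grid[y][x] == 0 is free)."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

def rollout_policy(state, dist):
    """Greedy roll-out step: move to the neighbor with the lowest cost-to-go."""
    x, y = state
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    reachable = [n for n in neighbors if n in dist]
    return min(reachable, key=dist.get) if reachable else state

grid = [[0, 0, 0],
        [0, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
dist = cost_to_go(grid, goal=(2, 2))
print(rollout_policy((0, 0), dist))  # steps toward the goal around the obstacle
```

Because the field is computed once per map rather than per query, it can price out a roll-out for any state the search tree happens to reach, which is what makes it usable inside a real-time POMDP planner.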
In this paper, we explore the possibility of search-based agents in games with resource-intensive forward models. We implemented a player agent in the Pommerman framework and pitted it against the baseline agent to measure its performance. We implemented a heuristic agent and improved it by enabling depth-limited tree search in specific gameplay moments, and we compared different node selection methods during depth-limited tree search. Our results show that depth-limited tree search remains viable when presented with inefficient forward models, and that an exploitation-driven selection method is the most efficient in this specific domain.
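As an illustration of searching under a tight forward-model budget, the sketch below implements a generic depth-limited, exploitation-driven (best-first) expansion with a cap on simulator calls. The toy model, action set, and constants are assumptions rather than details from the paper.

```python
import heapq
import itertools
import random

def forward_model(state, action):
    """Stand-in for an expensive simulator step."""
    return state + action, random.random()  # (next_state, reward)

def depth_limited_search(root, actions=(-1, 0, 1), max_depth=3, budget=50):
    """Best-first expansion: always grow the highest-valued node until the
    depth limit or the forward-model budget is exhausted."""
    counter = itertools.count()  # tie-breaker so the heap never compares states
    frontier = [(0.0, next(counter), root, 0)]  # (-cumulative reward, tie, state, depth)
    best_value = float("-inf")
    while frontier and budget > 0:
        neg_v, _, state, depth = heapq.heappop(frontier)  # most promising first
        best_value = max(best_value, -neg_v)
        if depth >= max_depth:
            continue
        for a in actions:  # each expansion spends one costly model call
            nxt, r = forward_model(state, a)
            budget -= 1
            heapq.heappush(frontier, (neg_v - r, next(counter), nxt, depth + 1))
            if budget == 0:
                break
    return best_value

print(depth_limited_search(root=0))
```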
Sequential decision-making under uncertainty is present in many important problems. Two popular approaches for tackling such problems are reinforcement learning and online search (e.g., Monte Carlo tree search). While the former learns a policy by interacting with the environment (typically before execution), the latter uses a generative model of the environment to sample promising action trajectories at decision time. Decision-making is particularly challenging in non-stationary environments, where the environment in which an agent operates can change over time. Both approaches have shortcomings in such settings: policies learned before execution become stale when the environment changes, and relearning takes both time and computational effort, while online search can return sub-optimal actions when allowed runtime is limited. In this paper, we introduce Policy-Augmented Monte Carlo Tree Search (PA-MCTS), which combines action-value estimates from an out-of-date policy with an online search using an up-to-date model of the environment. We prove theoretical results showing conditions under which PA-MCTS selects the one-step optimal action, and we bound the error accrued while following PA-MCTS as a policy. We compare and contrast our approach with AlphaZero, another hybrid planning approach, and Deep Q-Learning on several OpenAI Gym environments. Through extensive experiments, we show that under non-stationary settings with limited time constraints, PA-MCTS outperforms these baselines.
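The combination at the core of PA-MCTS can be sketched in a few lines of Python. The convex-combination form and the weight alpha below are illustrative assumptions; the paper gives the exact formulation and the accompanying error bounds.

```python
def pa_mcts_action(q_policy, q_search, actions, alpha=0.5):
    """Pick the action maximizing a blend of the stale policy's action
    values and the fresh online-search estimates."""
    def hybrid(a):
        return alpha * q_policy[a] + (1.0 - alpha) * q_search[a]
    return max(actions, key=hybrid)

# Usage: the stale policy prefers 'left', but the online search, run on an
# up-to-date model, has seen the environment change and prefers 'right'.
q_policy = {"left": 1.0, "right": 0.4}
q_search = {"left": 0.2, "right": 0.9}
print(pa_mcts_action(q_policy, q_search, ["left", "right"], alpha=0.3))  # -> right
```

Lowering alpha shifts trust toward the online search, which is the sensible regime when the environment has drifted since the policy was trained.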