Generalization is a central challenge for the deployment of reinforcement learning (RL) systems in the real world. In this paper, we show that the sequential structure of the RL problem necessitates new approaches to generalization beyond the well-studied techniques used in supervised learning. While supervised learning methods can generalize effectively without explicitly accounting for epistemic uncertainty, we describe why appropriate uncertainty handling can actually be essential in RL. We show that generalization to unseen test conditions from a limited number of training conditions induces a kind of implicit partial observability, effectively turning even fully-observed MDPs into POMDPs. Informed by this observation, we recast the problem of generalization in RL as solving the induced partially observed Markov decision process, which we call the epistemic POMDP. We demonstrate the failure modes of algorithms that do not appropriately handle this partial observability, and suggest a simple ensemble-based technique for approximately solving the partially observed problem. Empirically, we demonstrate that our simple algorithm derived from the epistemic POMDP achieves significant gains in generalization over current methods on the Procgen benchmark suite.
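The ensemble-based technique is only named in the abstract; as a rough illustration of the underlying idea (a minimal sketch with hypothetical interfaces, not the paper's exact method), an agent can act according to the average of the action distributions of several independently trained policies, so that it hedges over its epistemic uncertainty about which environment it is actually in:

```python
import numpy as np

class EnsemblePolicy:
    """Hypothetical sketch: combine an ensemble of policies by averaging
    their action distributions; the mixture hedges over epistemic
    uncertainty about which environment the agent is in."""

    def __init__(self, members):
        self.members = members  # callables: obs -> action-probability vector

    def act(self, obs, rng=None):
        rng = rng or np.random.default_rng()
        probs = np.mean([m(obs) for m in self.members], axis=0)
        probs = probs / probs.sum()  # renormalize against numerical drift
        return rng.choice(len(probs), p=probs)

# Two members that disagree on a 2-action problem: the ensemble commits
# fully to neither action, reflecting the remaining uncertainty.
policy = EnsemblePolicy([lambda obs: np.array([0.9, 0.1]),
                         lambda obs: np.array([0.2, 0.8])])
action = policy.act(obs=None)
```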
Reinforcement Learning from Delayed Observations via World Models
In standard reinforcement learning settings, agents typically assume immediate feedback about the effects of their actions after taking them. However, in practice this assumption often fails to hold due to physical constraints, which can significantly degrade the performance of learning algorithms. In this paper, we address observation delays in partially observable environments. We propose leveraging world models, which have shown success in integrating past observations and learning dynamics, to handle observation delays. By reducing delayed POMDPs to delayed MDPs with world models, our methods can effectively handle partial observability, where existing approaches achieve sub-optimal performance or degrade quickly as observability decreases. Experiments suggest that one of our methods can outperform a naive model-based approach by up to 250%. Moreover, we evaluate our methods in visual delayed environments, showcasing, for the first time, delay-aware reinforcement learning for continuous control with visual observations.
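As a loose illustration of the reduction described above (a sketch under assumed `encode`/`predict` interfaces; the paper's actual world-model architecture and classes are not shown here), the agent can encode the delayed observation into a latent state, roll the learned dynamics forward through the actions issued in the meantime, and act on the predicted current state:

```python
from collections import deque
import numpy as np

class DelayCompensatingAgent:
    """Sketch of the delayed-POMDP-to-delayed-MDP reduction: encode the
    (d-step-old) observation with a world model, imagine forward through
    the actions already issued but not yet reflected in the observation,
    then let the policy act on the predicted present state."""

    def __init__(self, world_model, policy, delay):
        self.wm = world_model
        self.policy = policy
        self.pending = deque(maxlen=delay)  # actions taken since the obs was emitted

    def step(self, delayed_obs):
        latent = self.wm.encode(delayed_obs)   # stale information
        for a in self.pending:                 # roll dynamics through the delay
            latent = self.wm.predict(latent, a)
        action = self.policy(latent)           # act on the imagined present
        self.pending.append(action)
        return action

# Toy world model: identity encoder, additive "dynamics".
class DummyWM:
    def encode(self, obs): return np.asarray(obs, dtype=float)
    def predict(self, z, a): return z + a

agent = DelayCompensatingAgent(DummyWM(), policy=lambda z: -0.1 * z, delay=3)
print(agent.step([0.5]))
```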
- Award ID(s):
- 2321786
- PAR ID:
- 10533604
- Publisher / Repository:
- CoRR abs/2403.12309
- Date Published:
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Recent studies in reinforcement learning (RL) have made significant progress by leveraging function approximation to alleviate the sample-complexity hurdle and improve performance. Despite this success, existing provably efficient algorithms typically rely on immediate feedback being available upon taking actions. Failing to account for delayed observations can significantly degrade the performance of real-world systems, as the regret blows up with the delay. In this work, we tackle the challenge of delayed feedback in RL with linear function approximation by employing posterior sampling, which has been shown to empirically outperform the popular UCB algorithms in a wide range of regimes. We first introduce Delayed-PSVI, an optimistic value-based algorithm that effectively explores the value-function space via noise perturbation with posterior sampling. We provide the first analysis of posterior sampling algorithms with delayed feedback in RL and show that our algorithm achieves $\widetilde{O}(\sqrt{d^3H^3T} + d^2H^2\,\mathbb{E}[\tau])$ worst-case regret in the presence of unknown stochastic delays, where $\mathbb{E}[\tau]$ is the expected delay. To further improve computational efficiency and expand applicability to high-dimensional RL problems, we incorporate a gradient-based approximate sampling scheme via Langevin dynamics into Delayed-LPSVI, which maintains the same order-optimal regret guarantee at $\widetilde{O}(dHK)$ computational cost. Empirical evaluations demonstrate the statistical and computational efficacy of our algorithms.
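The core noise-perturbation step can be sketched as follows. This is a minimal illustration of randomized least-squares value iteration in the spirit of posterior sampling, not the paper's exact algorithm, constants, or delay machinery; the function name and scaling are assumptions:

```python
import numpy as np

def perturbed_lsvi_step(Phi, targets, lam=1.0, sigma=1.0, rng=None):
    """One noise-perturbed least-squares value-iteration update.
    Phi: (n, d) features of transitions whose (possibly delayed) feedback
    has arrived; targets: (n,) regression targets r + max_a Q_next.
    Returns a sampled weight vector for the Q-function."""
    rng = rng or np.random.default_rng()
    d = Phi.shape[1]
    Lambda = Phi.T @ Phi + lam * np.eye(d)            # regularized Gram matrix
    w_hat = np.linalg.solve(Lambda, Phi.T @ targets)  # ridge regression estimate
    # Posterior-style perturbation: variance shrinks along well-covered
    # feature directions, so exploration concentrates where data is sparse.
    return rng.multivariate_normal(w_hat, sigma**2 * np.linalg.inv(Lambda))

# Toy call with random data.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(32, 4))
w_tilde = perturbed_lsvi_step(Phi, Phi @ np.ones(4) + rng.normal(size=32))
```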
-
Learning safe solutions is an important but challenging problem in multi-agent reinforcement learning (MARL). Shielded reinforcement learning is one approach for preventing agents from choosing unsafe actions. Current shielded reinforcement learning methods for MARL make strong assumptions about communication and full observability. In this work, we extend the formalization of the shielded reinforcement learning problem to a decentralized multi-agent setting. We then present an algorithm for decomposing a centralized shield, allowing shields to be used in such decentralized, communication-free environments. Our results show that agents equipped with decentralized shields perform comparably to agents with centralized shields in several tasks, allowing shielding to be used, for the first time, in environments with decentralized training and execution.
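For context, shielded action selection itself can be sketched as below. This is a generic illustration of the shielding pattern, not the paper's decomposition algorithm: each agent ranks actions by its policy's preference and takes the best one its purely local shield verifies as safe, so no communication is required.

```python
def shielded_action(preference, is_safe, obs, actions):
    """Try actions in the agent's order of preference and return the
    first one the local shield accepts (generic sketch)."""
    for a in sorted(actions, key=lambda a: -preference(obs, a)):
        if is_safe(obs, a):  # local safety check, no communication
            return a
    raise RuntimeError("shield blocked every action in this state")

# Toy usage: the shield forbids action 2 even though it is preferred.
act = shielded_action(
    preference=lambda obs, a: [0.1, 0.3, 0.6][a],
    is_safe=lambda obs, a: a != 2,
    obs=None,
    actions=[0, 1, 2],
)
assert act == 1
```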
-
Introduction: As robot teleoperation increasingly becomes integral in executing tasks in distant, hazardous, or inaccessible environments, operational delays remain a significant obstacle. These delays, inherent in signal transmission and processing, adversely affect operator performance, particularly in tasks requiring precision and timeliness. While current research has made strides in mitigating these delays through advanced control strategies and training methods, a crucial gap persists in understanding the neurofunctional impacts of these delays and the efficacy of countermeasures from a cognitive perspective.
Methods: This study addresses the gap by leveraging functional Near-Infrared Spectroscopy (fNIRS) to examine the neurofunctional implications of simulated haptic feedback on cognitive activity and motor coordination under delayed conditions. In a human-subject experiment (N = 41), sensory feedback was manipulated to observe its influences on various brain regions of interest (ROIs) during teleoperation tasks. The fNIRS data provided a detailed assessment of cerebral activity, particularly in ROIs implicated in time perception and the execution of precise movements.
Results: Our results reveal that the anchoring condition, which provided immediate simulated haptic feedback with a delayed visual cue, significantly optimized neural functions related to time perception and motor coordination. This condition also improved motor performance compared to the asynchronous condition, where visual and haptic feedback were misaligned.
Discussion: These findings provide empirical evidence about the neurofunctional basis of the enhanced motor performance with simulated synthetic force feedback in the presence of teleoperation delays. The study highlights the potential for immediate haptic feedback to mitigate the adverse effects of operational delays, thereby improving the efficacy of teleoperation in critical applications.
-
Manipulating deformable objects has long been a challenge in robotics due to their high-dimensional state representations and complex dynamics. Recent success in deep reinforcement learning provides a promising direction for learning to manipulate deformable objects with data-driven methods. However, existing reinforcement learning benchmarks only cover tasks with direct state observability and simple low-dimensional dynamics, or relatively simple image-based environments such as those with rigid objects. In this paper, we present SoftGym, a set of open-source simulated benchmarks for manipulating deformable objects, with a standard OpenAI Gym API and a Python interface for creating new environments. Our benchmark will enable reproducible research in this important area. Further, we evaluate a variety of algorithms on these tasks and highlight challenges for reinforcement learning algorithms, including dealing with a state representation that has a high intrinsic dimensionality and is partially observable. The experiments and analysis indicate the strengths and limitations of existing methods in the context of deformable object manipulation, which can help point the way forward for future methods development.
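Since the benchmark advertises the standard OpenAI Gym API, interaction presumably follows the usual loop. The snippet below is an illustrative sketch only: the environment id "ClothFold-v0" and the idea that importing the package registers its environments are assumptions, not verified against the released code.

```python
# Generic interaction loop in the standard (pre-0.26) OpenAI Gym API.
import gym
import softgym  # assumed to register the SoftGym environments with gym

env = gym.make("ClothFold-v0")          # illustrative id, not verified
obs = env.reset()
for _ in range(200):
    action = env.action_space.sample()  # placeholder random policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```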