Planning with Uncertainty: Deep Exploration in Model-Based Reinforcement Learning

10/21/2022
by Yaniv Oren, et al.

Deep model-based Reinforcement Learning (RL) has shown super-human performance in many challenging domains. However, low sample efficiency and limited exploration remain leading obstacles in the field. In this paper, we demonstrate deep exploration in model-based RL by incorporating epistemic uncertainty directly into the planning tree, circumventing the standard approach of propagating uncertainty through value learning. We evaluate this approach with the state-of-the-art model-based RL algorithm MuZero, and extend its training process to stabilize learning from explicitly exploratory trajectories. In our experiments, planning with uncertainty achieves effective deep exploration with standard uncertainty estimation mechanisms, and with it significant gains in sample efficiency.
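As a rough illustration of the idea (not the authors' implementation), a PUCT-style selection rule of the kind used in MuZero could be made optimistic by adding an epistemic uncertainty bonus at each tree node. In the sketch below, the per-node field `epistemic_std` and the weight `beta` are hypothetical names chosen for the example; how the uncertainty estimate itself is produced (e.g., by an ensemble) is left out.

```python
# Minimal sketch: PUCT child selection with an epistemic uncertainty bonus.
# `epistemic_std` and `beta` are illustrative assumptions, not the paper's API.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float                 # policy prior P(s, a) from the learned model
    visit_count: int = 0
    value_sum: float = 0.0
    epistemic_std: float = 0.0   # per-node epistemic uncertainty estimate
    children: dict = field(default_factory=dict)

    def value(self) -> float:
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.25, beta: float = 1.0):
    """Pick the child maximizing PUCT score plus an optimism bonus.

    The `beta * epistemic_std` term is the key change: rather than waiting
    for uncertainty to propagate through learned values, the planner is made
    optimistic at the node itself, steering search toward unfamiliar states.
    """
    total_visits = sum(c.visit_count for c in node.children.values())

    def score(child: Node) -> float:
        exploration = (c_puct * child.prior
                       * math.sqrt(total_visits) / (1 + child.visit_count))
        return child.value() + beta * child.epistemic_std + exploration

    return max(node.children.items(), key=lambda kv: score(kv[1]))
```

Setting `beta = 0` recovers the usual uncertainty-free selection rule, so the bonus can be ablated or annealed independently of the rest of the search.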
