On Optimism in Model-Based Reinforcement Learning

06/21/2020
by Aldo Pacchiano, et al.

The principle of optimism in the face of uncertainty is prevalent throughout sequential decision making problems such as multi-armed bandits and reinforcement learning (RL), often coming with strong theoretical guarantees. However, it remains a challenge to scale these approaches to the deep RL paradigm, which has received a great deal of attention in recent years. In this paper, we introduce a tractable approach to optimism via noise-augmented Markov Decision Processes (MDPs), which we show can obtain a competitive regret bound: Õ( |S|H√(|S||A| T ) ) when augmenting using Gaussian noise, where T is the total number of environment steps. This tractability allows us to apply our approach to the deep RL setting, where we rigorously evaluate the key factors for the success of optimistic model-based RL algorithms, bridging the gap between theory and practice.
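The noise-augmentation idea in the abstract can be sketched in tabular form: plan on a learned model whose rewards are perturbed by Gaussian noise, so that uncertain (rarely visited) state-action pairs can look optimistically rewarding. This is a minimal illustrative sketch, not the paper's exact algorithm; the function names and the count-based noise scaling are assumptions made for illustration.

```python
import numpy as np

def value_iteration(P, R, H):
    """Finite-horizon value iteration on a tabular MDP.

    P: (S, A, S) transition probabilities, R: (S, A) rewards, H: horizon.
    Returns the H-step optimal value for each state.
    """
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(H):
        # Q[s, a] = R[s, a] + sum_{s'} P[s, a, s'] * V[s']
        Q = R + P @ V
        V = Q.max(axis=1)
    return V

def noise_augmented_plan(P_hat, R_hat, counts, H, sigma=1.0, seed=None):
    """Plan on a noise-augmented MDP (illustrative sketch).

    Gaussian noise is added to the estimated rewards, with a scale that
    shrinks as visit counts grow, so uncertainty induces optimism early on.
    `counts` has shape (S, A) and holds visit counts per state-action pair.
    """
    rng = np.random.default_rng(seed)
    noise = sigma * rng.standard_normal(R_hat.shape) / np.sqrt(np.maximum(counts, 1))
    return value_iteration(P_hat, R_hat + noise, H)
```

In an online loop, one would re-sample the noise and re-plan at the start of each episode, act greedily with respect to the resulting values, and update the model estimates and counts from the observed transitions.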
