Near-optimal Reinforcement Learning using Bayesian Quantiles

06/20/2019
by Aristide Tossou, et al.

We study model-based reinforcement learning in finite communicating Markov Decision Processes (MDPs). Algorithms in this setting have been developed along two lines: the first, which typically provides frequentist performance guarantees, uses optimism in the face of uncertainty as its guiding algorithmic principle; the second is based on Bayesian reasoning, combined with posterior sampling and Bayesian guarantees. In this paper, we develop a conceptually simple algorithm, Bayes-UCRL, that combines the benefits of both approaches to achieve state-of-the-art performance for finite communicating MDPs. In particular, we use a Bayesian prior, similarly to posterior sampling. However, instead of sampling the MDP, we construct an optimistic MDP using the quantiles of the Bayesian prior. We show that this technique enjoys a high-probability worst-case regret of order Õ(√(DSAT)). Experiments in a diverse set of environments show that our algorithm outperforms previous methods.
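The core idea, building an optimistic MDP from quantiles of the Bayesian distribution instead of drawing a single sample as in posterior sampling (PSRL), can be illustrated with conjugate models. Below is a minimal sketch assuming Beta posteriors over mean rewards and Dirichlet posteriors over transition probabilities; the function names, the quantile level, and the uniform prior counts are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import beta

# Sketch of quantile-based optimism with conjugate priors.
# Assumptions (not from the paper): Bernoulli rewards with a Beta(1, 1)
# prior, categorical transitions with a Dirichlet(1, ..., 1) prior,
# and a fixed upper-quantile level q.

def reward_upper_quantile(successes, failures, q=0.95):
    """Upper q-quantile of the Beta(1 + successes, 1 + failures) posterior
    over the mean reward of one (state, action) pair."""
    return beta.ppf(q, 1.0 + successes, 1.0 + failures)

def transition_upper_quantiles(counts, q=0.95):
    """Per-next-state upper quantiles of a Dirichlet(1 + counts) posterior.

    The marginal of component i of Dirichlet(alpha) is
    Beta(alpha_i, sum(alpha) - alpha_i).  The resulting vector is
    optimistic and generally sums to more than 1, so it defines
    per-component upper bounds (a confidence set) rather than a single
    transition distribution."""
    alpha = 1.0 + np.asarray(counts, dtype=float)
    return beta.ppf(q, alpha, alpha.sum() - alpha)

# Usage, contrasted with posterior sampling, which would instead draw
# one sample from the same posteriors.
rng = np.random.default_rng(0)
reward_counts = (3, 1)                  # 3 observed reward successes, 1 failure
trans_counts = np.array([5., 2., 0.])   # observed visits to each next state

print(reward_upper_quantile(*reward_counts))      # optimistic mean reward
print(transition_upper_quantiles(trans_counts))   # optimistic transition bounds
print(rng.dirichlet(1.0 + trans_counts))          # what PSRL would sample instead
```

In a full algorithm these optimistic quantities would feed into a planning step (e.g. extended value iteration) to pick the policy for the next episode; the sketch only shows how the quantiles replace posterior samples.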
