Sample Efficient Policy Gradient Methods with Recursive Variance Reduction

09/18/2019
by Pan Xu, et al.

Improving the sample efficiency of reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which requires only O(1/ϵ^{3/2}) episodes to find an ϵ-approximate stationary point of the nonconcave performance function J(θ) (i.e., a θ such that ‖∇J(θ)‖_2^2 ≤ ϵ). This sample complexity improves on the best known result O(1/ϵ^{5/3}) for policy gradient algorithms by a factor of O(1/ϵ^{1/6}). In addition, we propose a variant of SRVR-PG with parameter exploration, which samples the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.
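The abstract does not spell out the estimator, but the recursive variance reduction it refers to is a SARAH/SPIDER-style update: a large reference batch estimates the policy gradient at an epoch snapshot, and small inner batches recursively correct it with the difference of gradient estimates at consecutive iterates, importance-weighted because the trajectories were sampled under the current policy. The sketch below illustrates this scheme on a toy one-step Gaussian-policy problem; all names, batch sizes, and step sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(a):
    # Toy one-step "episode": reward is maximized at a = 2.
    return -(a - 2.0) ** 2

def score_grad(theta, a):
    # Per-sample REINFORCE estimator for the Gaussian policy N(theta, 1):
    # grad log pi(a | theta) = a - theta.
    return reward(a) * (a - theta)

def srvr_pg_sketch(theta0, epochs=15, inner=5, N=1000, B=100, lr=0.05):
    # Hypothetical simplified variant of a recursively variance-reduced
    # policy gradient loop (SARAH-style), not the paper's exact algorithm.
    theta = theta0
    for _ in range(epochs):
        # Large reference batch at the epoch snapshot.
        a = theta + rng.standard_normal(N)
        v = score_grad(theta, a).mean()
        theta_prev, theta = theta, theta + lr * v
        for _ in range(inner):
            # Small-batch recursive correction; the importance weight w
            # lets the same trajectories (sampled under the current policy)
            # also estimate the gradient at the previous iterate.
            a = theta + rng.standard_normal(B)
            w = np.exp(0.5 * ((a - theta) ** 2 - (a - theta_prev) ** 2))
            v = v + (score_grad(theta, a) - w * score_grad(theta_prev, a)).mean()
            theta_prev, theta = theta, theta + lr * v
    return theta
```

Because the correction term has small variance when consecutive iterates are close, the inner loop can use small batches, which is the source of the sample-complexity gain the abstract describes.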
