Adaptive Momentum-Based Policy Gradient with Second-Order Information

05/17/2022
by Saber Salehkaleybar, et al.

Variance-reduced gradient estimators for policy gradient methods have been a major focus of reinforcement learning research in recent years, as they accelerate the estimation process. We propose a variance-reduced policy gradient method, called SGDHess-PG, which incorporates second-order information into stochastic gradient descent (SGD) with momentum and an adaptive learning rate. The SGDHess-PG algorithm can reach an ϵ-approximate first-order stationary point with Õ(ϵ^-3) trajectories while using a batch size of O(1) at each iteration. Unlike most previous work, the proposed algorithm does not require importance sampling techniques, which can undermine the benefit of variance reduction. Our extensive experiments on various control tasks show the effectiveness of the proposed algorithm and its advantage over the state of the art in practice.
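The abstract's key idea, using a Hessian-vector product to transport the old momentum estimate to the new iterate instead of importance sampling, can be illustrated on a toy stochastic quadratic. This is a minimal sketch, not the paper's implementation: the objective, step size `eta`, and momentum weight `a` are all illustrative choices, and the policy-gradient setting is replaced by a generic stochastic optimization problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic objective: f(x) = E[0.5 x^T A x - b^T x + noise]
A = np.diag([1.0, 4.0])
b = np.array([1.0, -2.0])

def stoch_grad(x):
    # Unbiased gradient sample with additive Gaussian noise
    return A @ x - b + 0.1 * rng.standard_normal(2)

def hess_vec(x, v):
    # Hessian-vector product (the Hessian is constant here;
    # in practice this would be a stochastic estimate)
    return A @ v

x = np.array([3.0, 3.0])
d = stoch_grad(x)   # momentum-style running estimate of the gradient
eta = 0.1           # illustrative step size (the paper uses an adaptive rule)
a = 0.1             # illustrative momentum mixing weight

for _ in range(500):
    x_prev = x.copy()
    x = x - eta * d
    g = stoch_grad(x)
    # Second-order correction: transport the previous estimate to the
    # new point via a Hessian-vector product, then blend in the fresh
    # gradient sample -- no importance-sampling weights are needed.
    d = (1 - a) * (d + hess_vec(x, x - x_prev)) + a * g

print(np.linalg.norm(A @ x - b))  # gradient norm at the final iterate
```

The correction term `hess_vec(x, x - x_prev)` plays the role that an importance-sampled gradient difference plays in estimators such as SPIDER/STORM variants, which is how the variance of `d` stays controlled without reweighting trajectories.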
