Linear Convergence of Entropy-Regularized Natural Policy Gradient with Linear Function Approximation

06/08/2021
by   Semih Cayci, et al.

Natural policy gradient (NPG) methods with function approximation achieve impressive empirical success in reinforcement learning problems with large state-action spaces. However, theoretical understanding of their convergence behavior remains limited in the function approximation setting. In this paper, we perform a finite-time analysis of NPG with linear function approximation and softmax parameterization, and prove for the first time that the widely used entropy regularization method, which encourages exploration, leads to a linear convergence rate. We adopt a Lyapunov drift analysis to prove the convergence results and to explain the effectiveness of entropy regularization in improving convergence rates.
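For context, the entropy-regularized objective and the NPG update typically take the following standard form; this is a generic sketch, and the paper's exact setup (discounting, step sizes, approximation assumptions) may differ:

\[
J_\lambda(\theta) \;=\; \mathbb{E}_{\pi_\theta}\!\left[\sum_{t \ge 0} \gamma^t \big( r(s_t, a_t) + \lambda\, \mathcal{H}\!\left(\pi_\theta(\cdot \mid s_t)\right) \big)\right],
\]
\[
\theta_{k+1} \;=\; \theta_k + \eta\, F(\theta_k)^{\dagger} \nabla_\theta J_\lambda(\theta_k),
\qquad
F(\theta) \;=\; \mathbb{E}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s)^{\top}\right],
\]

where, under softmax parameterization with linear function approximation, \(\pi_\theta(a \mid s) \propto \exp\!\left(\theta^{\top} \varphi(s, a)\right)\) for a feature map \(\varphi\).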

