Low-Switching Policy Gradient with Exploration via Online Sensitivity Sampling

06/15/2023
by Yunfan Li, et al.

Policy optimization methods are powerful algorithms in Reinforcement Learning (RL) for their flexibility in dealing with policy parameterization and their ability to handle model misspecification. However, these methods usually suffer from slow convergence rates and poor sample complexity, so it is important to design provably sample-efficient algorithms for policy optimization. Yet, recent advances on this problem have only been successful in tabular and linear settings, whose benign structures do not generalize to non-linearly parameterized policies. In this paper, we address this problem by leveraging recent advances in value-based algorithms, including bounded eluder dimension and online sensitivity sampling, to design a low-switching, sample-efficient policy optimization algorithm, LPO, with general non-linear function approximation. We show that our algorithm obtains an ε-optimal policy with only O(poly(d)/ε^3) samples, where ε is the suboptimality gap and d is a complexity measure of the function class approximating the policy. This drastically improves on the previously best-known sample complexity bound for policy optimization algorithms, O(poly(d)/ε^8). Moreover, we empirically test our theory with deep neural networks to show the benefits of the theoretical inspiration.
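The abstract does not spell out the algorithmic details, but the low-switching idea can be illustrated generically. The sketch below shows online sensitivity sampling in a simplified setting with linear features (an assumption made here for illustration only; the class name SensitivitySampler, the leverage-style score, and all parameter choices are hypothetical and not taken from the paper): each incoming data point is retained with probability proportional to an online estimate of its sensitivity, and a policy switch is triggered only when a point is actually retained, which keeps the number of switches small.

```python
import numpy as np

# Illustrative sketch (not the paper's LPO algorithm): online sensitivity
# sampling with linear features. A point's sensitivity is approximated by a
# leverage-style score against the data retained so far; low-sensitivity
# points are usually discarded, so policy switches are rare.

class SensitivitySampler:
    def __init__(self, dim, lam=1.0, oversample=2.0):
        self.cov = lam * np.eye(dim)        # regularized covariance of retained features
        self.cov_inv = np.linalg.inv(self.cov)
        self.oversample = oversample        # constant factor on the sampling probability
        self.coreset = []                   # retained (feature, importance weight) pairs

    def sensitivity(self, phi):
        # Leverage-style score phi^T Cov^{-1} phi, clipped to [0, 1].
        return float(min(1.0, phi @ self.cov_inv @ phi))

    def observe(self, phi, rng):
        """Process one feature vector; return True if a policy switch is triggered."""
        p = min(1.0, self.oversample * self.sensitivity(phi))
        if rng.random() < p:
            # Keep the point with weight 1/p so the coreset remains an
            # unbiased summary of the data stream.
            self.coreset.append((phi, 1.0 / p))
            self.cov += np.outer(phi, phi)
            self.cov_inv = np.linalg.inv(self.cov)
            return True                      # rare event: update (switch) the policy
        return False


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sampler = SensitivitySampler(dim=8)
    switches = sum(sampler.observe(rng.normal(size=8), rng) for _ in range(5000))
    print(f"retained {len(sampler.coreset)} of 5000 points; {switches} policy switches")
```

In this toy run most points have low sensitivity once the covariance is well conditioned, so only a small fraction of observations trigger a switch, which is the intuition behind the low-switching guarantee described in the abstract.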
