Nearly Minimax Optimal Reinforcement Learning with Linear Function Approximation

06/23/2022
by Pihe Hu, et al.

We study reinforcement learning with linear function approximation, where the transition probability and reward functions are linear with respect to a feature mapping ϕ(s,a). Specifically, we consider the episodic inhomogeneous linear Markov Decision Process (MDP) and propose a novel computationally efficient algorithm, LSVI-UCB^+, which achieves an Õ(Hd√(T)) regret bound, where H is the episode length, d is the feature dimension, and T is the number of steps. LSVI-UCB^+ builds on weighted ridge regression and upper-confidence value iteration with a Bernstein-type exploration bonus. Our statistical results are obtained with novel analytical tools, including a new Bernstein self-normalized bound with conservatism on elliptical potentials and a refined analysis of the correction term. To the best of our knowledge, this is the first minimax optimal algorithm for linear MDPs up to logarithmic factors, closing the √(Hd) gap between the best known upper bound of Õ(√(H^3d^3T)) in <cit.> and the lower bound of Ω(Hd√(T)) for linear MDPs.

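As a rough illustration of the two ingredients named in the abstract, the Python sketch below shows generic variance-weighted ridge regression and an optimistic value estimate with an elliptical-norm bonus. The function names, the placeholder weights, the bonus coefficient beta, and the toy data are illustrative assumptions; this is not the paper's LSVI-UCB^+ construction or its Bernstein-type bonus.

import numpy as np


def weighted_ridge_regression(phis, targets, weights, lam=1.0):
    """Variance-weighted ridge regression:
    minimize  sum_k weights[k] * (phis[k] @ w - targets[k])**2 + lam * ||w||^2.

    phis    : (K, d) feature vectors phi(s_k, a_k)
    targets : (K,)   regression targets (e.g., backed-up value estimates)
    weights : (K,)   per-sample weights (e.g., inverse-variance estimates)
    Returns the estimate w_hat and the weighted Gram matrix Lambda.
    """
    d = phis.shape[1]
    Lambda = lam * np.eye(d)                 # regularized, weighted Gram matrix
    b = np.zeros(d)
    for phi, y, w in zip(phis, targets, weights):
        Lambda += w * np.outer(phi, phi)
        b += w * y * phi
    w_hat = np.linalg.solve(Lambda, b)
    return w_hat, Lambda


def optimistic_q(phi, w_hat, Lambda, beta):
    """Optimistic value: linear estimate plus an elliptical-norm bonus
    beta * sqrt(phi @ Lambda^{-1} @ phi)."""
    bonus = beta * np.sqrt(phi @ np.linalg.solve(Lambda, phi))
    return phi @ w_hat + bonus


# Toy usage: fit on random data, then query one optimistic value.
rng = np.random.default_rng(0)
phis = rng.normal(size=(50, 4))
targets = phis @ np.array([1.0, -0.5, 0.2, 0.0]) + 0.1 * rng.normal(size=50)
weights = np.ones(50)                        # placeholder weights, assumed here
w_hat, Lambda = weighted_ridge_regression(phis, targets, weights)
print(optimistic_q(phis[0], w_hat, Lambda, beta=0.5))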