Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation

01/30/2023
by Uri Sherman, et al.

We study reinforcement learning with linear function approximation and adversarially changing cost functions, a setup that has mostly been considered under simplifying assumptions such as full-information feedback or exploratory conditions. We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback, featuring a combination of mirror descent and least-squares policy evaluation in an auxiliary MDP used to compute exploration bonuses. Our algorithm obtains an O(K^{6/7}) regret bound, improving significantly over the previous state of the art of O(K^{14/15}) in this setting. In addition, we present a version of the same algorithm under the assumption that a simulator of the environment is available to the learner (but otherwise no exploratory assumptions are made), and prove that it obtains a state-of-the-art regret of O(K^{2/3}).
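To make the policy optimization component concrete, the following is a minimal sketch of a single mirror-descent policy update of the kind the abstract refers to: an exponentiated-gradient step over an action distribution, driven by estimated Q-values with an exploration bonus subtracted from the cost. The function name, signature, and the choice of entropy-based mirror map are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def mirror_descent_step(policy, q_hat, bonus, eta):
    """One mirror-descent (exponentiated-gradient) policy update.

    policy: current action distribution, shape [A]
    q_hat:  estimated Q-values of the adversarial cost, shape [A]
    bonus:  exploration bonus per action, shape [A] (subtracted from cost)
    eta:    step size

    NOTE: a generic sketch under the entropy mirror map; the paper's
    actual update and bonus construction may differ.
    """
    # Multiplicative-weights form: lower (bonus-adjusted) cost gains mass.
    logits = np.log(policy) - eta * (q_hat - bonus)
    logits -= logits.max()  # shift for numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum()

# Toy usage: start uniform over 3 actions; costs favor action 0.
pi = np.ones(3) / 3
pi = mirror_descent_step(pi, q_hat=np.array([0.1, 0.5, 0.9]),
                         bonus=np.zeros(3), eta=1.0)
```

In the full algorithm, `q_hat` would come from least-squares policy evaluation under linear function approximation, and the bonus would be computed in the auxiliary MDP to drive exploration.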
