Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs

11/05/2021
by Yeoneung Kim, et al.

In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees, yet it is challenging because variances are often not known a priori. Recently, considerable progress has been made by Zhang et al. (2021), who obtain a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that significantly improve their regret bounds. For linear bandits, we achieve Õ(d^1.5 √(∑_{k=1}^K σ_k^2) + d^2), where d is the dimension of the features, K is the time horizon, σ_k^2 is the noise variance at time step k, and Õ hides polylogarithmic factors; this is an improvement by a factor of d^3. For linear mixture MDPs, we achieve a horizon-free regret bound of Õ(d^1.5 √K + d^3), where d is the number of base models and K is the number of episodes. This is an improvement by a factor of d^3 in the leading term and d^6 in the lower-order term. Our analysis critically relies on a novel elliptical potential 'count' lemma. This lemma allows a peeling-based regret analysis, which can be of independent interest.
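The abstract does not state the count lemma itself, but as informal context, the classical elliptical potential lemma (e.g., Abbasi-Yadkori et al., 2011) and the simple counting corollary it implies are sketched below in LaTeX. The notation x_k, V_k, λ, and L is illustrative; the paper's actual lemma is a sharper variant tailored to the peeling argument.

% Standard elliptical potential lemma (illustrative; not the paper's statement).
% Let x_1, ..., x_K \in \mathbb{R}^d with \|x_k\|_2 \le L and V_k = \lambda I + \sum_{s=1}^{k} x_s x_s^\top.
\[
  \sum_{k=1}^{K} \min\bigl\{1,\ \|x_k\|_{V_{k-1}^{-1}}^2\bigr\}
  \;\le\; 2 \log \frac{\det V_K}{\det(\lambda I)}
  \;\le\; 2 d \log\!\Bigl(1 + \frac{K L^2}{d \lambda}\Bigr).
\]
% Counting corollary: each round with \|x_k\|_{V_{k-1}^{-1}}^2 \ge 1 contributes exactly 1 to the sum,
% so the number of such rounds is at most 2 d \log(1 + K L^2 / (d \lambda)).
% A 'count' lemma of this flavor bounds how often the elliptical norm can be large,
% which is what enables a peeling-based regret analysis over rounds grouped by that norm.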

