An Experimental Design Approach for Regret Minimization in Logistic Bandits

02/04/2022
by   Blake Mason, et al.

In this work we consider the problem of regret minimization for logistic bandits. The main challenge of logistic bandits is reducing the dependence on a potentially large problem-dependent constant κ that can at worst scale exponentially with the norm of the unknown parameter θ_∗. Abeille et al. (2021) applied self-concordance of the logistic function to remove this worst-case dependence, providing regret guarantees of the form O(d log^2(κ) √(μ̇T) log(|𝒳|)), where d is the dimensionality, T is the time horizon, and μ̇ is the variance of the best arm. This work improves upon that bound in the fixed-arm setting by employing an experimental design procedure that achieves a minimax regret of O(√(d μ̇ T log(|𝒳|))). In fact, we obtain a tighter instance-dependent (i.e., gap-dependent) regret bound, the first of its kind for logistic bandits. We also propose a new warmup sampling algorithm that can dramatically reduce the lower-order term of the regret in general, and prove that for some instances it reduces the lower-order term's dependence on κ to log^2(κ). Finally, we discuss the impact of the bias of the MLE on the logistic bandit problem, providing an example where the d^2 lower-order regret (compared with d for linear bandits) may not be improvable as long as the MLE is used, and showing how bias-corrected estimators may bring it closer to d.
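To make the challenge concrete: κ is commonly defined as the worst-case inverse variance of the logistic reward over the arm set, κ = max_{x∈𝒳} 1/μ̇(x^T θ_∗), where μ̇(z) = μ(z)(1 − μ(z)) and μ is the sigmoid. The sketch below (an illustration of this definition, not code from the paper) shows how κ blows up roughly like exp(‖θ_∗‖):

```python
import math

def mu(z):
    """Logistic mean function (sigmoid)."""
    return 1.0 / (1.0 + math.exp(-z))

def mu_dot(z):
    """Derivative of mu: the variance of a Bernoulli(mu(z)) reward."""
    p = mu(z)
    return p * (1.0 - p)

def kappa(arms, theta_star):
    """Problem-dependent constant: worst-case inverse variance over the arm set."""
    return max(
        1.0 / mu_dot(sum(a * t for a, t in zip(x, theta_star)))
        for x in arms
    )

# With unit-norm arms, kappa grows roughly like exp(||theta_star||):
arms = [(1.0,), (-1.0,), (0.5,)]
for norm in (1.0, 4.0, 8.0):
    print(norm, kappa(arms, (norm,)))
```

For ‖θ_∗‖ = 8 this already yields κ ≈ e^8 ≈ 3000, which is exactly the dependence the self-concordance analysis and the experimental-design approach aim to avoid in the leading term.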
