Double Doubly Robust Thompson Sampling for Generalized Linear Contextual Bandits

09/15/2022
by   Wonyoung Kim, et al.

We propose a novel contextual bandit algorithm for generalized linear rewards with an Õ(√(κ⁻¹ϕT)) regret over T rounds, where ϕ is the minimum eigenvalue of the covariance of contexts and κ is a lower bound on the variance of rewards. In several practical cases where ϕ = O(d), our result is the first regret bound for generalized linear model (GLM) bandits of order √d that does not rely on the approach of Auer [2002]. We achieve this bound using a novel estimator called the double doubly robust (DDR) estimator, a subclass of doubly robust (DR) estimators but with a tighter error bound. The approach of Auer [2002] achieves independence by discarding the observed rewards, whereas our algorithm achieves independence while using all contexts via the DDR estimator. We also provide an O(κ⁻¹ϕ log(NT) log T) regret bound for N arms under a probabilistic margin condition. Regret bounds under the margin condition are given by Bastani and Bayati [2020] and Bastani et al. [2021] in the setting where contexts are common to all arms but coefficients are arm-specific. When contexts differ across arms but the coefficients are common, ours is the first regret bound under the margin condition for linear models or GLMs. We conduct empirical studies using synthetic data and real examples, demonstrating the effectiveness of our algorithm.
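For readers unfamiliar with the doubly robust idea that the DDR estimator builds on, the sketch below illustrates standard DR pseudo-rewards in a contextual bandit round: unobserved arms get model-imputed rewards, and the one observed arm gets an importance-weighted correction. This is a minimal illustration of the generic DR trick, not the paper's DDR estimator; the function name `dr_pseudo_rewards`, the logistic link, and the uniform sampling policy in the usage example are assumptions for the sketch.

```python
import numpy as np

def dr_pseudo_rewards(contexts, beta_hat, chosen_arm, observed_reward, probs,
                      link=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """Doubly robust pseudo-rewards for all N arms in one round (illustrative sketch).

    contexts: (N, d) array, one context per arm (arm-specific contexts,
              shared coefficient vector, as in the paper's setting).
    beta_hat: (d,) current GLM coefficient estimate.
    chosen_arm: index of the arm actually pulled this round.
    observed_reward: reward observed for the chosen arm.
    probs: (N,) selection probabilities of the sampling policy.
    link: inverse link function mu(.) of the GLM (logistic here, an assumption).
    """
    imputed = link(contexts @ beta_hat)   # model-based imputation for every arm
    pseudo = imputed.copy()
    # Importance-weighted correction on the arm we actually observed; taking the
    # expectation over the arm choice makes each pseudo-reward unbiased for the
    # true mean reward, even though only one arm's reward was seen.
    pseudo[chosen_arm] += (observed_reward - imputed[chosen_arm]) / probs[chosen_arm]
    return pseudo

# Hypothetical usage with a uniform exploration policy:
rng = np.random.default_rng(0)
N, d = 5, 3
contexts = rng.normal(size=(N, d))    # one context per arm this round
beta_hat = rng.normal(size=d)         # current coefficient estimate
probs = np.full(N, 1.0 / N)           # uniform selection probabilities
arm = rng.choice(N, p=probs)
reward = float(rng.random() < 0.5)    # observed Bernoulli reward
print(dr_pseudo_rewards(contexts, beta_hat, arm, reward, probs))
```

Because every arm receives a pseudo-reward each round, a regression on these pseudo-rewards can use all contexts rather than only the chosen ones, which is the property the abstract contrasts with Auer's [2002] reward-discarding approach.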
