Fighting Bandits with a New Kind of Smoothness

12/14/2015
by Jacob Abernethy, et al.

We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing. We prove two main results. First, we show that regularization via the Tsallis entropy, which includes EXP3 as a special case, achieves the Θ(√(TN)) minimax regret. Second, we show that a wide class of perturbation methods achieve a near-optimal regret as low as O(√(TN log N)) if the perturbation distribution has a bounded hazard rate. For example, the Gumbel, Weibull, Frechet, Pareto, and Gamma distributions all satisfy this key property.
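The abstract gives no code, but the perturbation idea is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of a follow-the-perturbed-leader bandit with Gumbel perturbations, the special case in which the arm-selection distribution has a closed form (a softmax), which is what makes the importance-weighted loss estimates computable and is one sense in which the perturbation family recovers EXP3. The function name, the learning rate `eta`, and the assumption of losses in [0, 1] are illustrative choices, not the paper's; the authors' gradient-based prediction analysis is more general.

```python
import numpy as np

def ftpl_gumbel_bandit(loss_matrix, eta=0.1, rng=None):
    """Follow-the-perturbed-leader for the adversarial N-armed bandit,
    using Gumbel perturbations. With Gumbel noise the arm-selection
    distribution is a softmax over (negated) cumulative loss estimates,
    so unbiased importance-weighted estimates are available in closed
    form. This is a sketch, not the paper's exact formulation.

    loss_matrix: (T, N) array of adversarial losses, assumed in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    T, N = loss_matrix.shape
    cum_est_loss = np.zeros(N)  # cumulative importance-weighted loss estimates
    total_loss = 0.0
    for t in range(T):
        # Closed-form selection probabilities: softmax(-eta * cum_est_loss),
        # shifted by the min for numerical stability.
        probs = np.exp(-eta * (cum_est_loss - cum_est_loss.min()))
        probs /= probs.sum()
        # Gumbel-max trick: the argmax of the perturbed scores is an
        # exact draw from the softmax distribution above.
        noise = rng.gumbel(size=N)
        arm = int(np.argmax(-eta * cum_est_loss + noise))
        loss = loss_matrix[t, arm]
        total_loss += loss
        # Bandit feedback: unbiased estimate of the full loss vector.
        cum_est_loss[arm] += loss / probs[arm]
    return total_loss

# Example: 10,000 rounds, 5 arms, i.i.d. uniform losses (illustration only;
# the regret guarantees hold against adversarial loss sequences).
rng = np.random.default_rng(0)
losses = rng.uniform(size=(10_000, 5))
print(ftpl_gumbel_bandit(losses, eta=0.05, rng=rng))
```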
