Complete Policy Regret Bounds for Tallying Bandits

04/24/2022
by Dhruv Malik, et al.

Policy regret is a well-established notion for measuring the performance of an online learning algorithm against an adaptive adversary. We study restrictions on the adversary that enable efficient minimization of the complete policy regret, which is the strongest possible version of policy regret. We identify a gap in the current theoretical understanding of which restrictions permit tractability in this challenging setting. To resolve this gap, we consider a generalization of the stochastic multi-armed bandit, which we call the tallying bandit. This is an online learning setting with an m-memory-bounded adversary, where the average loss for playing an action is an unknown function of the number (or tally) of times that the action was played in the last m timesteps. For tallying bandit problems with K actions and time horizon T, we provide an algorithm that with high probability achieves a complete policy regret guarantee of 𝒪̃(mK√(T)), where the 𝒪̃ notation hides only logarithmic factors. We additionally prove an Ω̃(√(mKT)) lower bound on the expected complete policy regret of any tallying bandit algorithm, demonstrating the near optimality of our method.
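To make the tallying bandit setting concrete, below is a minimal sketch of an environment in which each action's mean loss depends only on its tally over the last m plays. The loss functions, noise model, and class/parameter names here are illustrative assumptions for exposition, not the paper's construction or algorithm.

```python
# Hypothetical sketch of a tallying bandit environment (not the paper's code).
from collections import deque
import random


class TallyingBanditEnv:
    """Mean loss of an action depends only on its tally in the last m plays."""

    def __init__(self, loss_fns, m, noise_std=0.1, seed=0):
        self.loss_fns = loss_fns          # K functions: tally -> mean loss (assumed)
        self.m = m                        # adversary's memory length
        self.history = deque(maxlen=m)    # actions played in the last m timesteps
        self.noise_std = noise_std        # illustrative Gaussian noise level
        self.rng = random.Random(seed)

    def step(self, action):
        # Tally: number of times `action` appears among the last m plays
        # (counting the current play, one natural convention).
        self.history.append(action)
        tally = sum(1 for a in self.history if a == action)
        mean_loss = self.loss_fns[action](tally)
        return mean_loss + self.rng.gauss(0.0, self.noise_std)


# Usage: two actions whose mean loss grows with how often they were recently played.
env = TallyingBanditEnv(loss_fns=[lambda t: 0.1 * t, lambda t: 0.05 * t**2], m=5)
losses = [env.step(action=t % 2) for t in range(10)]
```

Under this setup, a learner that repeatedly plays one action drives up its tally (and hence its loss), which is why complete policy regret, rather than standard regret, is the appropriate benchmark.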
