Explicit Best Arm Identification in Linear Bandits Using No-Regret Learners
We study the problem of best arm identification in linearly parameterised multi-armed bandits. Given a set of feature vectors X ⊂ R^d, a confidence parameter δ and an unknown vector θ^* ∈ R^d, the goal is to identify arg max_{x∈X} x^T θ^*, with probability at least 1-δ, using noisy measurements of x^T θ^* for chosen arms x ∈ X. For this fixed-confidence (δ-PAC) setting, we propose an explicitly implementable algorithm with provably order-optimal sample complexity, whereas previous approaches rely on access to minimax optimization oracles. The algorithm, which we call the Phased Elimination Linear Exploration Game (PELEG), maintains a high-probability confidence ellipsoid containing θ^* in each round and uses it to eliminate suboptimal arms in phases. PELEG achieves fast shrinkage of this confidence ellipsoid along the most confusing (i.e., close to, but not optimal) directions by interpreting the problem as a two-player zero-sum game and sequentially converging to its saddle point, using low-regret learners to compute the players' strategies in each round. We analyze the sample complexity of PELEG and show that it matches, up to order, an instance-dependent lower bound on sample complexity in the linear bandit setting. We also provide numerical results for the proposed algorithm, consistent with its theoretical guarantees.
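The game-theoretic subroutine at the heart of this approach admits a compact illustration. Below is a minimal, hypothetical Python sketch (not the authors' PELEG implementation) of approximating the saddle point of the min-max design game min_{λ∈Δ_X} max_{y∈Y} y^T A(λ)^{-1} y, where A(λ) = Σ_i λ_i x_i x_i^T and Y collects candidate "confusing" directions (e.g., differences of active arms). In this sketch the direction player runs Hedge (exponential weights) while the sampling player greedily best-responds to a linearisation of its loss; the function name, learner pairing, and step-size choices are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def approx_saddle_point(X, Y, n_iters=2000, eta=0.05, reg=1e-6):
    """Hypothetical sketch: approximate the saddle point of
    min_{lam in simplex} max_{y in Y} y^T A(lam)^{-1} y.

    X: (K, d) array of active-arm feature vectors.
    Y: (M, d) array of candidate directions (e.g., differences of arms).
    Returns an approximately optimal sampling distribution over the K arms.
    """
    K, d = X.shape
    M = Y.shape[0]
    log_q = np.zeros(M)   # Hedge (exponential-weights) state for the y-player
    counts = np.ones(K)   # cumulative pulls accumulated by the lambda-player
    for _ in range(n_iters):
        lam = counts / counts.sum()
        A = (X.T * lam) @ X + reg * np.eye(d)  # A(lam) = sum_i lam_i x_i x_i^T
        A_inv = np.linalg.inv(A)
        # y-player: exponential-weights ascent on its payoff y^T A(lam)^{-1} y.
        gains = np.einsum('md,de,me->m', Y, A_inv, Y)
        log_q += eta * gains / gains.max()     # normalised step for stability
        q = np.exp(log_q - log_q.max())
        q /= q.sum()
        # lambda-player: best response to the linearised loss. Since
        # d/d lam_i E_q[y^T A(lam)^{-1} y] = -E_q[(x_i^T A_inv y)^2],
        # the greedy arm maximises E_q[(x_i^T A_inv y)^2].
        P = X @ A_inv @ Y.T                    # P[i, m] = x_i^T A_inv y_m
        counts[np.argmax((P ** 2) @ q)] += 1.0
    return counts / counts.sum()
```

Pitting a no-regret learner against a best-responding opponent drives the players' average strategies toward the game's value, which is what yields fast shrinkage of the ellipsoid along the confusing directions. The full algorithm would wrap such a subroutine inside each elimination phase, together with the confidence radii, stopping rule, and arm-elimination step, all of which this sketch omits.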