On Submodular Contextual Bandits
We consider the problem of contextual bandits in which actions are subsets of a ground set and mean rewards are modeled by an unknown monotone submodular function belonging to a class ℱ. We allow time-varying matroid constraints on the feasible sets. Assuming access to an online regression oracle with regret 𝖱𝖾𝗀(ℱ), our algorithm efficiently randomizes around local optima of the estimated functions according to the Inverse Gap Weighting strategy. We show that the cumulative regret of this procedure over a time horizon n scales as O(√(n 𝖱𝖾𝗀(ℱ))) against a benchmark with multiplicative approximation factor 1/2. On the other hand, using the techniques of (Filmus and Ward 2014), we show that an ϵ-Greedy procedure with local randomization attains regret O(n^(2/3) 𝖱𝖾𝗀(ℱ)^(1/3)) against the stronger (1 − e^(−1)) benchmark.
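To make the randomization step concrete, here is a minimal sketch of the Inverse Gap Weighting distribution over a finite set of candidate actions. This is an illustrative implementation under assumed inputs (a vector of estimated rewards from the regression oracle and a learning-rate parameter gamma), not the paper's exact procedure, which randomizes around local optima under matroid constraints:

```python
import numpy as np

def inverse_gap_weighting(est_rewards, gamma):
    """Turn reward estimates into a sampling distribution that
    concentrates on the empirically best action while still
    exploring actions whose estimated gap is small.

    est_rewards and gamma are illustrative parameters; in the paper's
    setting the estimates would come from the online regression oracle.
    """
    est = np.asarray(est_rewards, dtype=float)
    k = len(est)
    best = int(np.argmax(est))
    probs = np.empty(k)
    for a in range(k):
        if a != best:
            # Probability inversely proportional to the estimated gap
            # to the best action; larger gamma means greedier play.
            probs[a] = 1.0 / (k + gamma * (est[best] - est[a]))
    probs[best] = 0.0
    probs[best] = 1.0 - probs.sum()  # remaining mass goes to the leader
    return probs

# Example: three candidate actions with estimated rewards
p = inverse_gap_weighting([0.2, 0.5, 0.1], gamma=10.0)
```

As gamma grows, the distribution places more mass on the estimated maximizer, which is how the n-dependence of the regret bound is tuned.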