Policy Gradients for Contextual Bandits

02/12/2018
by Feiyang Pan, et al.

We study a generalized contextual-bandit problem in which a state determines the distribution of arm contexts and affects the immediate reward of choosing an arm. The problem applies to a wide range of realistic settings such as personalized recommender systems and natural language generation. We put forward a class of policies in which the marginal probability of choosing an arm (in expectation over the other arms) in each state has a simple closed form and is differentiable. In particular, the policy gradient for this class has a succinct form: an expectation, over pairs of states and single contexts, of the action-value multiplied by the gradient of the marginal probability. These findings naturally lead to an algorithm, coined Policy Gradient for Contextual Bandits (PGCB). As a further theoretical guarantee, we show that the variance of PGCB is lower than that of the standard policy gradient algorithm. We also derive the off-policy gradient, and evaluate PGCB on a toy dataset as well as a music recommendation dataset. Experiments show that PGCB outperforms both classic contextual-bandit methods and policy gradient methods.
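The abstract describes the PGCB gradient only at a high level (an expectation of the action-value times the gradient of a marginal choice probability), and its exact estimator is not reproduced here. For reference, below is a minimal sketch of the standard score-function (REINFORCE-style) policy gradient for a contextual bandit with a linear-softmax policy, i.e. the kind of baseline the paper compares against; the function names (`policy_gradient_step`, `reward_fn`) and the linear-softmax parameterization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def policy_gradient_step(w, arm_features, reward_fn, lr=0.05, rng=None):
    """One REINFORCE-style update for a linear-softmax contextual-bandit policy.

    w:            (d,) policy weights
    arm_features: (K, d) feature vector of each of the K arms in this context
    reward_fn:    callable(arm_index) -> observed reward (hypothetical environment hook)
    """
    rng = rng or np.random.default_rng()
    probs = softmax(arm_features @ w)          # pi_w(a | context)
    a = rng.choice(len(probs), p=probs)        # sample one arm
    r = reward_fn(a)                           # bandit feedback for the chosen arm only
    # score-function gradient: grad_w log pi_w(a | x) = x_a - E_{b ~ pi_w}[x_b]
    grad_log_pi = arm_features[a] - probs @ arm_features
    return w + lr * r * grad_log_pi            # ascend the estimated gradient
```

In use, such a sketch would call `policy_gradient_step` once per observed context, carrying the updated weights into the next round of bandit feedback; PGCB, as described in the abstract, instead differentiates a closed-form marginal probability per state, which is what yields its variance reduction over this vanilla estimator.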
