Oracle-Efficient Pessimism: Offline Policy Optimization in Contextual Bandits

06/13/2023
by Lequn Wang, et al.

We consider policy optimization in contextual bandits, where one is given a fixed dataset of logged interactions. Pessimistic regularizers are typically used to mitigate distribution shift, but prior implementations of them are not computationally efficient. We present the first oracle-efficient algorithm for pessimistic policy optimization: it reduces to supervised learning, leading to broad applicability. We also obtain best-effort statistical guarantees analogous to those for pessimistic approaches in prior work. We instantiate our approach for both discrete and continuous actions, and we perform extensive experiments in both settings, showing an advantage over unregularized policy optimization across a wide range of configurations.
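To make the setting concrete, below is a minimal, hypothetical sketch of pessimistic offline policy selection via inverse propensity scoring (IPS): each candidate policy is scored by a lower confidence bound, its IPS value estimate minus a penalty proportional to the estimate's empirical standard error. This is only a generic illustration of the pessimism principle, not the paper's oracle-efficient algorithm; the function names, the `alpha` pessimism weight, and the brute-force loop over candidate policies are all assumptions made for the example. The paper's contribution is precisely to replace such brute-force search with calls to a supervised-learning oracle.

```python
import numpy as np

def ips_value_and_penalty(policy_probs, logged_propensities, logged_rewards):
    """IPS value estimate and an empirical standard-error penalty.

    policy_probs: (n,) probability the candidate policy assigns to each
        logged action in its context.
    logged_propensities: (n,) probability the logging policy assigned to
        each logged action.
    logged_rewards: (n,) observed rewards.
    """
    weights = policy_probs / logged_propensities       # importance weights
    terms = weights * logged_rewards                   # per-sample IPS terms
    value = terms.mean()
    penalty = terms.std(ddof=1) / np.sqrt(len(terms))  # ~confidence-bound width
    return value, penalty

def pessimistic_policy_selection(candidate_policies, contexts, logged_actions,
                                 logged_propensities, logged_rewards,
                                 alpha=1.0):
    """Select the candidate maximizing the lower confidence bound
    value - alpha * penalty (pessimism). Brute-force over candidates;
    an oracle-efficient method would avoid this enumeration."""
    best_policy, best_lcb = None, -np.inf
    for policy in candidate_policies:
        # policy(contexts) -> (n, n_actions) action distribution per context
        probs = policy(contexts)[np.arange(len(logged_actions)), logged_actions]
        value, penalty = ips_value_and_penalty(
            probs, logged_propensities, logged_rewards)
        lcb = value - alpha * penalty
        if lcb > best_lcb:
            best_policy, best_lcb = policy, lcb
    return best_policy, best_lcb
```

The penalty shrinks as the candidate policy's action distribution stays close to the logging policy (small importance weights), so the selection rule prefers policies whose value can be estimated reliably from the logged data; this is the distribution-shift mitigation the abstract refers to.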
