Rectified Pessimistic-Optimistic Learning for Stochastic Continuum-armed Bandit with Constraints

11/27/2022
by Hengquan Guo, et al.

This paper studies the problem of stochastic continuum-armed bandit with constraints (SCBwC), where we optimize a black-box reward function f(x) subject to a black-box constraint function g(x) ≤ 0 over a continuous space 𝒳. We model the reward and constraint functions via Gaussian processes (GPs) and propose a Rectified Pessimistic-Optimistic Learning framework (RPOL), a penalty-based method that combines optimistic GP bandit learning for the reward function with pessimistic GP bandit learning for the constraint function. We consider the metric of cumulative constraint violation ∑_{t=1}^{T} (g(x_t))^+, which is strictly stronger than the traditional long-term constraint violation ∑_{t=1}^{T} g(x_t). The rectified design of the penalty update and the pessimistic learning of the constraint function in RPOL keep the cumulative constraint violation small. RPOL achieves sublinear regret and sublinear cumulative constraint violation for SCBwC and its variants (e.g., under delayed feedback and non-stationary environments), and these theoretical guarantees match their unconstrained counterparts. Our experiments show that RPOL outperforms several existing baseline algorithms.
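To make the high-level description above concrete, the following is a minimal illustrative sketch of one penalty-based GP bandit loop in the spirit of the abstract: an optimistic (upper confidence) estimate of the reward f, a pessimistic (upper confidence) estimate of the constraint g, and a rectified penalty that grows only with positive violation (g(x_t))^+. The acquisition rule, penalty dynamics, kernel, confidence width `beta`, and the toy functions `f_true`/`g_true` are all assumptions for illustration, not the paper's exact specification.

```python
# Hypothetical sketch of a pessimistic-optimistic penalty-based GP bandit round.
# All design choices below (acquisition form, penalty update, kernel, beta) are
# assumptions made for illustration; they are not the authors' exact algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)  # discretized decision space X = [0, 1]
beta = 2.0                                           # confidence width (assumed)
Q = 0.0                                              # penalty / virtual-queue variable (assumed form)

# Toy black-box functions, unknown to the learner; used only to generate noisy feedback.
f_true = lambda x: np.sin(3 * x)                     # reward to maximize
g_true = lambda x: x - 0.6                           # constraint g(x) <= 0

X_hist, f_hist, g_hist = [], [], []
for t in range(30):
    if len(X_hist) >= 2:
        gp_f = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2).fit(np.array(X_hist), f_hist)
        gp_g = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2).fit(np.array(X_hist), g_hist)
        mu_f, sd_f = gp_f.predict(X_grid, return_std=True)
        mu_g, sd_g = gp_g.predict(X_grid, return_std=True)
        ucb_f = mu_f + beta * sd_f                   # optimistic estimate of the reward
        pess_g = mu_g + beta * sd_g                  # pessimistic (upper) estimate of the constraint
        # Penalty-based acquisition: trade off optimistic reward against the
        # penalized, rectified constraint estimate (assumed form).
        idx = np.argmax(ucb_f - Q * np.maximum(pess_g, 0.0))
        x_t = X_grid[idx]
    else:
        x_t = rng.uniform(0.0, 1.0, size=(1,))       # initial random exploration

    f_obs = f_true(x_t[0]) + 0.05 * rng.standard_normal()
    g_obs = g_true(x_t[0]) + 0.05 * rng.standard_normal()
    X_hist.append(x_t); f_hist.append(f_obs); g_hist.append(g_obs)

    # Rectified penalty update: the penalty only grows with positive violation (g)^+,
    # so rounds that satisfy the constraint do not offset past violations.
    Q = max(Q + max(g_obs, 0.0), 0.0)
```

The rectification in the last line mirrors the cumulative-violation metric ∑_{t=1}^{T} (g(x_t))^+: negative slack is never credited against positive violation, which is what makes the metric strictly stronger than the long-term constraint violation ∑_{t=1}^{T} g(x_t).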
