Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs

06/02/2022
by Shinji Ito, et al.

This study considers online learning with general directed feedback graphs. For this problem, we present best-of-both-worlds algorithms that achieve nearly tight regret bounds for adversarial environments as well as poly-logarithmic regret bounds for stochastic environments. As Alon et al. [2015] have shown, tight regret bounds depend on the structure of the feedback graph: strongly observable graphs yield minimax regret of Θ̃( α^1/2 T^1/2 ), while weakly observable graphs induce minimax regret of Θ̃( δ^1/3 T^2/3 ), where α and δ denote, respectively, the independence number of the graph and the domination number of a certain portion of the graph. Our proposed algorithm for strongly observable graphs attains a regret bound of Õ( α^1/2 T^1/2 ) for adversarial environments and of O( α (ln T)^3 / Δ_min ) for stochastic environments, where Δ_min denotes the minimum suboptimality gap. This result resolves an open question raised by Erez and Koren [2021]. We also provide an algorithm for weakly observable graphs that achieves a regret bound of Õ( δ^1/3 T^2/3 ) for adversarial environments and poly-logarithmic regret for stochastic environments. The proposed algorithms are based on the follow-the-perturbed-leader approach combined with newly designed update rules for learning rates.
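To make the setting concrete, the sketch below shows a minimal follow-the-perturbed-leader loop under a feedback graph: playing an arm reveals the losses of all its out-neighbors. This is an illustrative simplification with a fixed learning rate `eta` and Gumbel perturbations, not the paper's algorithm, whose key ingredient is the adaptive update rule for the learning rate; the function and parameter names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ftpl_feedback_graph(loss_fn, neighbors, n_arms, T, eta=0.1):
    """Minimal FTPL sketch for online learning with a feedback graph.

    Playing arm i reveals the losses of every arm in neighbors[i].
    A fixed learning rate eta is used for simplicity; the paper's
    best-of-both-worlds guarantees rely on adapting eta over time.
    Returns the learner's total incurred loss over T rounds.
    """
    cum_loss = np.zeros(n_arms)  # cumulative observed losses per arm
    total = 0.0
    for t in range(T):
        # Perturb cumulative losses and follow the perturbed leader.
        perturbation = rng.gumbel(size=n_arms)
        arm = int(np.argmin(cum_loss - perturbation / eta))
        losses = loss_fn(t)  # full loss vector; hidden from the learner
        total += losses[arm]
        # Update only the entries revealed by the feedback graph.
        for j in neighbors[arm]:
            cum_loss[j] += losses[j]
    return total
```

With a strongly observable (here, fully connected) graph and i.i.d.-style fixed losses, the loop quickly concentrates on the best arm, illustrating the small stochastic-regret regime the paper targets.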
