Oracle-Efficient Online Learning for Beyond Worst-Case Adversaries

02/17/2022
by   Nika Haghtalab, et al.

In this paper, we study oracle-efficient algorithms for beyond-worst-case analysis of online learning. We focus on two settings. First, the smoothed analysis setting of [RST11, HRS21], where the adversary is constrained to generating samples from distributions whose density is upper bounded by 1/σ times the uniform density. Second, the setting of K-hint transductive learning, where the learner is given access to K hints per time step that are guaranteed to include the true instance. We give the first known oracle-efficient algorithms for both settings that depend only on the VC dimension of the class and the parameters σ and K that capture the power of the adversary. In particular, we achieve oracle-efficient regret bounds of O( √(T d σ^-1/2) ) and O( √(T d K) ), respectively, for these settings. For the smoothed analysis setting, our results give the first oracle-efficient algorithm for online learning with smoothed adversaries [HRS21]. This stands in contrast to the computational separation between online learning with worst-case adversaries and offline learning established by [HK16]. Our algorithms also achieve improved bounds for the worst-case setting with small domains. In particular, we give an oracle-efficient algorithm with regret O( √(T (d|𝒳|)^1/2) ), which refines the earlier O( √(T|𝒳|) ) bound of [DS16].
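To make the smoothed-analysis constraint concrete, the sketch below checks the σ-smoothness condition from the abstract for a discrete distribution over a finite domain: every probability mass must be at most 1/σ times the corresponding uniform mass. This is an illustrative helper, not part of the paper's algorithms; the function name and tolerance are our own.

```python
def is_sigma_smooth(p, sigma):
    """Check sigma-smoothness w.r.t. the uniform distribution on n points.

    A distribution p over n outcomes is sigma-smooth if its density is
    bounded by (1/sigma) times the uniform density, i.e. for every i:
        p[i] <= (1/sigma) * (1/n)
    Smaller sigma permits more concentrated (more adversarial) choices.
    """
    n = len(p)
    bound = 1.0 / (sigma * n)
    # Small tolerance guards against floating-point rounding.
    return all(pi <= bound + 1e-12 for pi in p)

# The uniform distribution on 4 points is sigma-smooth for sigma = 1.
print(is_sigma_smooth([0.25, 0.25, 0.25, 0.25], sigma=1.0))  # True
# A point mass violates smoothness for sigma = 0.5 (bound is 0.5 < 1.0).
print(is_sigma_smooth([1.0, 0.0, 0.0, 0.0], sigma=0.5))      # False
```

As σ → 0 the bound 1/(σn) loosens until any distribution qualifies, recovering the worst-case adversary; σ = 1 forces the adversary to play exactly uniformly.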

