Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise

02/14/2023
by   Zijian Liu, et al.

We consider stochastic optimization problems with smooth but not necessarily convex objectives in the heavy-tailed noise regime, where the noise of the stochastic gradient is assumed to have a bounded p-th moment (p ∈ (1,2]). Zhang et al. (2020) were the first to prove the Ω(T^{(1-p)/(3p-2)}) lower bound for convergence (in expectation) and provided a simple clipping algorithm that matches this optimal rate. Cutkosky and Mehta (2021) proposed another algorithm, which is shown to achieve the nearly optimal high-probability convergence guarantee O(log(T/δ) T^{(1-p)/(3p-2)}), where δ is the probability of failure. However, this desirable guarantee is established only under the additional assumption that the stochastic gradient itself is bounded in p-th moment, which fails to hold even for quadratic objectives with centered Gaussian noise. In this work, we first improve the analysis of the algorithm of Cutkosky and Mehta (2021) to obtain the same nearly optimal high-probability convergence rate O(log(T/δ) T^{(1-p)/(3p-2)}) without the above-mentioned restrictive assumption. Next, and curiously, we show that one can achieve a rate faster than the lower bound Ω(T^{(1-p)/(3p-2)}) with only a tiny bit of structure, namely when the objective function F(x) is assumed to take the form 𝔼_{Ξ∼𝒟}[f(x,Ξ)], arguably the most widely applicable class of stochastic optimization problems. For this class of problems, we propose the first variance-reduced accelerated algorithm and establish that it guarantees a high-probability convergence rate of O(log(T/δ) T^{(1-p)/(2p-1)}) under a mild condition, which is faster than Ω(T^{(1-p)/(3p-2)}). Notably, even when specialized to the finite-variance case (p = 2), our result yields the (near-)optimal high-probability rate O(log(T/δ) T^{-1/3}).
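To make the two ingredients mentioned above concrete, here is a minimal sketch (not the algorithms analyzed in the papers cited above) of (i) gradient clipping as a defense against heavy-tailed noise and (ii) a generic STORM-style variance-reduced estimator, which exploits exactly the structure F(x) = 𝔼_{Ξ∼𝒟}[f(x,Ξ)]: the same sample Ξ can be queried at two different points. The quadratic objective, the Student-t noise, and all step sizes and clipping thresholds below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10

def grad_sample(x, xi):
    """Per-sample gradient of f(x, xi) = 0.5 * ||x - xi||^2, so that
    F(x) = E[f(x, Xi)] = 0.5 * ||x||^2 + const and grad F(x) = x
    when xi is drawn from a zero-mean heavy-tailed distribution."""
    return x - xi

def draw_xi():
    # Student-t noise with 1.5 degrees of freedom: bounded p-th moment
    # only for p < 1.5, mimicking the p in (1, 2] regime of the abstract.
    return rng.standard_t(df=1.5, size=DIM)

def clip(g, tau):
    """Rescale g onto the ball of radius tau if its norm exceeds tau."""
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

def clipped_sgd(T=5000, eta=0.01, tau=10.0):
    """Plain clipped SGD with a single stochastic gradient per step."""
    x = np.ones(DIM)
    for _ in range(T):
        x -= eta * clip(grad_sample(x, draw_xi()), tau)
    return x

def clipped_storm(T=5000, eta=0.01, tau=10.0, a=0.1):
    """STORM-style recursive estimator d_t, clipped before the update.
    The correction term reuses the SAME sample xi at x_t and x_{t-1},
    which is only possible because F(x) has the form E[f(x, Xi)]."""
    x = np.ones(DIM)
    d = grad_sample(x, draw_xi())
    for _ in range(T):
        x_prev, x = x, x - eta * clip(d, tau)
        xi = draw_xi()
        d = grad_sample(x, xi) + (1 - a) * (d - grad_sample(x_prev, xi))
    return x

print("clipped SGD,   final ||grad F||:", np.linalg.norm(clipped_sgd()))
print("clipped STORM, final ||grad F||:", np.linalg.norm(clipped_storm()))
```

Because grad F(x) = x for this toy objective, the printed norms directly report how close each method gets to a stationary point; the hypothetical constants (eta, tau, a) would in practice be tuned as functions of T, p, and δ.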
