Variance-reduced Clipping for Non-convex Optimization

03/02/2023
by Amirhossein Reisizadeh, et al.

Gradient clipping is a standard training technique used in deep learning applications such as large-scale language modeling to mitigate exploding gradients. Recent experimental studies have demonstrated a distinctive behavior in the smoothness of the training objective along its trajectory when trained with gradient clipping: the smoothness grows with the gradient norm. This is in clear contrast to the well-established L-smoothness assumption of classical non-convex optimization, where the smoothness is assumed to be bounded globally by a constant L. The recently introduced (L_0,L_1)-smoothness is a more relaxed notion that captures such behavior in non-convex optimization. In particular, it has been shown that under this relaxed smoothness assumption, SGD with clipping requires O(ϵ^-4) stochastic gradient computations to find an ϵ-stationary solution. In this paper, we employ a variance reduction technique, namely SPIDER, and demonstrate that for a carefully designed learning rate, this complexity is improved to O(ϵ^-3), which is order-optimal. The corresponding learning rate incorporates the clipping technique to mitigate the growing smoothness. Moreover, when the objective function is the average of n components, we improve the existing O(nϵ^-2) bound on the stochastic gradient complexity to the order-optimal O(√(n)ϵ^-2 + n).
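The abstract does not spell out the update rule, but the core idea is a SPIDER-type recursive gradient estimator combined with a clipped (normalized) step size; (L_0,L_1)-smoothness is commonly defined by the bound ||∇²f(x)|| ≤ L_0 + L_1 ||∇f(x)||. The sketch below is an illustrative assumption, not the paper's algorithm: the toy objective, refresh period q, batch size, and the constants eta and gamma are placeholders chosen only to make the example run.

```python
import numpy as np

# Toy finite-sum objective: f(x) = (1/n) * sum_i log(1 + (a_i . x - b_i)^2),
# a smooth non-convex regression loss used here purely for illustration.
rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad(x, idx):
    """Mini-batch gradient of the toy loss over the samples in idx."""
    r = A[idx] @ x - b[idx]
    return (A[idx] * (2 * r / (1 + r**2))[:, None]).mean(axis=0)

def spider_clipped(x0, T=2000, q=32, batch=32, eta=0.5, gamma=0.1):
    """SPIDER-style variance-reduced estimator with a clipped step size (sketch)."""
    x_prev = x = x0.copy()
    v = grad(x, np.arange(n))            # full-batch gradient at the anchor point
    for t in range(1, T + 1):
        # Clipped learning rate: plain eta while the estimate is small,
        # gamma / ||v|| (a normalized step) once ||v|| grows large.
        step = min(eta, gamma / (np.linalg.norm(v) + 1e-12))
        x_prev, x = x, x - step * v
        if t % q == 0:
            v = grad(x, np.arange(n))    # periodically refresh with a full-batch gradient
        else:
            idx = rng.choice(n, size=batch, replace=False)
            # Recursive SPIDER correction: adjust the previous estimate by the
            # gradient difference measured on the same mini-batch.
            v = grad(x, idx) - grad(x_prev, idx) + v
    return x

x_hat = spider_clipped(np.zeros(d))
print("final gradient norm:", np.linalg.norm(grad(x_hat, np.arange(n))))
```

The min(eta, gamma/||v||) rule is where clipping meets the relaxed smoothness assumption: when the gradient estimate is large, the effective step shrinks proportionally, so the iterates stay in a region where the local smoothness, which grows with the gradient norm under (L_0,L_1)-smoothness, remains controlled.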
