Improved Complexities for Stochastic Conditional Gradient Methods under Interpolation-like Conditions

06/15/2020
by Tesi Xiao, et al.

We analyze stochastic conditional gradient type methods for constrained optimization problems arising in over-parametrized machine learning. We show that one can leverage the interpolation-like conditions satisfied by such models to obtain improved complexities for conditional gradient type methods. For this class of problems, when the objective function is convex, we show that the conditional gradient method requires O(ϵ^-2) calls to the stochastic gradient oracle to find an ϵ-optimal solution. Furthermore, by including a gradient sliding step, the number of calls reduces to O(ϵ^-1.5). We also establish similar improved results in the zeroth-order setting, where only noisy function evaluations are available. Notably, the above results are achieved without any variance-reduction techniques, thereby demonstrating the improved performance of vanilla versions of conditional gradient methods for over-parametrized machine learning problems.
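To make the setting concrete, below is a minimal sketch of a vanilla stochastic conditional gradient (Frank-Wolfe) iteration of the kind analyzed in the abstract. The constraint set (an l1-ball), the synthetic least-squares objective, and all function names are illustrative assumptions, not the paper's exact algorithm or experiments; the paper's point is that, under interpolation-like conditions, this plain iteration already enjoys improved oracle complexity without variance reduction.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over an l1-ball (assumed constraint set for
    illustration): returns argmin_{||s||_1 <= radius} <grad, s>."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def stochastic_frank_wolfe(stochastic_grad, x0, num_iters=500, batch_size=32, radius=1.0):
    """Vanilla stochastic conditional gradient method (no variance reduction).

    `stochastic_grad(x, batch_size)` is assumed to return an unbiased minibatch
    gradient estimate at x; interpolation-like conditions make its variance
    shrink near the optimum, which is what the improved analysis exploits.
    """
    x = x0.copy()
    for t in range(num_iters):
        g = stochastic_grad(x, batch_size)   # stochastic first-order oracle call
        s = lmo_l1_ball(g, radius)           # LMO call over the constraint set
        gamma = 2.0 / (t + 2)                # standard open-loop step size
        x = (1 - gamma) * x + gamma * s      # convex-combination update stays feasible
    return x

# Hypothetical usage on a synthetic over-parametrized least-squares problem
# (more parameters than data points, so the model can interpolate):
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
b = rng.standard_normal(50)

def minibatch_grad(x, m):
    idx = rng.choice(A.shape[0], size=m, replace=False)
    Ai, bi = A[idx], b[idx]
    return 2.0 * Ai.T @ (Ai @ x - bi) / m

x_hat = stochastic_frank_wolfe(minibatch_grad, np.zeros(200), radius=5.0)
```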
