Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient Oracle Complexity

08/07/2022
by Sally Dong, et al.

Many fundamental problems in machine learning can be formulated as the convex program $\min_{\theta \in \mathbb{R}^d} \sum_{i=1}^{n} f_i(\theta)$, where each $f_i$ is a convex, Lipschitz function supported on a subset of $d_i$ coordinates of $\theta$. One common approach to this problem, exemplified by stochastic gradient descent, involves sampling one $f_i$ term at every iteration to make progress. This approach crucially relies on a notion of uniformity across the $f_i$'s, formally captured by their condition number. In this work, we give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $O(\sum_{i=1}^{n} d_i \log(1/\epsilon))$ gradient computations, with no assumptions on the condition number. The previous best algorithm independent of the condition number is the standard cutting-plane method, which requires $O(nd \log(1/\epsilon))$ gradient computations. As a corollary, we improve upon the evaluation oracle complexity for decomposable submodular minimization by Axiotis et al. (ICML 2021). Our main technical contribution is an adaptive procedure to select an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
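To make the problem setup concrete, the sketch below (not the paper's algorithm) builds a small decomposable objective in which each term $f_i$ depends only on a few coordinates of $\theta$, and minimizes it with a plain subgradient-descent baseline. The term functions, their supports, and the finite-difference subgradient surrogate are illustrative assumptions; an exact-oracle solver would supply true subgradients of each $f_i$.

```python
# Minimal sketch of the decomposable setup: minimize sum_i f_i(theta), where each
# convex, Lipschitz f_i only reads a subset S_i of the d coordinates of theta.
import numpy as np

d = 5
# Hypothetical terms: (support indices, convex Lipschitz function of the restricted vector).
supports = [np.array([0, 1]), np.array([1, 2, 3]), np.array([3, 4])]
f_terms = [
    lambda x: np.abs(x).sum(),          # f_1(theta) = |theta_0| + |theta_1|
    lambda x: np.linalg.norm(x - 1.0),  # f_2(theta) = ||(theta_1, theta_2, theta_3) - 1||_2
    lambda x: np.max(x),                # f_3(theta) = max(theta_3, theta_4)
]

def objective(theta):
    """Evaluate sum_i f_i on theta restricted to its support S_i."""
    return sum(f(theta[S]) for f, S in zip(f_terms, supports))

def subgradient(theta, eps=1e-6):
    """Finite-difference surrogate for a subgradient -- for illustration only."""
    g = np.zeros(d)
    base = objective(theta)
    for j in range(d):
        e = np.zeros(d)
        e[j] = eps
        g[j] = (objective(theta + e) - base) / eps
    return g

# Plain subgradient descent baseline: O(1/eps^2) iterations in general for
# non-smooth convex objectives, in contrast to the nearly-linear
# gradient-oracle bound claimed in the paper.
theta = np.zeros(d)
for t in range(1, 2001):
    theta -= (1.0 / np.sqrt(t)) * subgradient(theta)
print("final objective:", objective(theta))
```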
