Finding Second-Order Stationary Point for Nonconvex-Strongly-Concave Minimax Problem

10/10/2021
by Luo Luo, et al.

We study the smooth minimax optimization problem min_x max_y f(x, y), where the objective function is strongly concave in y but possibly nonconvex in x. This problem covers many applications in machine learning, such as regularized GANs, reinforcement learning, and adversarial training. Most existing theory for gradient descent ascent focuses on establishing convergence to a first-order stationary point of f(x, y) or of the primal function P(x) ≜ max_y f(x, y). In this paper, we design a new optimization method based on cubic Newton iterations, which finds an 𝒪(ε, κ^1.5 √(ρε))-second-order stationary point of P(x) with 𝒪(κ^1.5 √ρ ε^-1.5) second-order oracle calls and 𝒪̃(κ^2 √ρ ε^-1.5) first-order oracle calls, where κ is the condition number and ρ is the Hessian smoothness coefficient of f(x, y). For high-dimensional problems, we propose a variant algorithm that avoids the expensive cost of the second-order oracle by solving the cubic subproblem inexactly via gradient descent and matrix Chebyshev expansion. This strategy still finds the desired approximate second-order stationary point with high probability, while requiring only 𝒪̃(κ^1.5 ℓ ε^-2) Hessian-vector oracle calls and 𝒪̃(κ^2 √ρ ε^-1.5) first-order oracle calls. To the best of our knowledge, this is the first work to establish non-asymptotic convergence to a second-order stationary point for minimax problems without the convex-concave assumption.
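As a rough illustration of the procedure described above (cubic Newton iterations on the primal function P(x), with the inner maximization handled by gradient ascent), here is a minimal one-dimensional sketch in Python. The toy objective f, the step sizes, the iteration counts, and the brute-force grid solver for the cubic subproblem are all illustrative assumptions, not the paper's actual method; in particular, the paper's high-dimensional variant solves the cubic subproblem inexactly via gradient descent and matrix Chebyshev expansion rather than exactly.

```python
import numpy as np

# Toy objective (an assumption, not from the paper):
#   f(x, y) = sin(x) + x*y - 0.5*mu*y^2,
# which is strongly concave in y (modulus mu) and nonconvex in x.
mu = 1.0

def grad_x(x, y):
    # Partial derivative of f with respect to x.
    return np.cos(x) + y

def grad_y(x, y):
    # Partial derivative of f with respect to y.
    return x - mu * y

def inner_max(x, y, steps=100, lr=0.5):
    # Gradient ascent on the strongly concave inner problem;
    # returns y ~ argmax_y f(x, y).
    for _ in range(steps):
        y = y + lr * grad_y(x, y)
    return y

def primal_grad_and_hess(x, y_star):
    # By Danskin's theorem, P'(x) equals the x-partial of f at (x, y*(x)).
    # For this toy f, y*(x) = x/mu, so P(x) = sin(x) + x^2/(2*mu)
    # and P''(x) = -sin(x) + 1/mu. In general, P's second-order
    # information must be assembled from the Hessian blocks of f.
    g = grad_x(x, y_star)
    H = -np.sin(x) + 1.0 / mu
    return g, H

def cubic_step(g, H, rho, radius=5.0, n=20001):
    # Minimize the cubic-regularized model
    #   m(s) = g*s + 0.5*H*s^2 + (rho/6)*|s|^3
    # by grid search (adequate in 1-D; the paper handles a
    # d-dimensional subproblem, inexactly in the high-dim variant).
    s = np.linspace(-radius, radius, n)
    m = g * s + 0.5 * H * s**2 + (rho / 6.0) * np.abs(s) ** 3
    return s[np.argmin(m)]

rho = 1.0        # assumed Hessian-smoothness parameter
x, y = 3.0, 0.0  # arbitrary starting point
for t in range(30):
    y = inner_max(x, y)                 # inner maximization over y
    g, H = primal_grad_and_hess(x, y)   # gradient/Hessian of P at x
    x = x + cubic_step(g, H, rho)       # outer cubic Newton step

print(f"x = {x:.4f}, |P'(x)| = {abs(grad_x(x, inner_max(x, y))):.2e}")
```

On this toy instance the iterates approach x ≈ -0.739, where P'(x) = cos(x) + x vanishes and P''(x) = -sin(x) + 1 > 0, i.e. a second-order stationary point of P in the sense studied in the paper.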
