An O(s^r)-Resolution ODE Framework for Discrete-Time Optimization Algorithms and Applications to Convex-Concave Saddle-Point Problems

01/23/2020
by Haihao Lu, et al.

There has been a long history of using Ordinary Differential Equations (ODEs) to understand the dynamics of discrete-time optimization algorithms. One major difficulty in applying this approach, however, is that multiple ODEs can correspond to the same discrete-time algorithm, depending on how the continuous limit is taken, which makes it unclear how to obtain the suitable ODE for a given discrete-time optimization algorithm. Inspired by the recent paper <cit.>, we propose the r-th degree ODE expansion of a discrete-time optimization algorithm, which provides a principled approach to construct the unique O(s^r)-resolution ODE system for a given discrete-time algorithm, where s is the step-size of the algorithm. We utilize this machinery to study three classic algorithms – gradient method (GM), proximal point method (PPM) and extra-gradient method (EGM) – for finding a solution to the unconstrained convex-concave saddle-point problem min_{x∈ℝ^n} max_{y∈ℝ^m} L(x,y), which explains their puzzling convergent/divergent behaviors when L(x,y) is a bilinear function. Moreover, their O(s)-resolution ODEs inspire us to define the O(s)-linear-convergence condition on L(x,y), under which PPM and EGM exhibit linear convergence. This condition not only unifies the known linear convergence rates of PPM and EGM, but also shows that these two algorithms converge linearly in broader contexts.
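To make the bilinear phenomenon concrete, the following minimal Python sketch (not taken from the paper) runs the three methods on L(x,y) = xy in one dimension, whose unique saddle point is (0,0): plain gradient descent-ascent drifts away from the saddle point, while EGM and PPM contract towards it. The step-size s = 0.1 and the iteration count are illustrative choices.

# Minimal sketch (not from the paper): bilinear case L(x, y) = x*y in 1D,
# saddle point at (0, 0). Step-size s and iteration count are illustrative.
s = 0.1
x_gm, y_gm = 1.0, 1.0   # gradient method (simultaneous gradient descent-ascent)
x_eg, y_eg = 1.0, 1.0   # extra-gradient method
x_pp, y_pp = 1.0, 1.0   # proximal point method (implicit update)

for _ in range(200):
    # GM: x+ = x - s * dL/dx = x - s*y,  y+ = y + s * dL/dy = y + s*x
    x_gm, y_gm = x_gm - s * y_gm, y_gm + s * x_gm

    # EGM: take a gradient step to a midpoint, then update with the midpoint's gradient
    x_mid, y_mid = x_eg - s * y_eg, y_eg + s * x_eg
    x_eg, y_eg = x_eg - s * y_mid, y_eg + s * x_mid

    # PPM: solve the implicit system x+ = x - s*y+,  y+ = y + s*x+,
    # i.e. [[1, s], [-s, 1]] [x+, y+]^T = [x, y]^T
    d = 1.0 + s * s
    x_pp, y_pp = (x_pp - s * y_pp) / d, (y_pp + s * x_pp) / d

print(f"GM  distance to saddle: {(x_gm**2 + y_gm**2) ** 0.5:.3e}  (grows)")
print(f"EGM distance to saddle: {(x_eg**2 + y_eg**2) ** 0.5:.3e}  (shrinks)")
print(f"PPM distance to saddle: {(x_pp**2 + y_pp**2) ** 0.5:.3e}  (shrinks)")

For GM the iteration matrix has eigenvalue modulus sqrt(1 + s^2) > 1, so the iterates spiral outward, whereas for EGM and PPM the moduli are below 1 for small s, matching the convergent/divergent behaviors discussed above.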
