Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions

06/10/2020
by   Peng Zhao, et al.

In this paper, we present an improved analysis of the dynamic regret of strongly convex and smooth functions. Specifically, we investigate the Online Multiple Gradient Descent (OMGD) algorithm proposed by Zhang et al. (2017). The original analysis shows that the dynamic regret of OMGD is at most O(min{P_T, S_T}), where P_T and S_T are the path-length and the squared path-length, which measure the cumulative movement of the minimizers of the online functions. We demonstrate that, via an improved analysis, the dynamic regret of OMGD can be sharpened to O(min{P_T, S_T, V_T}), where V_T is the function variation of the online functions. Note that the quantities P_T, S_T, and V_T essentially reflect different aspects of environmental non-stationarity: they are not comparable in general, and each is favored in a different scenario. Therefore, the dynamic regret bound presented in this paper achieves a best-of-three-worlds guarantee and is strictly tighter than previous results.
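To make the algorithm under discussion concrete, the sketch below implements the core idea of OMGD: after observing the round-t loss f_t, the learner takes multiple gradient-descent steps on f_t (rather than a single step) before committing to the next decision. The drifting quadratic losses, step size, and number of inner steps are illustrative assumptions for this sketch, not the paper's exact setting or parameters.

```python
import math

def omgd(grad_fns, x0, eta=0.5, K=5):
    """Online Multiple Gradient Descent (sketch).

    After observing the round-t loss (through its gradient oracle),
    take K gradient-descent steps on it to produce the next decision
    x_{t+1}. Setting K = 1 recovers plain online gradient descent.
    """
    x = x0
    decisions = [x]
    for grad in grad_fns:
        for _ in range(K):  # multiple inner steps on the current loss
            x -= eta * grad(x)
        decisions.append(x)
    return decisions

# Illustrative drifting losses f_t(x) = (x - c_t)^2 / 2, whose minimizer
# c_t = sin(0.1 t) moves slowly, i.e. the path-length P_T is small.
centers = [math.sin(0.1 * t) for t in range(50)]
grads = [lambda x, c=c: x - c for c in centers]
xs = omgd(grads, x0=0.0)
```

With strongly convex and smooth losses, each block of K inner steps contracts the distance to the current minimizer, which is why the learner can track a slowly moving comparator sequence closely.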
