Efficient Online Convex Optimization with Adaptively Minimax Optimal Dynamic Regret

06/30/2019
by Hakan Gokcesu, et al.

We introduce an online convex optimization algorithm that uses projected sub-gradient descent with ideal adaptive learning rates, where each computation is performed efficiently in a sequential manner. For the first time in the literature, this algorithm provides an adaptively minimax optimal dynamic regret guarantee for a sequence of convex functions without any restrictions -- such as strong convexity, smoothness or even Lipschitz continuity -- against a comparator decision sequence with bounded total successive changes. We show optimality by constructing a worst-case adaptive dynamic regret lower bound, which is built from the actual sub-gradient norms and matches our guarantees. We discuss the advantages of our algorithm over adaptive projection with sub-gradient self outer products, and also derive an extension that learns independently in each decision coordinate. Additionally, we demonstrate how to best preserve our guarantees, in a truly online manner, when the bound on the total successive changes of the dynamic comparator sequence grows over time.
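The authors' exact update rule is only specified in the full text. As a rough illustration of the generic template the abstract describes -- projected sub-gradient descent whose learning rate adapts to the observed sub-gradient norms -- here is a minimal Python sketch. The Euclidean-ball decision set, the AdaGrad-style scalar step size, and all function names are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def project_onto_ball(x, radius):
    """Euclidean projection onto the set {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_projected_subgradient(subgradient_oracle, T, dim, radius=1.0):
    """Projected online sub-gradient descent with an adaptive,
    AdaGrad-style scalar learning rate (an assumption for this sketch;
    the paper's ideal adaptive rates may differ).

    subgradient_oracle(t, x) should return a sub-gradient of the t-th
    convex loss at the current decision x. The step size adapts to the
    cumulative squared sub-gradient norms observed so far, so no
    Lipschitz constant needs to be known in advance.
    """
    x = np.zeros(dim)
    decisions = []
    cum_sq_norm = 0.0
    for t in range(T):
        decisions.append(x.copy())
        g = subgradient_oracle(t, x)
        cum_sq_norm += float(np.dot(g, g))
        # Adaptive learning rate: shrinks as observed gradients accumulate.
        eta = radius / np.sqrt(cum_sq_norm) if cum_sq_norm > 0 else 0.0
        x = project_onto_ball(x - eta * g, radius)
    return decisions
```

The scalar step size eta_t = D / sqrt(sum of squared sub-gradient norms) is the standard adaptive choice whose regret depends on the actually observed sub-gradient norms, which is consistent with the gradient-norm-dependent lower bound the abstract mentions.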
