Distributed Random Reshuffling over Networks

12/31/2021
by Kun Huang, et al.

In this paper, we consider the distributed optimization problem where n agents, each possessing a local cost function, collaboratively minimize the average of the local cost functions over a connected network. To solve the problem, we propose a distributed random reshuffling (D-RR) algorithm that combines the classical distributed gradient descent (DGD) method with Random Reshuffling (RR). We show that D-RR inherits the superiority of RR for both smooth strongly convex and smooth nonconvex objective functions. In particular, for smooth strongly convex objective functions, D-RR achieves an 𝒪(1/T^2) rate of convergence (here, T counts the total number of iterations) in terms of the squared distance between the iterate and the unique minimizer. When the objective function is assumed to be smooth nonconvex and to have Lipschitz continuous component functions, we show that D-RR drives the squared norm of the gradient to 0 at a rate of 𝒪(1/T^{2/3}). These convergence results match those of centralized RR (up to constant factors).
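To make the algorithmic idea concrete, below is a minimal sketch of a D-RR-style update, assuming each agent holds m component functions and the network is encoded by a doubly stochastic mixing matrix W. The names (d_rr, grads, W, step_size) and the exact ordering of the gradient and mixing steps are illustrative assumptions, not taken from the paper; this is a sketch of the general pattern, not the authors' implementation.

```python
import numpy as np

def d_rr(grads, W, x0, step_size, num_epochs):
    """Sketch of a distributed random reshuffling (D-RR) style method.

    grads[i][l] -- gradient oracle of the l-th component function held by agent i
    W           -- doubly stochastic mixing matrix of the connected network
    x0          -- initial iterates, shape (n_agents, dim)
    """
    n_agents, dim = x0.shape
    m = len(grads[0])                  # number of local components per agent
    x = x0.copy()
    for _ in range(num_epochs):
        # each agent reshuffles its own local components at the start of the epoch
        perms = [np.random.permutation(m) for _ in range(n_agents)]
        for l in range(m):
            # local gradient step on the l-th component of each agent's permutation
            g = np.stack([grads[i][perms[i][l]](x[i]) for i in range(n_agents)])
            # DGD-style mixing (consensus) step with neighbors
            x = W @ (x - step_size * g)
    return x
```

The key feature of the sketch is that each agent works through a fresh permutation of its own local components every epoch instead of sampling them uniformly at random, which is the reshuffling behavior the abstract attributes to RR.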
