D^2: Decentralized Training over Decentralized Data

03/19/2018
by Hanlin Tang, et al.

When training a machine learning model with multiple workers, each collecting data from its own data source, it is most useful when the data collected by different workers are unique and different. Ironically, recent analysis of decentralized parallel stochastic gradient descent (D-PSGD) relies on the assumption that the data hosted on different workers are not too different. In this paper, we ask: Can we design a decentralized parallel stochastic gradient descent algorithm that is less sensitive to the data variance across workers? We present D^2, a novel decentralized parallel stochastic gradient descent algorithm designed for large data variance among workers (imprecisely, "decentralized" data). The core of D^2 is a variance reduction extension of the standard D-PSGD algorithm, which improves the convergence rate from O(σ/√(nT) + (nζ^2)^(1/3)/T^(2/3)) to O(σ/√(nT)), where ζ^2 denotes the variance among the data on different workers. As a result, D^2 is robust to data variance among workers. We empirically evaluate D^2 on image classification tasks where each worker has access to data from only a limited set of labels, and find that D^2 significantly outperforms D-PSGD.
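To make the contrast concrete, below is a minimal NumPy sketch of a standard D-PSGD iteration next to a variance-corrected, D^2-style iteration built from the previous iterate and gradient. The function names, array layout, and the exact placement of the gossip-averaging step are illustrative assumptions based on the abstract's description, not the paper's reference implementation.

```python
import numpy as np

def dpsgd_step(x_t, g_t, W, lr):
    """One standard D-PSGD iteration (sketch).

    x_t : (n_workers, dim) array of current local models
    g_t : (n_workers, dim) stochastic gradients on each worker's local data
    W   : (n_workers, n_workers) symmetric, doubly stochastic mixing matrix
    lr  : step size
    """
    # Gossip-average with neighbors, then take a local stochastic gradient step.
    return W @ x_t - lr * g_t

def d2_style_step(x_t, x_prev, g_t, g_prev, W, lr):
    """One D^2-style, variance-corrected iteration (hypothetical sketch).

    Reuses the previous iterate x_prev and previous gradient g_prev so that the
    bias introduced by differing local data distributions can cancel across steps.
    """
    corrected = 2.0 * x_t - x_prev - lr * (g_t - g_prev)
    # Same gossip-averaging step as D-PSGD, applied to the corrected iterate.
    return W @ corrected
```

In this sketch the only change from D-PSGD is the corrected local update; the communication pattern (multiplication by the mixing matrix W) is unchanged, which matches the abstract's framing of D^2 as an extension of the standard D-PSGD algorithm.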
