Heterogeneous Federated Learning on a Graph

09/19/2022
by Huiyuan Wang, et al.

Federated learning, where algorithms are trained across multiple decentralized devices without sharing local data, is increasingly popular in distributed machine learning practice. Typically, a graph structure G underlies the local devices and governs their communication. In this work, we consider parameter estimation in federated learning with data-distribution and communication heterogeneity, as well as limited computational capacity on local devices. We encode the distribution heterogeneity by parametrizing the distributions on local devices with a set of distinct p-dimensional vectors. We then propose to jointly estimate the parameters of all devices under the M-estimation framework with fused Lasso regularization, which encourages equal parameter estimates on devices connected in G. We provide a general result for our estimator that depends on G and can be further calibrated to obtain convergence rates for various specific problem setups. Surprisingly, our estimator attains the optimal rate under a certain graph fidelity condition on G, as if we could aggregate all samples sharing the same distribution. If the graph fidelity condition is not met, we propose an edge selection procedure via multiple testing to ensure optimality. To ease the burden of local computation, a decentralized stochastic version of ADMM is provided, with convergence rate O(T^-1 log T), where T denotes the number of iterations. We highlight that our algorithm transmits only parameters along the edges of G at each iteration, without requiring a central machine, which preserves privacy. We further extend it to the case where devices are randomly inaccessible during training, with a similar algorithmic convergence guarantee. The computational and statistical efficiency of our method is evidenced by simulation experiments and an application to the 2020 US presidential election data set.
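The penalized criterion described in the abstract is the sum of per-device empirical losses plus a fused-Lasso penalty over the edges of G, which shrinks the parameter vectors of connected devices toward each other. Below is a minimal Python sketch of that objective, not the paper's implementation: the squared-error local loss, the data layout, and the plain subgradient update are illustrative assumptions (the paper instead optimizes via a decentralized stochastic ADMM).

# Minimal sketch (not the authors' code) of the graph fused-Lasso penalized
# M-estimation objective. Squared-error local losses, the data layout, and
# the subgradient update are assumptions made here for illustration.
import numpy as np

def objective(theta, data, edges, lam):
    """theta: (K, p) array, one parameter vector per device.
    data: list of (X_k, y_k) pairs, one per device.
    edges: list of (j, k) index pairs, the edges of G.
    lam: fused-Lasso penalty weight."""
    loss = sum(np.mean((y - X @ theta[k]) ** 2) for k, (X, y) in enumerate(data))
    penalty = sum(np.linalg.norm(theta[j] - theta[k]) for j, k in edges)
    return loss + lam * penalty

def subgradient_step(theta, data, edges, lam, lr=0.01):
    """One (sub)gradient step on the penalized objective."""
    grad = np.zeros_like(theta)
    for k, (X, y) in enumerate(data):            # gradients of the local losses
        grad[k] += 2.0 * X.T @ (X @ theta[k] - y) / len(y)
    for j, k in edges:                           # subgradient of the edge penalty
        diff = theta[j] - theta[k]
        norm = np.linalg.norm(diff)
        if norm > 1e-12:
            grad[j] += lam * diff / norm
            grad[k] -= lam * diff / norm
    return theta - lr * grad

Setting lam = 0 recovers fully separate per-device estimation, while a large lam forces connected devices toward a common estimate, which is the mechanism behind the "aggregate all samples sharing the same distribution" behavior claimed under the graph fidelity condition.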
