Robust Federated Learning Using ADMM in the Presence of Data Falsifying Byzantines

10/14/2017
by Qunwei Li, et al.

In this paper, we consider the problem of federated (or decentralized) learning using ADMM with multiple agents. We consider a scenario in which a certain fraction of agents (referred to as Byzantines) provide falsified data to the system. In this context, we study the convergence behavior of the decentralized ADMM algorithm. We show that, under certain conditions, ADMM converges linearly to a neighborhood of the true solution. We then provide guidelines for designing the network structure to achieve faster convergence, and derive necessary conditions on the falsified updates for exact convergence to the true solution. To tackle the data falsification problem, we propose a robust variant of ADMM. Finally, we present simulation results that validate the analysis and demonstrate the resilience of the proposed algorithm to Byzantines.
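For context, the sketch below implements decentralized consensus ADMM for a least-squares problem on a small agent network, with one Byzantine agent that falsifies the iterates it shares. The ring topology, penalty parameter rho, noise magnitudes, and the coordinate-wise-median screening used in the "robust" run are illustrative assumptions; in particular, the screening rule is a simple stand-in for robustness, not the variant proposed in the paper.

```python
# Minimal sketch: decentralized consensus ADMM with one data-falsifying agent.
# All problem sizes, the ring graph, rho, and the median screening rule are
# illustrative assumptions, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)
n_agents, d, m = 6, 3, 20            # agents, model dimension, samples per agent
rho = 1.0                            # ADMM penalty parameter (assumed value)
x_true = rng.normal(size=d)

# Local data: each agent i holds (A_i, b_i) with b_i = A_i x_true + noise.
A = [rng.normal(size=(m, d)) for _ in range(n_agents)]
b = [A[i] @ x_true + 0.01 * rng.normal(size=m) for i in range(n_agents)]

# Ring communication graph and the set of Byzantine agents.
neighbors = [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]
byzantine = {0}

def broadcast(vals):
    # What each agent shares with its neighbors; Byzantines send falsified vectors.
    return [vals[i] + 5.0 * rng.normal(size=d) if i in byzantine else vals[i]
            for i in range(n_agents)]

def screen(recv, own, robust):
    # Robust run: replace neighbor reports with the coordinate-wise median of the
    # reports and the agent's own iterate (an illustrative heuristic).
    if not robust:
        return recv
    agg = np.median(np.vstack(recv + [own]), axis=0)
    return [agg] * len(recv)

def run_admm(robust, iters=200):
    x = [np.zeros(d) for _ in range(n_agents)]
    alpha = [np.zeros(d) for _ in range(n_agents)]   # dual variables
    for _ in range(iters):
        sent = broadcast(x)
        x_new = []
        for i in range(n_agents):
            recv = screen([sent[j] for j in neighbors[i]], x[i], robust)
            deg = len(neighbors[i])
            # Closed-form primal update of
            #   argmin_z 0.5||A_i z - b_i||^2 + alpha_i^T z
            #            + rho * sum_j ||z - (x_i + x_j)/2||^2
            rhs = A[i].T @ b[i] - alpha[i] + rho * sum(x[i] + xj for xj in recv)
            H = A[i].T @ A[i] + 2.0 * rho * deg * np.eye(d)
            x_new.append(np.linalg.solve(H, rhs))
        sent_new = broadcast(x_new)                  # second exchange for the dual step
        for i in range(n_agents):
            recv = screen([sent_new[j] for j in neighbors[i]], x_new[i], robust)
            alpha[i] = alpha[i] + rho * sum(x_new[i] - xj for xj in recv)
        x = x_new
    errs = [np.linalg.norm(x[i] - x_true) for i in range(n_agents) if i not in byzantine]
    return float(np.mean(errs))

print("honest-agent error, no screening:    ", run_admm(robust=False))
print("honest-agent error, median screening:", run_admm(robust=True))
```

With degree-2 neighborhoods, the median over two neighbor reports and the agent's own iterate can discard a single outlier per coordinate, so the robust run should stay near the true solution while the unscreened run illustrates how even one falsifying agent can bias the honest agents' iterates.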
