Can Decentralized Learning be more robust than Federated Learning?

03/07/2023
by Mathilde Raynal, et al.

Decentralized Learning (DL) is a peer-to-peer learning approach in which a group of users jointly trains a machine learning model without a central server. To ensure correctness, DL must be robust: Byzantine users should not be able to tamper with the result of the collaboration. In this paper, we introduce two new attacks against DL in which a Byzantine user can (i) make the network converge to an arbitrary model of their choice and (ii) exclude an arbitrary user from the learning process. We demonstrate the efficiency of our attacks against Self-Centered Clipping, the state-of-the-art robust DL protocol. Finally, we show that the capabilities decentralization grants to Byzantine users imply that decentralized learning always provides less robustness than federated learning.
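For context, Self-Centered Clipping has each node aggregate its neighbors' models by clipping every neighbor's deviation toward the node's own model before mixing, which bounds how far any single Byzantine neighbor can pull the aggregate in one round. Below is a minimal Python sketch of one such aggregation round, assuming uniform mixing weights and a fixed clipping radius tau; the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def clip(z: np.ndarray, tau: float) -> np.ndarray:
    """Scale z down so its L2 norm is at most tau (identity if already smaller)."""
    norm = np.linalg.norm(z)
    return z if norm <= tau else (tau / norm) * z

def self_centered_clipping_step(x_i: np.ndarray,
                                neighbor_models: list[np.ndarray],
                                weights: list[float],
                                tau: float) -> np.ndarray:
    """One aggregation round at node i: x_i + sum_j w_ij * clip(x_j - x_i, tau).

    Each difference is clipped toward the node's own model, so a Byzantine
    neighbor's influence per round is bounded by its weight times tau.
    """
    update = sum(w * clip(x_j - x_i, tau)
                 for w, x_j in zip(weights, neighbor_models))
    return x_i + update

# Toy usage: one honest neighbor and one Byzantine neighbor sending a huge model.
x_i = np.zeros(3)
honest = np.array([0.1, -0.2, 0.3])
byzantine = np.array([100.0, 100.0, 100.0])
print(self_centered_clipping_step(x_i, [honest, byzantine], [0.5, 0.5], tau=1.0))
```

Note how the Byzantine contribution is capped at norm tau regardless of its magnitude; the paper's attacks work within exactly this kind of bounded per-round influence.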
