Towards Building a Robust and Fair Federated Learning System
Federated Learning (FL) has emerged as a promising practical framework for effective and scalable distributed machine learning. However, most existing FL or distributed learning frameworks have not jointly addressed two important issues: collaborative fairness and robustness to non-contributing participants (e.g., free-riders, adversaries). In particular, all participants typically receive the same access to the global model, which is unfair to the high-contributing participants. Furthermore, due to the lack of a safeguard mechanism, free-riders or malicious adversaries could game the system to access the global model for free or to sabotage it. By identifying the underlying similarity between these two issues, we investigate them simultaneously and propose a novel Robust and Fair Federated Learning (RFFL) framework that utilizes reputation scores to address both issues, ensuring that high-contributing participants are rewarded with high-performing models while low- or non-contributing participants are detected and removed. Moreover, unlike prior approaches, RFFL requires no auxiliary dataset for the reputation calculation. Extensive experiments on benchmark datasets demonstrate that RFFL achieves high fairness, is robust against several types of adversaries, delivers accuracy comparable to the conventional federated framework, and outperforms the Standalone framework.
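The abstract does not specify how the reputation scores are computed. As one illustrative sketch (not the paper's exact method), a server could score each participant by the cosine similarity between its uploaded gradient and the reputation-weighted aggregate, smoothed across rounds; the function and parameter names below are hypothetical:

```python
import numpy as np

def update_reputations(grads, reputations, alpha=0.9):
    """One hypothetical round of reputation-based aggregation.

    grads: dict mapping participant id -> flattened gradient vector
    reputations: dict mapping participant id -> current reputation score
    alpha: smoothing factor for the moving-average reputation update
    """
    # Reputation-weighted aggregate of the uploaded gradients.
    total_rep = sum(reputations.values())
    agg = sum((reputations[i] / total_rep) * g for i, g in grads.items())

    new_reps = {}
    for i, g in grads.items():
        # Cosine similarity to the aggregate serves as a proxy for this
        # participant's contribution in the current round -- no auxiliary
        # dataset is needed, only the gradients themselves.
        sim = float(np.dot(g, agg) /
                    (np.linalg.norm(g) * np.linalg.norm(agg) + 1e-12))
        # Exponential moving average smooths reputation across rounds;
        # negative similarity (e.g., a sign-flipping adversary) earns nothing.
        new_reps[i] = alpha * reputations[i] + (1 - alpha) * max(sim, 0.0)
    return agg, new_reps
```

Participants whose reputation falls below a threshold could then be removed from future rounds, and each participant's access to the global update could be scaled by its reputation, linking robustness and fairness through the same score.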