Generalized Federated Learning via Sharpness Aware Minimization

06/06/2022
by Zhe Qu, et al.

Federated Learning (FL) is a promising framework for performing privacy-preserving, distributed learning with a set of clients. However, the data distributions among clients are often non-IID, i.e., they exhibit distribution shift, which makes efficient optimization difficult. To tackle this problem, many FL algorithms focus on mitigating the effects of data heterogeneity across clients by improving the performance of the global model. However, almost all of these algorithms adopt Empirical Risk Minimization (ERM) as the local optimizer, which tends to drive the global model into a sharp loss valley and to increase the deviation of some local clients. Therefore, in this paper, we revisit solutions to the distribution shift problem in FL with a focus on the generality of local learning. To this end, we propose a general, effective algorithm, FedSAM, based on a Sharpness Aware Minimization (SAM) local optimizer, and develop a momentum FL algorithm, MoFedSAM, to bridge local and global models. Theoretically, we provide the convergence analysis of these two algorithms and demonstrate the generalization bound of FedSAM. Empirically, our proposed algorithms substantially outperform existing FL studies and significantly decrease the learning deviation.
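
To make the local-optimizer idea concrete, below is a minimal sketch (in PyTorch) of a SAM-style local step that a client could run before returning its model to the server. This illustrates the generic SAM update used locally, not the authors' exact FedSAM or MoFedSAM implementation; the function name and hyperparameters (lr, rho, local_steps) are hypothetical.

```python
import torch

def sam_local_update(model, loader, loss_fn, lr=0.01, rho=0.05, local_steps=5):
    """One client's local training pass with a SAM-style optimizer (sketch)."""
    for step, (x, y) in enumerate(loader):
        if step >= local_steps:
            break

        # 1) Gradient at the current weights.
        loss = loss_fn(model(x), y)
        model.zero_grad()
        loss.backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))

        # 2) Ascent step: perturb the weights toward the locally sharpest
        #    direction within an L2 ball of radius rho.
        eps = [rho * g / (grad_norm + 1e-12) for g in grads]
        with torch.no_grad():
            for p, e in zip(model.parameters(), eps):
                p.add_(e)

        # 3) Gradient at the perturbed weights.
        loss = loss_fn(model(x), y)
        model.zero_grad()
        loss.backward()

        # 4) Undo the perturbation, then descend with the sharpness-aware gradient.
        with torch.no_grad():
            for p, e in zip(model.parameters(), eps):
                p.sub_(e)
                p.sub_(lr * p.grad)

    # The client returns its updated weights for server-side aggregation.
    return model.state_dict()
```

In a federated round, the server would aggregate the state dicts returned by the sampled clients (e.g., FedAvg-style weighted averaging) to form the next global model; a momentum variant additionally mixes the global update direction into each client's local steps.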
