FedComm: Understanding Communication Protocols for Edge-based Federated Learning
Federated learning (FL) trains machine learning (ML) models on devices using locally generated data and exchanges model updates, rather than raw data, with a distant server. This exchange incurs a communication overhead that affects the performance of FL training. However, there is limited understanding of how communication protocols specifically contribute to the performance of FL. Such an understanding is essential for selecting the right communication protocol when designing an FL system. This paper presents FedComm, a benchmarking methodology to quantify the impact of optimized application layer protocols, namely Message Queue Telemetry Transport (MQTT), Advanced Message Queuing Protocol (AMQP), and ZeroMQ Message Transport Protocol (ZMTP), and non-optimized application layer protocols, namely TCP and UDP, on the performance of FL. FedComm measures the overall performance of FL in terms of communication time and accuracy under varying computational stress, network stress, and packet loss rates. Experiments on a lab-based testbed demonstrate that TCP outperforms UDP as a non-optimized application layer protocol, delivering higher accuracy and shorter communication times over 4G and Wi-Fi networks. Optimized application layer protocols such as AMQP, MQTT, and ZMTP outperform the non-optimized protocols under most network conditions, achieving a 2.5x reduction in communication time compared to TCP while maintaining accuracy. The experimental results highlight a number of open research issues for further investigation. FedComm is available for download from https://github.com/qub-blesson/FedComm.
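To illustrate the kind of message exchange FedComm benchmarks, the following is a minimal sketch of an FL client exchanging model weights over MQTT, one of the optimized application layer protocols studied. It uses the paho-mqtt Python library (1.x-style API); the broker address, topic names, client identifier, and pickle-based payload format are illustrative assumptions, not the FedComm implementation.

import pickle

import paho.mqtt.client as mqtt

BROKER_HOST = "fl-server.local"   # assumed broker address
UPDATE_TOPIC = "fl/updates"       # assumed topic for client -> server updates
GLOBAL_TOPIC = "fl/global"        # assumed topic for server -> client models


def on_global_model(client, userdata, message):
    """Deserialize the aggregated global model pushed by the server."""
    global_weights = pickle.loads(message.payload)
    print(f"Received global model with {len(global_weights)} tensors")


client = mqtt.Client(client_id="fl-client-0")  # paho-mqtt 1.x constructor
client.connect(BROKER_HOST, 1883)

# Subscribe to global-model broadcasts; QoS 1 trades extra latency for
# delivery guarantees, one of the knobs a FedComm-style benchmark can vary.
client.subscribe(GLOBAL_TOPIC, qos=1)
client.message_callback_add(GLOBAL_TOPIC, on_global_model)

# After a local training round, publish the updated weights to the server.
local_weights = {"layer0.weight": [0.1, 0.2], "layer0.bias": [0.0]}  # stand-in
client.publish(UPDATE_TOPIC, payload=pickle.dumps(local_weights), qos=1)

client.loop_forever()

Because MQTT routes messages through a broker, clients never open direct connections to the server, which is one reason broker-based protocols can behave differently from raw TCP or UDP sockets under network stress.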