Energy-Efficient Resource Management for Federated Edge Learning with CPU-GPU Heterogeneous Computing

07/14/2020
by Qunsong Zeng, et al.

Edge machine learning involves the deployment of learning algorithms at the network edge to leverage massive distributed data and computation resources for training artificial intelligence (AI) models. Among existing frameworks, federated edge learning (FEEL) is popular for its preservation of data privacy. FEEL coordinates global model training at an edge server with local model training at edge devices connected by wireless links. This work contributes to the energy-efficient implementation of FEEL in wireless networks by designing joint computation-and-communication resource management (C^2RM). The design targets the state-of-the-art heterogeneous mobile architecture, where parallel computing using both a CPU and a GPU, called heterogeneous computing, can significantly improve both performance and energy efficiency. To minimize the sum energy consumption of devices, we propose a novel C^2RM framework featuring multi-dimensional control, including bandwidth allocation, CPU-GPU workload partitioning and speed scaling at each device, and C^2 time division for each link. The key component of the framework is a set of equilibria in energy rates with respect to different control variables, proved to exist among devices or between the processing units at each device. These results are applied to design efficient algorithms that compute the optimal C^2RM policies faster than standard optimization tools. Based on the equilibria, we further design energy-efficient schemes for device scheduling and greedy spectrum sharing that scavenge "spectrum holes" arising from heterogeneous C^2 time divisions across devices. Using a real dataset, experiments are conducted to demonstrate the effectiveness of C^2RM in improving the energy efficiency of a FEEL system.
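
To make the CPU-GPU workload-partitioning and speed-scaling idea concrete, below is a minimal, illustrative sketch, not the paper's exact formulation. It assumes the common dynamic-power model in which energy per cycle scales as kappa * f^2, and that each processing unit runs at the minimum speed meeting a per-round deadline. All symbols (rho, c_cpu, c_gpu, kappa_cpu, kappa_gpu, deadline) are illustrative assumptions; under this model, the optimal split equalizes the marginal energy rates of the two units, echoing the equilibrium structure described in the abstract.

```python
# Illustrative sketch (assumed model, not the paper's exact formulation):
# CPU-GPU workload partitioning for one device under a per-round deadline.
from dataclasses import dataclass


@dataclass
class Device:
    n_samples: int      # local training samples processed per round
    c_cpu: float        # CPU cycles required per sample (assumed)
    c_gpu: float        # GPU cycles required per sample (assumed)
    kappa_cpu: float    # CPU effective switched capacitance (assumed)
    kappa_gpu: float    # GPU effective switched capacitance (assumed)


def round_energy(dev: Device, rho: float, deadline: float) -> float:
    """Energy to finish one round within `deadline` when a fraction `rho`
    of samples goes to the GPU and the rest to the CPU, each unit clocked
    at the minimum frequency that meets the deadline (speed scaling)."""
    f_cpu = dev.c_cpu * (1 - rho) * dev.n_samples / deadline
    f_gpu = dev.c_gpu * rho * dev.n_samples / deadline
    e_cpu = dev.kappa_cpu * f_cpu ** 2 * dev.c_cpu * (1 - rho) * dev.n_samples
    e_gpu = dev.kappa_gpu * f_gpu ** 2 * dev.c_gpu * rho * dev.n_samples
    return e_cpu + e_gpu


def optimal_split(dev: Device) -> float:
    """Under the assumed cubic energy model, setting the marginal energy
    rates of CPU and GPU equal (an 'equilibrium') yields the closed form
    rho / (1 - rho) = sqrt(a_cpu / a_gpu)."""
    a_cpu = dev.kappa_cpu * dev.c_cpu ** 3
    a_gpu = dev.kappa_gpu * dev.c_gpu ** 3
    return a_cpu ** 0.5 / (a_cpu ** 0.5 + a_gpu ** 0.5)


if __name__ == "__main__":
    # Hypothetical parameter values for illustration only.
    dev = Device(n_samples=5000, c_cpu=2e4, c_gpu=4e3,
                 kappa_cpu=1e-28, kappa_gpu=2e-28)
    rho_star = optimal_split(dev)
    print(f"GPU share rho* = {rho_star:.3f}, "
          f"energy = {round_energy(dev, rho_star, deadline=1.0):.4e} J")
```

In this toy model, a larger GPU share is optimal whenever CPU processing is comparatively expensive in cycles or capacitance, which is the same trade-off the paper's multi-dimensional control must balance jointly with bandwidth allocation and C^2 time division.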
