Timely Asynchronous Hierarchical Federated Learning: Age of Convergence
We consider an asynchronous hierarchical federated learning (AHFL) setting with a client-edge-cloud framework. The clients exchange their trained parameters with their corresponding edge servers, which update the locally aggregated model. This model is then transmitted to all the clients in the local cluster. The edge servers communicate with the central cloud server for global model aggregation. The goal of each client is to converge to the global model while maintaining timeliness, i.e., achieving an optimal training iteration time. We investigate the convergence criteria for such a system with dense clusters. Our analysis shows that for a system of n clients with fixed average timeliness, convergence in finite time is probabilistically guaranteed if the clients are divided into O(1) clusters, that is, if the system is built as a sparse set of edge servers, each with a dense client base.
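To make the two-tier aggregation concrete, here is a minimal Python sketch of the client-edge-cloud pattern the abstract describes, assuming simple parameter averaging at both tiers. The paper's setting is asynchronous; this sketch shows only the synchronous skeleton of the hierarchy, and all names (local_update, edge_aggregate, cloud_aggregate) are illustrative, not taken from the paper.

```python
# Hypothetical sketch of hierarchical federated averaging: clients train
# locally, edge servers aggregate within their clusters, and the cloud
# aggregates across edge servers. Averaging is an assumption here; the
# paper's asynchronous update rule is not reproduced.
import numpy as np

rng = np.random.default_rng(0)

def local_update(model, lr=0.1):
    # Stand-in for one round of client training: a noisy gradient step.
    grad = rng.normal(size=model.shape)
    return model - lr * grad

def edge_aggregate(client_models):
    # Edge server averages the models of the clients in its cluster.
    return np.mean(client_models, axis=0)

def cloud_aggregate(edge_models):
    # Cloud server averages the locally aggregated edge models.
    return np.mean(edge_models, axis=0)

# A sparse set of edge servers, each with a dense client base, matching
# the O(1)-clusters regime highlighted in the abstract.
n_clusters, clients_per_cluster, dim = 2, 50, 4
global_model = np.zeros(dim)

for _ in range(10):
    edge_models = []
    for _ in range(n_clusters):
        # Clients start from the current global model and train locally.
        client_models = [local_update(global_model.copy())
                         for _ in range(clients_per_cluster)]
        edge_models.append(edge_aggregate(client_models))
    global_model = cloud_aggregate(edge_models)
```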