Knowledge distillation is a popular approach for enhancing the performan...
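To make the idea concrete, here is a minimal sketch of the standard soft-target distillation loss (temperature-scaled KL divergence between teacher and student outputs), assuming PyTorch; the function name and default temperature are illustrative choices of ours, not details from the paper:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Temperature-scaled KL divergence between teacher and student outputs."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean is the correct reduction for KL divergence; the T^2 factor
    # keeps gradient magnitudes comparable across temperature settings.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```

In practice this term is usually mixed with the ordinary cross-entropy loss on the true labels, with a weight balancing the two.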
In this work, we study distributed optimization algorithms that reduce t...
Despite their high computation and communication costs, Newton-type meth...
Recent advances in distributed optimization have shown that Newton-type ...
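To ground the cost claim, recall the classical Newton step in the standard distributed finite-sum setting (our notation, not taken from these abstracts):

```latex
% n nodes, node i holds the local objective f_i, model dimension d:
f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x), \qquad
x^{k+1} = x^{k} - \bigl[\nabla^2 f(x^{k})\bigr]^{-1}\,\nabla f(x^{k}).
```

Aggregating the Hessian \(\nabla^2 f(x^k) = \frac{1}{n}\sum_i \nabla^2 f_i(x^k)\) at a server asks each node to communicate a d x d matrix, i.e. O(d^2) numbers per round versus O(d) for a gradient, which is precisely the cost that compressed Newton-type methods aim to cut.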
Distributed machine learning has become an indispensable tool for traini...
Inspired by the recent work of Islamov et al. (2021), we propose a family of ...
Large-scale distributed optimization has become the default tool for the...
Communicating information, like gradient vectors, between computing node...
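A back-of-the-envelope count shows why this is a bottleneck; the dimension below is an assumed example, not a figure from the text:

```latex
% One dense gradient sent as 32-bit floats:
\text{bits per node per round} = 32d, \qquad
d = 10^{8} \;\Rightarrow\; 32 \times 10^{8}\ \text{bits} \approx 400\ \text{MB}.
```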
In the last few years, various communication compression techniques have...
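One of the simplest such techniques is Top-K sparsification: keep only the k largest-magnitude coordinates of the vector and transmit them as (index, value) pairs. A minimal numpy sketch, with an interface of our own devising:

```python
import numpy as np

def topk_compress(g: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of a gradient vector g; zero the rest."""
    idx = np.argpartition(np.abs(g), -k)[-k:]  # indices of the k largest |g_i|
    out = np.zeros_like(g)
    out[idx] = g[idx]
    # Only the k (index, value) pairs need to cross the network,
    # shrinking the payload from d floats to k indices plus k floats.
    return out
```

Top-K is a biased compressor, which is why it is typically paired with error feedback; unbiased alternatives such as Rand-K (k coordinates chosen uniformly at random, rescaled by d/k) also appear throughout this literature.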
In order to mitigate the high communication cost in distributed and fede...