Multi-task learning on the edge: cost-efficiency and theoretical optimality

10/09/2021
by Sami Fakhry, et al.

This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) that is (i) theoretically optimal for Gaussian mixture data and (ii) computationally cheap and scalable. Supporting experiments on synthetic and real benchmark data demonstrate that significant energy gains can be obtained with no loss in performance.
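The paper's exact SPCA formulation is not reproduced in this abstract, but the general idea of supervised PCA can be illustrated with a minimal sketch. The variant below (in the style of HSIC-based supervised PCA, with a delta kernel on class labels) is an assumption for illustration, not the authors' method; the function name `supervised_pca` and all parameters are hypothetical.

```python
import numpy as np

def supervised_pca(X, y, k):
    """Illustrative supervised PCA sketch (not the paper's algorithm).

    X : (n, d) data matrix, y : (n,) class labels, k : target dimension.
    Projects X onto the top-k eigenvectors of X^T H K_y H X, where H is
    the centering matrix and K_y a label kernel, so the projection
    favors directions correlated with the labels.
    """
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    K_y = np.equal.outer(y, y).astype(float)      # delta kernel on labels
    Q = X.T @ H @ K_y @ H @ X                     # (d, d) target matrix
    _, vecs = np.linalg.eigh(Q)                   # eigenvalues ascending
    U = vecs[:, -k:][:, ::-1]                     # top-k eigenvectors
    return X @ U                                  # (n, k) projected data

# Toy usage: project 5-dimensional data down to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = (X[:, 0] > 0).astype(int)
Z = supervised_pca(X, y, 2)
```

Because the projection only requires one eigendecomposition of a d-by-d matrix, it is cheap relative to iterative MTL solvers, which is consistent with the abstract's claim of computational efficiency.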
