Accelerating Training using Tensor Decomposition

09/10/2019
by Mostafa Elhoushi, et al.

Tensor decomposition is a well-known approach to reducing the latency and parameter count of a pre-trained model. In this paper, however, we propose using tensor decomposition to reduce the time needed to train a model from scratch. In our approach, we train the model from scratch (i.e., with randomly initialized weights) in its original architecture for a small number of epochs; the model is then decomposed, and training continues on the decomposed model until the end. Our approach includes an optional step to convert the decomposed architecture back to the original architecture. We present results of applying this approach to both the CIFAR10 and ImageNet datasets, and show that it can yield up to 2x speedup in training time with an accuracy drop of up to 1.5%. This training acceleration approach is independent of hardware and is expected to achieve similar speedups on both CPU and GPU platforms.
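The abstract describes the schedule only at a high level. The PyTorch sketch below is one way to realize the train-decompose-continue pattern, using a truncated-SVD factorization of a fully connected layer as a stand-in for the paper's decomposition method; the layer sizes, rank, and epoch counts are illustrative assumptions, not the authors' settings.

    import torch
    import torch.nn as nn

    def decompose_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
        # Truncated SVD of the weight matrix W (out x in):
        # W ~= U[:, :r] @ diag(S[:r]) @ Vh[:r, :], so one Linear layer
        # becomes Linear(in, r, bias=False) followed by Linear(r, out).
        U, S, Vh = torch.linalg.svd(layer.weight.data, full_matrices=False)
        first = nn.Linear(layer.in_features, rank, bias=False)
        second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
        first.weight.data = torch.diag(S[:rank]) @ Vh[:rank, :]
        second.weight.data = U[:, :rank]
        if layer.bias is not None:
            second.bias.data = layer.bias.data.clone()
        return nn.Sequential(first, second)

    def train(model: nn.Module, epochs: int, lr: float = 0.1) -> None:
        # Stand-in training loop on random data; a real run would use a
        # CIFAR10 or ImageNet loader instead.
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))

    train(model, epochs=5)                          # warm-up: original architecture
    model[0] = decompose_linear(model[0], rank=64)  # decompose mid-training
    train(model, epochs=45)                         # continue on the decomposed model

In this sketch, the optional final step of the paper would amount to inverting the replacement: multiplying the two factor matrices back into a single weight matrix restores the original architecture.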
