One time is not enough: iterative tensor decomposition for neural network compression

03/24/2019
by Julia Gusak, et al.

Low-rank tensor approximation is a promising technique for compressing deep neural networks. We propose a simple and efficient iterative approach that alternates low-rank factorization with smart rank selection and fine-tuning. We demonstrate the efficiency of our method compared to non-iterative ones: our approach improves the compression rate while maintaining accuracy across a variety of tasks.
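The iterative loop described above (factorize, select ranks, fine-tune, repeat) can be sketched for a single weight matrix using truncated SVD. This is a hedged illustration only, not the paper's actual method: the function names `select_rank` and `iterative_compress`, the energy-based rank heuristic, and the `fine_tune` callback are all assumptions introduced here for clarity.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Truncated-SVD factorization W ~= U @ V of a given rank."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

def select_rank(W, energy=0.9):
    """Smallest rank retaining `energy` of the spectral energy
    (a simple stand-in for the paper's rank-selection step)."""
    s = np.linalg.svd(W, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

def iterative_compress(W, n_iters=3, energy=0.9, fine_tune=None):
    """Alternate low-rank factorization, rank selection, and
    (optional) fine-tuning, as the abstract describes."""
    for _ in range(n_iters):
        rank = select_rank(W, energy)
        U, V = low_rank_factorize(W, rank)
        W = U @ V                    # compressed reconstruction
        if fine_tune is not None:
            W = fine_tune(W)         # e.g. a few SGD steps on the task loss
    return W
```

In practice the same idea is applied layer-wise to convolutional and fully connected layers, with tensor decompositions replacing the plain SVD.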
