Compressed Deep Networks: Goodbye SVD, Hello Robust Low-Rank Approximation
A common technique for compressing a neural network is to compute the rank-k ℓ_2 approximation A_{k,2} of the matrix A ∈ ℝ^{n×d} that corresponds to a fully connected layer (or embedding layer). Here, d is the number of neurons in the layer, n is the number of neurons in the next layer, and A_{k,2} can be stored in O((n+d)k) memory instead of O(nd). The ℓ_2-approximation minimizes the sum of the entries of A - A_{k,2}, each raised to the power p = 2, over all matrices A_{k,2} ∈ ℝ^{n×d} of rank k. While it can be computed efficiently via SVD, the ℓ_2-approximation is known to be very sensitive to outliers ("far-away" rows). Hence, machine learning practice favors ℓ_1-based methods such as Lasso regression, ℓ_1-regularization, and ℓ_1-SVM. This paper suggests replacing the rank-k ℓ_2 approximation with a rank-k ℓ_p approximation, for p ∈ [1,2]. We provide practical and provable approximation algorithms to compute it for any p ≥ 1, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage. For example, our approach achieves 28% compression of RoBERTa's embedding layer with an additive accuracy drop of only 0.63% (without fine-tuning), averaged over all tasks in GLUE, compared to an 11% drop using the existing ℓ_2-approximation. Open code is provided for reproducing and extending our results.
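To make the compression scheme concrete, the sketch below (not the authors' implementation) shows the standard rank-k ℓ_2 approximation via truncated SVD with factorized O((n+d)k) storage, plus a simple iteratively reweighted heuristic for a robust ℓ_p-style objective with p < 2. The function names (rank_k_l2, rank_k_lp) and the IRLS scheme are illustrative assumptions; the paper's provable algorithms are based on computational geometry, not on this heuristic.

```python
# Minimal sketch, assuming a dense weight matrix A (n x d) of a fully
# connected or embedding layer. Not the paper's provable algorithm.
import numpy as np


def rank_k_l2(A, k):
    """Best rank-k approximation in the l2 (Frobenius) sense via SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Store only the two factors: O((n + d) * k) memory instead of O(n * d).
    L = U[:, :k] * s[:k]   # n x k
    R = Vt[:k, :]          # k x d
    return L, R


def rank_k_lp(A, k, p=1.0, iters=20, eps=1e-6):
    """Heuristic rank-k approximation for a robust row-wise l_p objective
    (sum over rows of ||a_i - b_i||_2^p), computed by iteratively
    reweighted truncated SVD. Outlier ("far-away") rows get down-weighted."""
    L, R = rank_k_l2(A, k)                               # l2 initialization
    for _ in range(iters):
        row_err = np.linalg.norm(A - L @ R, axis=1) + eps  # per-row residual
        w = np.sqrt(row_err ** (p - 2))                    # IRLS row weights
        # Weighted-Frobenius problem: SVD of the row-scaled matrix.
        Lw, R = rank_k_l2(w[:, None] * A, k)
        L = Lw / w[:, None]
    return L, R


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((768, 512))
    A[::50] *= 30.0  # inject a few outlier rows
    for name, (L, R) in {"l2": rank_k_l2(A, 64),
                         "lp (p=1)": rank_k_lp(A, 64, p=1.0)}.items():
        err = np.abs(A - L @ R).sum()
        print(f"{name}: entrywise l1 error = {err:.1f}")
```

In a compressed network, the layer's single n x d weight matrix would be replaced by the two factors L and R, so the forward pass computes x @ R.T @ L.T instead of x @ A.T.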