On Compressing U-net Using Knowledge Distillation

12/01/2018
by Karttikeya Mangalam, et al.

We study the use of knowledge distillation to compress the U-net architecture. We show that, while standard distillation is not sufficient to reliably train a compressed U-net, introducing other regularization methods, such as batch normalization and class re-weighting, into knowledge distillation significantly improves the training process. This allows us to compress a U-net by over 1000x, i.e., to 0.1% of its original number of parameters, with a negligible decrease in performance.
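To make the training objective concrete, here is a minimal sketch of a distillation loss for a compressed (student) U-net that combines soft teacher targets with class-re-weighted cross-entropy on the ground-truth labels. It assumes a PyTorch-style setup; the function name, the temperature, and the weighting coefficient alpha are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      class_weights, temperature=2.0, alpha=0.5):
    """Soft-target distillation plus class-re-weighted cross-entropy.

    student_logits, teacher_logits: (N, C, H, W) segmentation logits.
    targets: (N, H, W) integer class labels.
    class_weights: (C,) tensor re-weighting the hard-label loss.
    """
    # Soften the (frozen) teacher's predictions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)

    # KL divergence between teacher and student distributions,
    # scaled by T^2 as is conventional in distillation.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Class-re-weighted cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, targets, weight=class_weights)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

The student here would be a U-net with far fewer channels per layer than the teacher; batch normalization in the student and the class weights both act as additional regularizers on top of the distillation signal.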
