Dynamic-TinyBERT: Boost TinyBERT's Inference Efficiency by Dynamic Sequence Length

11/18/2021
by Shira Guskin, et al.

Limited computational budgets often prevent transformers from being used in production and from having their high accuracy utilized. TinyBERT addresses computational efficiency by self-distilling BERT into a smaller transformer representation with fewer layers and smaller internal embeddings. However, TinyBERT's performance drops when we reduce the number of layers by 50%, and drops even more abruptly when we reduce the number of layers by 75% on advanced NLP tasks such as span question answering. Additionally, a separate model must be trained for each inference scenario with its distinct computational budget. In this work we present Dynamic-TinyBERT, a TinyBERT model that utilizes sequence-length reduction and hyperparameter optimization for enhanced inference efficiency under any computational budget. Dynamic-TinyBERT is trained only once, performs on-par with BERT, and achieves an accuracy-speedup trade-off superior to other efficient approaches (up to 3.3x speedup with <1% loss-drop). The code to reproduce our work will be open-sourced.
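The central idea, trading sequence length for speed at inference time, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only shows how a per-budget length cap could be applied by truncating inputs before running a distilled question-answering model via the Hugging Face transformers API. The checkpoint name and the LENGTH_BUDGET value are illustrative stand-ins, not values from the paper.

```python
# Minimal sketch (not the paper's code): apply a sequence-length budget at
# inference time to a distilled QA model. Checkpoint and budget are
# placeholders chosen for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

MODEL_NAME = "distilbert-base-uncased-distilled-squad"  # stand-in distilled QA model
LENGTH_BUDGET = 192  # hypothetical per-budget maximum sequence length

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)
model.eval()

question = "What reduces inference cost?"
context = (
    "Shortening the input sequence reduces the number of tokens processed "
    "by every transformer layer, which speeds up inference."
)

# Truncating to the budget trades a little accuracy for speed: fewer tokens
# flow through each self-attention and feed-forward layer.
inputs = tokenizer(
    question,
    context,
    truncation="only_second",   # keep the question intact, trim the context
    max_length=LENGTH_BUDGET,
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```

A fixed truncation like this only approximates the approach: the paper's method adapts sequence length dynamically to meet a given computational budget, and tunes that trade-off with hyperparameter optimization.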
