Towards Efficient and Elastic Visual Question Answering with Doubly Slimmable Transformer

03/24/2022
by Zhou Yu et al.

Transformer-based approaches have shown great success in visual question answering (VQA). However, they usually require deep and wide models to guarantee good performance, making them difficult to deploy on capacity-restricted platforms. Designing an elastic VQA model that supports adaptive pruning at runtime to meet the efficiency constraints of diverse platforms is a challenging yet valuable task. In this paper, we present the Doubly Slimmable Transformer (DST), a general framework that can be seamlessly integrated into arbitrary Transformer-based VQA models to train a single model once and obtain various slimmed submodels of different widths and depths. Taking two typical Transformer-based VQA approaches, i.e., MCAN and UNITER, as reference models, the obtained slimmable MCAN_DST and UNITER_DST models outperform state-of-the-art methods trained independently on two benchmark datasets. In particular, one slimmed MCAN_DST submodel achieves comparable accuracy on VQA-v2 while being 0.38x smaller in model size and using 0.27x fewer FLOPs than the reference MCAN model. The smallest MCAN_DST submodel has only 9M parameters and 0.16G FLOPs at inference, making it feasible to deploy on edge devices.
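To make the "doubly slimmable" idea concrete, below is a minimal PyTorch sketch of inference-time slimming in two dimensions, assuming (as the abstract suggests) that width slimming keeps a leading slice of each layer's hidden units and depth slimming keeps a prefix of the layer stack. All class names, parameters, and the slicing scheme here are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of width- and depth-slimmable inference.
# Assumption: slimming = slicing weight prefixes (width) and
# truncating the layer stack (depth); not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlimmableLinear(nn.Linear):
    """Linear layer that can run on a sliced subset of its weights."""

    def forward(self, x, in_dim=None, out_dim=None):
        in_dim = in_dim or self.in_features
        out_dim = out_dim or self.out_features
        weight = self.weight[:out_dim, :in_dim]
        bias = self.bias[:out_dim] if self.bias is not None else None
        return F.linear(x, weight, bias)


class SlimmableEncoder(nn.Module):
    """Stack of residual feed-forward blocks, slimmable in width and depth."""

    def __init__(self, dim=512, hidden=2048, num_layers=6):
        super().__init__()
        self.fc1 = nn.ModuleList(SlimmableLinear(dim, hidden) for _ in range(num_layers))
        self.fc2 = nn.ModuleList(SlimmableLinear(hidden, dim) for _ in range(num_layers))
        self.hidden = hidden

    def forward(self, x, width_ratio=1.0, depth=None):
        depth = depth or len(self.fc1)
        h = int(self.hidden * width_ratio)  # width slimming: keep a prefix of hidden units
        for fc1, fc2 in zip(self.fc1[:depth], self.fc2[:depth]):  # depth slimming: skip trailing layers
            x = x + fc2(torch.relu(fc1(x, out_dim=h)), in_dim=h)
        return x


model = SlimmableEncoder()
x = torch.randn(2, 16, 512)
full = model(x)                             # full-capacity submodel
slim = model(x, width_ratio=0.25, depth=3)  # one slimmed submodel, same weights
```

Because every submodel shares one set of weights, a single trained model can be pruned adaptively at runtime to match a target platform's compute budget, which is the deployment scenario the abstract describes.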
