Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding

05/23/2022
by Rishabh Bhardwaj, et al.

Prompt Tuning (PT) has been largely successful as a parameter-efficient way of conditioning large-scale pre-trained language models for downstream tasks. Typically, soft prompt tuning learns a fixed set of task-specific continuous vectors, i.e., soft tokens, that remain static across all samples of the task. A fixed prompt, however, may not generalize well to the diverse kinds of inputs the task comprises. With this motivation, we propose a novel way of prompting, Vector-quantized Input-contextualized Prompt Tuning, or VIP. Essentially, VIP focuses on two aspects: i) input-adaptation: input-specific contextualization of the soft tokens; and ii) vector quantization: passing the contextualized tokens through a quantizer, which reduces representation variance by sampling prompts from a compact latent space. Over a wide range of natural language understanding tasks (SuperGLUE, QA, Relation Classification, NER, NLI), our proposed VIP framework outperforms the PT model by a margin of 1.19%. Additionally, on out-of-domain QA and multi-task setups over 4 different tasks spanning over 12 domains, VIP outperforms PT by 0.75%.
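The two components described above can be sketched in a few lines of PyTorch. The following is only an illustration based on the abstract, not the authors' implementation: the module names, hyperparameters (number of prompt tokens, codebook size), the choice of a single transformer encoder layer as the contextualizer, and the VQ-VAE-style nearest-codebook lookup with a straight-through estimator are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIPPrompt(nn.Module):
    """Illustrative sketch of VIP: input-contextualized soft prompts
    followed by vector quantization. Names and details are assumptions."""

    def __init__(self, num_prompt_tokens=20, d_model=768, codebook_size=512):
        super().__init__()
        # Task-level soft prompt, as in vanilla prompt tuning.
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, d_model) * 0.02)
        # Lightweight contextualizer: lets soft tokens attend to the input.
        self.contextualizer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        # Compact latent codebook used for quantization.
        self.codebook = nn.Embedding(codebook_size, d_model)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, d_model) -- embedded task input.
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        n = prompt.size(1)

        # i) Input-adaptation: contextualize the soft tokens jointly with the
        #    input, then keep only the (now input-specific) prompt positions.
        ctx = self.contextualizer(torch.cat([prompt, input_embeds], dim=1))[:, :n]

        # ii) Vector quantization: snap each contextualized token to its nearest
        #     codebook entry, with a straight-through gradient estimator.
        dists = torch.cdist(ctx, self.codebook.weight.unsqueeze(0).expand(batch, -1, -1))
        codes = self.codebook(dists.argmin(dim=-1))
        quantized = ctx + (codes - ctx).detach()

        # Codebook and commitment losses in the VQ-VAE style (an assumption).
        vq_loss = F.mse_loss(codes, ctx.detach()) + 0.25 * F.mse_loss(ctx, codes.detach())

        # Prepend the quantized prompts to the input for the frozen LM.
        return torch.cat([quantized, input_embeds], dim=1), vq_loss
```

In use, the returned sequence would replace the plain input embeddings fed to a frozen pre-trained language model, and `vq_loss` would be added to the task loss so that the codebook and the contextualizer are trained jointly with the soft prompt.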
