Language Models in the Loop: Incorporating Prompting into Weak Supervision

05/04/2022
by Ryan Smith, et al.

We propose a new strategy for applying large pre-trained language models to novel tasks when labeled training data is limited. Rather than apply the model in a typical zero-shot or few-shot fashion, we treat the model as the basis for labeling functions in a weak supervision framework. To create a classifier, we first prompt the model to answer multiple distinct queries about an example and define how the possible responses should be mapped to votes for labels and abstentions. We then denoise these noisy label sources using the Snorkel system and train an end classifier with the resulting training data. Our experimental evaluation shows that prompting large language models within a weak supervision framework can provide significant gains in accuracy. On the WRENCH weak supervision benchmark, this approach can significantly improve over zero-shot performance, an average 19.5% improvement. The approach also produces classifiers with comparable or superior accuracy to those trained from hand-engineered rules.
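The pipeline described in the abstract, prompted labeling functions whose votes are denoised by Snorkel before training an end model, can be sketched in a few lines. The snippet below is a minimal illustration and not the authors' released code: it assumes a toy spam-detection task, substitutes a small instruction-tuned model (google/flan-t5-small) for the large language model, and the prompts, the ask() helper, and the example texts are all hypothetical stand-ins.

```python
# Minimal sketch: prompted labeling functions + Snorkel label model.
# Requires the `snorkel` and `transformers` packages.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel
from transformers import pipeline

ABSTAIN, NOT_SPAM, SPAM = -1, 0, 1

# Any instruction-tuned model could stand in for the large LM; a small
# FLAN-T5 checkpoint is used here only so the sketch runs on modest hardware.
lm = pipeline("text2text-generation", model="google/flan-t5-small")

def ask(prompt: str) -> str:
    """Query the language model and return its lower-cased text answer."""
    return lm(prompt, max_new_tokens=5)[0]["generated_text"].strip().lower()

# Each labeling function wraps one distinct prompt and maps the model's
# free-text response onto a label vote or an abstention.
@labeling_function()
def lf_asks_for_money(x):
    ans = ask(f"Does this message ask the reader to send money? Yes or no?\n{x.text}")
    return SPAM if ans.startswith("yes") else ABSTAIN

@labeling_function()
def lf_personal_message(x):
    ans = ask(f"Is this a personal message to a friend? Yes or no?\n{x.text}")
    return NOT_SPAM if ans.startswith("yes") else ABSTAIN

df_train = pd.DataFrame({"text": [
    "Send $500 to claim your prize now!",
    "Hey, are we still on for lunch tomorrow?",
]})

# Apply the prompted labeling functions, then denoise their noisy votes
# with Snorkel's label model to obtain probabilistic training labels.
applier = PandasLFApplier([lf_asks_for_money, lf_personal_message])
L_train = applier.apply(df_train)

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=200, seed=0)
probs = label_model.predict_proba(L_train)  # soft labels for the end classifier
```

In the paper's setting, the resulting probabilistic labels would then be used to train a separate end classifier, so the prompted language model is only needed at labeling time, not at inference time.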
