Applying SoftTriple Loss for Supervised Language Model Fine Tuning

12/15/2021
by Witold Sosnowski, et al.

We introduce a new loss function, TripleEntropy, to improve classification performance when fine-tuning general-knowledge pre-trained language models; it is based on cross-entropy and SoftTriple loss. This loss function can improve the robust RoBERTa baseline model fine-tuned with cross-entropy loss by about 0.02%–2.29%. The fewer samples in the training dataset, the higher the gain: for a small-sized dataset the improvement is 0.78%, while for an extra-large dataset it is 0.04%.
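
The abstract does not spell out how the cross-entropy and SoftTriple terms are combined. Below is a minimal PyTorch sketch of one plausible formulation: a standard SoftTriple loss (Qian et al., 2019) applied to sentence embeddings, added to cross-entropy on the classifier logits. The weighting `alpha`, the hyperparameter values, and the names `SoftTripleLoss` and `triple_entropy` are illustrative assumptions, not the paper's exact method; the center-regularization term of the original SoftTriple loss is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftTripleLoss(nn.Module):
    """SoftTriple loss: each class is represented by k learnable centers."""
    def __init__(self, dim, n_classes, k=10, la=20.0, gamma=0.1, margin=0.01):
        super().__init__()
        self.n_classes, self.k = n_classes, k
        self.la, self.gamma, self.margin = la, gamma, margin
        # Learnable class centers, one column per center: shape (dim, n_classes * k).
        self.centers = nn.Parameter(torch.randn(dim, n_classes * k))

    def forward(self, embeddings, labels):
        emb = F.normalize(embeddings, dim=1)          # (B, dim)
        centers = F.normalize(self.centers, dim=0)    # (dim, C*K)
        sim = emb @ centers                           # (B, C*K) cosine similarities
        sim = sim.view(-1, self.n_classes, self.k)    # (B, C, K)
        # Soft assignment of each example over the k centers of every class.
        prob = F.softmax(sim / self.gamma, dim=2)
        class_sim = (prob * sim).sum(dim=2)           # (B, C) relaxed class similarity
        # Subtract a small margin from the similarity to the ground-truth class.
        delta = torch.zeros_like(class_sim)
        delta[torch.arange(class_sim.size(0)), labels] = self.margin
        return F.cross_entropy(self.la * (class_sim - delta), labels)

def triple_entropy(logits, embeddings, labels, soft_triple, alpha=0.5):
    """Hypothetical combination: weighted sum of cross-entropy and SoftTriple terms."""
    ce = F.cross_entropy(logits, labels)
    st = soft_triple(embeddings, labels)
    return alpha * ce + (1.0 - alpha) * st
```

In a fine-tuning loop, `logits` would come from the classification head and `embeddings` from the encoder's pooled (e.g. [CLS]) representation of the same batch, so both terms are driven by a single forward pass.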
