Understanding Hard Negatives in Noise Contrastive Estimation

04/13/2021
by Wenzheng Zhang et al.

The choice of negative examples is important in noise contrastive estimation. Recent work finds that hard negatives (the highest-scoring incorrect examples under the model) are effective in practice, but they are used without a formal justification. We develop analytical tools to understand the role of hard negatives. Specifically, we view the contrastive loss as a biased estimator of the gradient of the cross-entropy loss, and show both theoretically and empirically that setting the negative distribution to be the model distribution results in bias reduction. We also derive a general form of the score function that unifies various architectures used in text retrieval. By combining hard negatives with appropriate score functions, we obtain strong results on the challenging task of zero-shot entity linking.
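To make the idea concrete, here is a minimal PyTorch sketch (not the authors' code) of contrastive training with hard negatives: the negatives for each example are the highest-scoring incorrect candidates under the current model, which approximates drawing negatives from the model distribution. The dot-product `score` function, the function names, and the toy dimensions are all illustrative assumptions; the paper derives a more general family of score functions.

```python
# Illustrative sketch of hard-negative contrastive estimation.
# Assumptions: dot-product scoring, precomputed candidate embeddings,
# and hypothetical helper names (score, contrastive_loss).
import torch
import torch.nn.functional as F

def score(query_emb, cand_embs):
    # Simple dot-product score; the paper's general form subsumes
    # other retrieval architectures.
    return cand_embs @ query_emb

def contrastive_loss(query_emb, cand_embs, gold_idx, num_negatives=4):
    """Cross-entropy over the gold candidate and its hardest negatives."""
    with torch.no_grad():
        scores = score(query_emb, cand_embs)
        scores[gold_idx] = float("-inf")               # exclude the gold example
        hard_idx = scores.topk(num_negatives).indices  # highest-scoring incorrect
    keep = torch.cat([torch.tensor([gold_idx]), hard_idx])
    logits = score(query_emb, cand_embs[keep])
    # After reindexing, the gold candidate sits at position 0.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Toy usage: one 16-dim query, 10 candidates, gold candidate at index 3.
q = torch.randn(16, requires_grad=True)
cands = torch.randn(10, 16)
loss = contrastive_loss(q, cands, gold_idx=3)
loss.backward()
```

Note that the mining step runs under `torch.no_grad()`: the hard negatives are selected with the current model but gradients flow only through the final scoring pass, mirroring the view of the contrastive loss as a (biased) estimator of the cross-entropy gradient.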
