On Orderings of Probability Vectors and Unsupervised Performance Estimation

06/16/2023
by Muhammad Maaz, et al.

Unsupervised performance estimation, i.e., evaluating how well models perform on unlabeled data, is a difficult task. Recently, Garg et al. [2022] proposed a method that performs much better than previous approaches. Their method relies on a score function, satisfying certain properties, that maps the probability vectors output by the classifier to the reals, but which score function is best remains an open problem. We explore this problem by first showing that their method fundamentally depends only on the ordering induced by the score function: under monotone transformations of the score function, their method yields the same estimate. Next, we show that in the binary classification setting, nearly all common score functions - the L^∞ norm; the L^2 norm; negative entropy; and the L^2, L^1, and Jensen-Shannon distances to the uniform vector - induce the same ordering over probability vectors. However, this does not hold in higher-dimensional settings. We conduct numerous experiments on well-known NLP data sets and rigorously explore the performance of different score functions. We conclude that the L^∞ norm is the most appropriate.
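The equivalence claim above is easy to check numerically. Below is a minimal sketch (not code from the paper, and the exact score-function definitions are our assumptions) that computes the six score functions on binary probability vectors and confirms they all sort the vectors identically, then exhibits a 3-class pair where the L^∞ and L^2 orderings disagree:

```python
import math

# Six common score functions mapping a probability vector to the reals.
def linf(p):          # L^infinity norm: the maximum entry
    return max(p)

def l2(p):            # L^2 norm
    return math.sqrt(sum(x * x for x in p))

def neg_entropy(p):   # negative Shannon entropy (0 log 0 := 0)
    return sum(x * math.log(x) for x in p if x > 0)

def l2_to_uniform(p): # L^2 distance to the uniform vector
    k = len(p)
    return math.sqrt(sum((x - 1 / k) ** 2 for x in p))

def l1_to_uniform(p): # L^1 distance to the uniform vector
    k = len(p)
    return sum(abs(x - 1 / k) for x in p)

def js_to_uniform(p): # Jensen-Shannon divergence to the uniform vector
    k = len(p)
    u = [1 / k] * k
    m = [(a + b) / 2 for a, b in zip(p, u)]
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(u, m)

scores = [linf, l2, neg_entropy, l2_to_uniform, l1_to_uniform, js_to_uniform]

# Binary case: every score is monotone in max(p), so all orderings coincide.
binary = [(m, 1 - m) for m in (0.5, 0.55, 0.6, 0.7, 0.8, 0.95, 0.99)]
orderings = []
for s in scores:
    vals = [s(p) for p in binary]
    orderings.append(sorted(range(len(binary)), key=vals.__getitem__))
assert all(o == orderings[0] for o in orderings)

# k = 3: the equivalence breaks. L^inf ranks q above p, but L^2 ranks p above q.
p, q = (0.5, 0.5, 0.0), (0.6, 0.2, 0.2)
assert linf(p) < linf(q)   # 0.5 < 0.6
assert l2(p) > l2(q)       # sqrt(0.5) ≈ 0.707 > sqrt(0.44) ≈ 0.663
print("binary orderings agree; 3-class orderings disagree")
```

The 3-class pair illustrates why the choice of score function matters beyond binary classification: a spread-out runner-up mass lowers the L^2 norm without affecting the top entry.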
