Rethinking and Reweighting the Univariate Losses for Multi-Label Ranking: Consistency and Generalization

05/10/2021
by Guoqiang Wu, et al.

(Partial) ranking loss is a commonly used evaluation measure for multi-label classification, and it is usually optimized through convex surrogates for computational efficiency. Prior theoretical work on multi-label ranking focuses mainly on (Fisher) consistency analyses. However, there is a gap between existing theory and practice: some pairwise losses lead to promising performance despite lacking consistency, while some univariate losses are consistent yet show no clear advantage in practice. In this paper, we attempt to fill this gap through a systematic study from two complementary perspectives: consistency and generalization error bounds of learning algorithms. Our results show that learning algorithms with the consistent univariate loss have an error bound of O(c), where c is the number of labels, whereas algorithms with the inconsistent pairwise loss enjoy a bound of O(√c), as shown in prior work. This explains why the latter can outperform the former in practice. Moreover, we present a learning algorithm based on an inconsistent reweighted univariate loss that enjoys an error bound of O(√c) for promising performance while retaining the computational efficiency of univariate losses. Finally, experimental results validate our theoretical analyses.
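To make the contrast between the two surrogate families concrete, below is a minimal NumPy sketch. The logistic base loss, the function names, and the per-group normalization by the numbers of relevant and irrelevant labels are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def logistic(z):
    """Logistic base loss log(1 + exp(-z)), a convex surrogate for the 0/1 step."""
    return np.logaddexp(0.0, -z)

def pairwise_surrogate(scores, y):
    """Pairwise surrogate for the (partial) ranking loss: average penalty over
    all (relevant, irrelevant) label pairs -- O(c^2) terms per instance."""
    pos, neg = scores[y == 1], scores[y == 0]
    margins = pos[:, None] - neg[None, :]  # s+ x s- matrix of score margins
    return logistic(margins).mean()

def reweighted_univariate_surrogate(scores, y):
    """Reweighted univariate surrogate: each label is penalized once, with each
    group normalized by its size -- O(c) terms per instance. The normalization
    here is an illustrative choice, not necessarily the paper's exact weighting.
    Assumes at least one relevant and one irrelevant label per instance."""
    pos, neg = scores[y == 1], scores[y == 0]
    return logistic(pos).mean() + logistic(-neg).mean()

# Toy usage: c = 5 labels, two of them relevant.
scores = np.array([2.1, -0.3, 0.8, -1.5, 0.2])
y = np.array([1, 0, 1, 0, 0])
print(pairwise_surrogate(scores, y), reweighted_univariate_surrogate(scores, y))
```

The sketch highlights the computational point in the abstract: the pairwise surrogate enumerates every relevant/irrelevant label pair, while the univariate form touches each label only once, which is why reweighting a univariate loss can recover favorable performance without the pairwise cost.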
