Learning from Multiple Complementary Labels

12/30/2019
by Lei Feng, et al.

Complementary-label learning is a weakly supervised learning framework in which each training example is supplied with a complementary label, which specifies only a class that the example does not belong to. Although a few works have shown that an unbiased estimator of the original classification risk can be obtained from complementarily labeled data alone, they are all restricted to the case where each example is associated with exactly one complementary label. Learning from multiple complementary labels simultaneously is more promising, since the supervision becomes richer as more complementary labels are provided. However, whether an unbiased risk estimator exists in this setting has so far remained unknown. In this paper, we give an affirmative answer by deriving the first unbiased risk estimator for learning from multiple complementary labels. We further analyze the estimation error bound of the proposed approach and show that it achieves the optimal parametric convergence rate. Finally, we experimentally demonstrate the effectiveness of the proposed approach.
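To make the setting concrete, below is a minimal sketch of a naive surrogate loss for multiple complementary labels: it treats every class outside the complementary set as a candidate and averages the ordinary cross-entropy over those candidates, assuming complementary labels are drawn uniformly. This is an illustrative baseline only, not the unbiased risk estimator derived in the paper; the function name multi_complementary_loss and the toy data are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_complementary_loss(logits, comp_label_sets, num_classes):
    """Naive surrogate: average cross-entropy over candidate classes.

    logits:          (n, K) array of classifier scores.
    comp_label_sets: list of sets; comp_label_sets[i] holds the classes that
                     example i is known NOT to belong to.
    Returns the mean loss over the batch. Note: this heuristic is NOT the
    unbiased estimator from the paper; it is only a simple baseline.
    """
    probs = softmax(logits)
    losses = []
    for p, comp in zip(probs, comp_label_sets):
        # Candidate classes = everything the complementary labels rule in.
        candidates = [k for k in range(num_classes) if k not in comp]
        # Uniform average of the per-class cross-entropy over candidates.
        ce = -np.log(np.clip(p[candidates], 1e-12, None))
        losses.append(ce.mean())
    return float(np.mean(losses))

# Toy usage: 3 classes; the two examples carry 1 and 2 complementary labels.
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.5, 0.3]])
comp_sets = [{2}, {0, 2}]   # example 0 is not class 2; example 1 is not 0 or 2
print(multi_complementary_loss(logits, comp_sets, num_classes=3))
```

The paper's contribution is precisely to replace this kind of heuristic with a loss whose expectation provably equals the original classification risk, together with an estimation error bound for minimizing it.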
