Classification of sparse binary vectors

03/28/2019
by Evgenii Chzhen, et al.

In this work we consider the problem of multi-label classification, where each instance is associated with a binary vector of labels. Our focus is on finding a classifier that minimizes false negative discoveries under constraints. Depending on the set of constraints considered, we propose plug-in methods and provide a non-asymptotic analysis under margin-type assumptions. Specifically, we analyze two particular examples of constraints that promote sparse predictions: in the first, we focus on classifiers with ℓ_0-type constraints, and in the second, we address classifiers with bounded false positive discoveries. The two formulations lead to different Bayes rules and, thus, to different plug-in approaches. The first scenario is the popular multi-label top-K procedure: a label is predicted to be relevant if its score is among the K largest. For this case, we provide an excess risk bound that achieves so-called 'fast' rates of convergence under a generalization of the margin assumption to this setting. The second scenario differs significantly from the top-K setting, as the constraints are distribution-dependent. We demonstrate that in this scenario almost sure control of false positive discoveries is impossible without extra assumptions. To alleviate this issue we propose a sufficient condition for consistent estimation and provide a non-asymptotic upper bound.
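The top-K plug-in rule mentioned in the abstract can be sketched as follows; this is a minimal NumPy illustration, not code from the paper, and the function name and score values are hypothetical. The idea is that a score estimator (e.g. estimated conditional label probabilities) is plugged in, and exactly the K labels with the largest scores are declared relevant:

```python
import numpy as np

def top_k_predict(scores, k):
    """Top-K plug-in rule (illustrative sketch): mark a label as
    relevant iff its estimated score is among the K largest."""
    scores = np.asarray(scores)
    pred = np.zeros(scores.shape, dtype=int)
    # argpartition places the indices of the K largest scores last
    top = np.argpartition(scores, -k)[-k:]
    pred[top] = 1
    return pred

# Example: 5 labels with estimated relevance scores, K = 2
print(top_k_predict([0.1, 0.8, 0.3, 0.7, 0.05], 2))  # -> [0 1 0 1 0]
```

Note that this rule always predicts exactly K relevant labels, which is the ℓ_0-type sparsity constraint discussed in the first scenario of the abstract.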
