k-meansNet: When k-means Meets Differentiable Programming

08/22/2018
by Xi Peng, et al.

In this paper, we study how clustering can benefit from differentiable programming, whose basic idea is to treat the neural network as a language rather than a machine learning method. To this end, we recast vanilla k-means as a novel feedforward neural network. Our contribution is two-fold. On the one hand, the proposed k-meansNet is a neural network implementation of vanilla k-means that enjoys four highly desirable properties: robustness to initialization, fast inference, the ability to handle newly arriving data, and provable convergence. On the other hand, this work may provide novel insights into differentiable programming. More specifically, most existing differentiable programming works unroll an optimizer as a recurrent neural network; that is, the neural network is employed to solve an existing optimization problem. In contrast, we reformulate the objective function of k-means as a feedforward neural network; that is, we employ the neural network to describe the problem itself. In this way, we push the boundary of differentiable programming by shifting the role of the neural network from an alternative optimization approach to the problem formulation itself. Extensive experiments show that our method achieves promising performance compared with 12 clustering methods on several challenging datasets.
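The abstract does not spell out the construction, but a common way to express the k-means objective as a feedforward, differentiable computation is to make the cluster centers the network's learnable weights and replace the hard argmin assignment with a softmax relaxation. The sketch below illustrates that idea in PyTorch; the class name SoftKMeansNet, the temperature parameter, and the training loop are illustrative assumptions, not the authors' exact k-meansNet formulation.

    import torch

    class SoftKMeansNet(torch.nn.Module):
        """Soft k-means as a one-layer feedforward network (illustrative sketch)."""

        def __init__(self, n_clusters: int, dim: int, temperature: float = 10.0):
            super().__init__()
            # The cluster centers play the role of the network's weights.
            self.centers = torch.nn.Parameter(torch.randn(n_clusters, dim))
            self.temperature = temperature  # sharpness of the soft assignment

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Pairwise squared Euclidean distances, shape (N, K).
            d2 = torch.cdist(x, self.centers).pow(2)
            # Softmax over negative distances replaces the hard argmin of
            # vanilla k-means, making the objective differentiable end-to-end.
            assign = torch.softmax(-self.temperature * d2, dim=1)
            # Expected within-cluster squared distance (soft k-means loss).
            return (assign * d2).sum(dim=1).mean()

    # Train with a standard gradient optimizer; mini-batching is what makes
    # handling newly arriving data straightforward, one of the advantages
    # the paper cites.
    x = torch.randn(500, 2)                     # toy data
    net = SoftKMeansNet(n_clusters=3, dim=2)
    opt = torch.optim.Adam(net.parameters(), lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        net(x).backward()
        opt.step()

    # Hard cluster labels at inference time: nearest center.
    labels = torch.cdist(x, net.centers).argmin(dim=1)

Note how this differs from the unrolled-optimizer style of differentiable programming the abstract contrasts against: here the network's forward pass is the clustering objective itself, and any off-the-shelf gradient optimizer can minimize it.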
