Biconvex Clustering
Convex clustering has recently garnered increasing interest due to its attractive theoretical and computational properties. While it confers many advantages over traditional clustering methods, its merits become limited in the face of high-dimensional data. In such settings, not only do the Euclidean measures of fit appearing in the objective provide weaker discriminating power, but pairwise affinity terms that rely on k-nearest neighbors become poorly specified. We find that recent attempts that successfully address the former difficulty still suffer from the latter, in addition to incurring high computational cost and some numerical instability. To surmount these issues, we propose to modify the convex clustering objective so that feature weights are optimized jointly with the centroids. The resulting problem becomes biconvex and as such remains well-behaved statistically and algorithmically. In particular, we derive a fast algorithm with closed-form updates and convergence guarantees, and establish finite-sample bounds on its prediction error that imply consistency. Our biconvex clustering method performs feature selection throughout the clustering task: as the learned weights change the effective feature representation, pairwise affinities can be updated adaptively across iterations rather than precomputed within a dubious feature space. We validate our contributions on real and simulated data, showing that our method effectively addresses the challenges of dimensionality while reducing dependence on carefully tuned heuristics typical of existing approaches.
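To make the joint optimization over centroids and feature weights concrete, the display below sketches one plausible form of such a biconvex objective; the entropy penalty on the weights, the simplex constraint, the tuning parameters \(\gamma\) and \(\lambda\), and the affinities \(a_{ii'}\) are illustrative assumptions rather than the paper's exact formulation.

\[
\min_{U \in \mathbb{R}^{n \times p},\; w \in \Delta_p}\;
\sum_{i=1}^{n} \sum_{j=1}^{p} w_j \,(x_{ij} - u_{ij})^2
\;+\; \gamma \sum_{i < i'} a_{ii'}\, \lVert u_i - u_{i'} \rVert_2
\;+\; \lambda \sum_{j=1}^{p} w_j \log w_j ,
\]

where \(\Delta_p\) is the probability simplex and \(u_i\) denotes the centroid of observation \(x_i\). Under this sketched objective, the problem is convex in \(U\) for fixed \(w\) (a weighted convex clustering problem) and convex in \(w\) for fixed \(U\), with the weight update available in closed form as \(w_j \propto \exp(-r_j/\lambda)\), \(r_j = \sum_i (x_{ij} - u_{ij})^2\). Alternating the two blocks yields the kind of coordinate-wise scheme described in the abstract, and because the affinities \(a_{ii'}\) can be recomputed from weighted distances, they may be refreshed across iterations rather than fixed in advance.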