Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation

06/16/2021
by Ilias Diakonikolas, et al.

We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset. Specifically, we are given a set T of n points in ℝ^d and a parameter 0 < α < 1/2 such that an α-fraction of the points in T are i.i.d. samples from a well-behaved distribution 𝒟 and the remaining (1-α)-fraction of the points are arbitrary. The goal is to output a small list of vectors, at least one of which is close to the mean of 𝒟. As our main contribution, we develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees, with running time n^{1+o(1)} d. All prior algorithms for this problem had additional polynomial factors in 1/α. As a corollary, we obtain the first almost-linear time algorithms for clustering mixtures of k separated well-behaved distributions, nearly matching the statistical guarantees of spectral methods. Prior clustering algorithms inherently relied on an application of k-PCA, thereby incurring runtimes of Ω(ndk). This marks the first runtime improvement for this basic statistical problem in nearly two decades. The starting point of our approach is a novel and simpler near-linear time robust mean estimation algorithm in the α → 1 regime, based on a one-shot matrix multiplicative weights-inspired potential decrease. We crucially leverage this new algorithmic framework in the context of the iterative multi-filtering technique of Diakonikolas et al. '18, '20, providing a method to simultaneously cluster and downsample points using one-dimensional projections – thus bypassing the k-PCA subroutines required by prior algorithms.
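The data model above can be made concrete with a small sketch. The code below generates a set T in which only an α-fraction of points come from a well-behaved distribution (here Gaussian), and runs a simple nearest-neighbor-averaging baseline that outputs a short list of candidate means. This is not the paper's n^{1+o(1)} d algorithm — it is a naive illustrative baseline, with a simplistic (non-adaptive) adversary and all parameter choices assumed for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha = 1000, 20, 0.2          # toy parameters, chosen for illustration
true_mean = np.full(d, 5.0)

# alpha-fraction of points are i.i.d. samples from D = N(true_mean, I) ...
n_in = int(alpha * n)
inliers = rng.normal(true_mean, 1.0, size=(n_in, d))
# ... and the remaining (1-alpha)-fraction are arbitrary; here a crude
# "adversary" plants a far-away cluster (a real adversary can be adaptive).
outliers = rng.normal(-20.0, 1.0, size=(n - n_in, d))

T = np.vstack([inliers, outliers])
rng.shuffle(T)

def list_decodable_means(points, alpha, trials=50, rng=rng):
    """Naive baseline: seed at a random point and average its
    (alpha * n) nearest neighbors; repeat to build a candidate list.
    If any seed lands on an inlier, its neighborhood is mostly inliers."""
    n = len(points)
    m = int(alpha * n)
    candidates = []
    for _ in range(trials):
        seed = points[rng.integers(n)]
        dists = np.linalg.norm(points - seed, axis=1)
        nearest = points[np.argsort(dists)[:m]]
        candidates.append(nearest.mean(axis=0))
    return candidates

cands = list_decodable_means(T, alpha)
# Success criterion: at least one candidate in the list is close to mean(D).
err = min(np.linalg.norm(c - true_mean) for c in cands)
```

Note the guarantee is inherently list-based: with α < 1/2 the adversary can plant ⌊1/α⌋ fake clusters that each look as plausible as the real one, so no single-vector output can succeed, but a list of size O(1/α) can.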
