How Hard Is Robust Mean Estimation?

03/19/2019
by Samuel B. Hopkins, et al.

Robust mean estimation is the problem of estimating the mean μ∈R^d of a d-dimensional distribution D from a list of independent samples, an ϵ-fraction of which have been arbitrarily corrupted by a malicious adversary. Recent algorithmic progress has resulted in the first polynomial-time algorithms that achieve dimension-independent error rates: for instance, if D has covariance I, in polynomial time one may find μ̂ with ‖μ - μ̂‖ ≤ O(√(ϵ)). However, the error rates achieved by current polynomial-time algorithms, while dimension-independent, are sub-optimal in many natural settings, such as when D is sub-Gaussian or has bounded fourth moments. In this work we give worst-case complexity-theoretic evidence that improving on the error rates of current polynomial-time algorithms for robust mean estimation may be computationally intractable in natural settings. We show that several natural approaches to improving error rates of current polynomial-time robust mean estimation algorithms would imply efficient algorithms for the small-set expansion problem, refuting Raghavendra and Steurer's small-set expansion hypothesis (so long as P ≠ NP). We also give the first direct reduction to the robust mean estimation problem, starting from a plausible but nonstandard variant of the small-set expansion problem.
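For intuition about the setup, the following is a minimal Python sketch of the ϵ-corruption model (a toy scenario with assumed parameters, not the paper's algorithm): an adversary replaces an ϵ-fraction of Gaussian samples, after which the empirical mean's error grows with the corruption magnitude, while even a simple coordinate-wise median keeps the error bounded, though only at a dimension-dependent rate, in contrast to the dimension-independent O(√(ϵ)) rates the abstract refers to.

```python
# Illustrative sketch of the eps-corruption model (not the paper's algorithm).
# The corruption strategy and all parameters below are hypothetical choices
# made purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 200, 5000, 0.05          # dimension, sample size, corruption fraction
mu = np.zeros(d)                     # true mean of D = N(mu, I)

samples = rng.normal(loc=mu, scale=1.0, size=(n, d))

# Adversary arbitrarily replaces an eps-fraction of the samples.
k = int(eps * n)
samples[:k] = 50.0

# Empirical mean: its error scales with the magnitude of the corruption.
naive = samples.mean(axis=0)
# Coordinate-wise median: error stays bounded, but scales like eps * sqrt(d),
# i.e. it is not dimension-independent.
cw_median = np.median(samples, axis=0)

print("empirical-mean error:    ", np.linalg.norm(naive - mu))
print("coordinate-median error: ", np.linalg.norm(cw_median - mu))
# The polynomial-time robust estimators discussed in the abstract achieve
# dimension-independent error O(sqrt(eps)) when D has covariance I.
```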
