The Fast Cauchy Transform and Faster Robust Linear Regression

07/19/2012
by Kenneth L. Clarkson, et al.

We provide fast algorithms for overconstrained ℓ_p regression and related problems: for an n × d input matrix A and vector b ∈ R^n, in O(nd log n) time we reduce the problem min_{x ∈ R^d} ‖Ax − b‖_p to the same problem with input matrix Ã of dimension s × d and corresponding b̃ of dimension s × 1. Here, Ã and b̃ are a coreset for the problem, consisting of sampled and rescaled rows of A and b; and s is independent of n and polynomial in d. Our results improve on the best previous algorithms when n ≫ d, for all p ∈ [1,∞) except p = 2. We also provide a suite of improved results for finding well-conditioned bases via ellipsoidal rounding, illustrating tradeoffs between running time and conditioning quality, including a one-pass conditioning algorithm for general ℓ_p problems. We also provide an empirical evaluation of implementations of our algorithms for p = 1, comparing them with related algorithms. Our empirical results show that, in the asymptotic regime, the theory is a very good guide to the practical performance of these algorithms. Our algorithms use our faster constructions of well-conditioned bases for ℓ_p spaces and, for p = 1, a fast subspace embedding of independent interest that we call the Fast Cauchy Transform: a distribution over matrices Π: R^n → R^{O(d log d)}, found obliviously to A, that approximately preserves the ℓ_1 norms: that is, with large probability, simultaneously for all x, ‖Ax‖_1 ≈ ‖ΠAx‖_1, with distortion O(d^{2+η}), for an arbitrarily small constant η > 0; and, moreover, ΠA can be computed in O(nd log d) time. The techniques underlying our Fast Cauchy Transform include fast Johnson–Lindenstrauss transforms, low-coherence matrices, and rescaling by Cauchy random variables.
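For intuition about what such an ℓ_1 subspace embedding does, the following is a minimal numpy sketch of the simpler *dense* Cauchy embedding (a matrix of i.i.d. standard Cauchy entries), not the paper's Fast Cauchy Transform itself; the demo sizes n and d, the embedding dimension r, and the 1/r rescaling are illustrative assumptions, and the check below only verifies empirically that ‖ΠAx‖_1 stays within a bounded factor of ‖Ax‖_1 for a few random directions x.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall-and-skinny input with n >> d (demo sizes, chosen arbitrarily).
n, d = 10_000, 5
# Embedding dimension, poly(d); this particular choice is a heuristic.
r = 4 * d * (int(np.log(d)) + 1)

A = rng.standard_normal((n, d))

# Dense Cauchy sketch: Pi has i.i.d. standard Cauchy entries; the 1/r
# scaling is a common normalization so that ||Pi A x||_1 is on the same
# scale as ||A x||_1 (median-based estimators are also used in practice).
Pi = rng.standard_cauchy((r, n)) / r
PA = Pi @ A

# Empirically check the l1-norm distortion on a few random directions x.
for _ in range(5):
    x = rng.standard_normal(d)
    ratio = np.linalg.norm(PA @ x, 1) / np.linalg.norm(A @ x, 1)
    print(f"||Pi A x||_1 / ||A x||_1 = {ratio:.3f}")
```

The dense version above costs O(ndr) time to apply; the point of the Fast Cauchy Transform in the paper is to achieve a comparable ℓ_1-preserving guarantee while applying Π to A in only O(nd log d) time.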
