Tight Kernel Query Complexity of Kernel Ridge Regression and Kernel k-means Clustering

05/15/2019
by Manuel Fernandez, et al.

We present tight lower bounds on the number of kernel evaluations required to approximately solve kernel ridge regression (KRR) and kernel k-means clustering (KKMC) on n input points. For KRR, our bound for relative error approximation to the minimizer of the objective function is Ω(n d_eff^λ / ε), where d_eff^λ is the effective statistical dimension; this is tight up to a log(d_eff^λ / ε) factor. For KKMC, our bound for finding a k-clustering achieving a relative error approximation of the objective function is Ω(nk/ε), which is tight up to a log(k/ε) factor. Our KRR result resolves a variant of an open question of El Alaoui and Mahoney, asking whether the effective statistical dimension is a lower bound on the sampling complexity. Furthermore, for the important practical case in which the input is a mixture of Gaussians, we provide a KKMC algorithm that bypasses the above lower bound.
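For concreteness, here is a minimal NumPy sketch (not from the paper) of the two quantities the KRR bound refers to: the kernel ridge regression minimizer and the effective statistical dimension, taken here as the standard definition d_eff^λ = tr(K(K + λI)^{-1}). The function names, the Gaussian kernel choice, and the exact normalization of the regularization parameter λ are illustrative assumptions and may differ from the paper's conventions.

```python
import numpy as np

def krr_solve(K, y, lam):
    """Kernel ridge regression coefficients alpha = (K + lam * n * I)^{-1} y.
    K is the n x n kernel matrix; the lam * n scaling is one common
    convention and may differ from the paper's."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def effective_dimension(K, lam):
    """Effective statistical dimension d_eff^lam = tr(K (K + lam * I)^{-1}),
    the standard definition used in ridge leverage score analyses."""
    n = K.shape[0]
    return np.trace(K @ np.linalg.inv(K + lam * np.eye(n)))

# Toy usage with a Gaussian (RBF) kernel on random points.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 2.0)

alpha = krr_solve(K, y, lam=0.1)
print("d_eff^lambda:", effective_dimension(K, lam=0.1))
```

Computing K exactly, as above, already costs n^2 kernel evaluations; the paper's question is how few evaluations suffice, and its lower bounds are stated in terms of d_eff^λ (for KRR) and k (for KKMC).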
