Sublinear Time Numerical Linear Algebra for Structured Matrices
We show how to solve a number of problems in numerical linear algebra, such as least squares regression, ℓ_p-regression for any p ≥ 1, low rank approximation, and kernel regression, in time T(A) · poly(log(nd)), where for a given input matrix A ∈ R^{n × d}, T(A) is the time needed to compute A · y for an arbitrary vector y ∈ R^d. Since T(A) ≤ O(nnz(A)), where nnz(A) denotes the number of non-zero entries of A, the time is no worse, up to polylogarithmic factors, than that of all of the recent advances for such problems that run in input-sparsity time. However, for many applications, T(A) can be much smaller than nnz(A), yielding significantly sublinear time algorithms. For example, in the overconstrained (1+ϵ)-approximate polynomial interpolation problem, A is a Vandermonde matrix and T(A) = O(n log n); in this case our running time is n · poly(log n) + poly(d/ϵ), and we recover the results of <cit.> as a special case. For overconstrained autoregression, which is a common problem arising in dynamical systems, T(A) = O(n log n), and we immediately obtain n · poly(log n) + poly(d/ϵ) time. For kernel autoregression, we significantly improve the running time of prior algorithms for general kernels. For the important case of autoregression with the polynomial kernel and an arbitrary target vector b ∈ R^n, we obtain even faster algorithms. Our algorithms show that, perhaps surprisingly, most of these optimization problems do not require much more time than that of a polylogarithmic number of matrix-vector multiplications.
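To illustrate why T(A) can be far below nnz(A) for structured matrices: the design matrix in autoregression is Toeplitz, and a Toeplitz matrix-vector product can be computed in O(n log n) time by embedding the matrix in a circulant matrix and diagonalizing with the FFT. The following NumPy sketch (not from the paper; function name and setup are illustrative) demonstrates this standard technique:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the n x n Toeplitz matrix with first column c and first
    row r (r[0] == c[0]) by the vector x in O(n log n) time, by embedding
    it in a (2n-1) x (2n-1) circulant matrix and using the FFT."""
    n = len(c)
    # First column of the circulant embedding: c followed by the reversed
    # first row (excluding its leading entry).
    v = np.concatenate([c, r[:0:-1]])
    # Zero-pad x to the circulant's dimension.
    y = np.concatenate([x, np.zeros(n - 1)])
    # A circulant matvec is a circular convolution, diagonalized by the FFT.
    full = np.fft.ifft(np.fft.fft(v) * np.fft.fft(y))
    return full[:n].real

# Sanity check against a dense O(n^2) multiply.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])
x = rng.standard_normal(n)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```

The same FFT-based idea underlies the O(n log n) matrix-vector products for Vandermonde matrices mentioned above, which is what makes the n · poly(log n) running times possible.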