Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time

11/18/2022
by Yeshwanth Cherapanamjeri et al.

We study fundamental problems in linear algebra, such as finding a maximal linearly independent subset of rows or columns (a basis), solving linear regression, or computing a subspace embedding. For these problems, we consider input matrices 𝐀∈ℝ^{n×d} with n > d, where the input can be read in nnz(𝐀) time, which denotes the number of nonzero entries of 𝐀. In this paper, we show that beyond the time required to read the input matrix, these fundamental linear algebra problems can be solved in d^ω time, where ω≈2.37 is the current matrix-multiplication exponent. To do so, we introduce a constant-factor subspace embedding with the optimal m=𝒪(d) number of rows, which can be applied in time 𝒪(nnz(𝐀)/α) + d^{2+α}·poly(log d) for any trade-off parameter α>0, tightening a recent result by Chepurko et al. [SODA 2022] that achieves an exp(poly(log log n)) distortion with m = d·poly(log log d) rows in 𝒪(nnz(𝐀)/α + d^{2+α+o(1)}) time. Our subspace embedding uses a recently shown property of stacked Subsampled Randomized Hadamard Transforms (SRHTs), which actually increase the input dimension, to "spread" the mass of an input vector among a large number of coordinates, followed by random sampling. To control the effects of random sampling, we use fast semidefinite programming to reweight the rows. We then use our constant-factor subspace embedding to give the first optimal-runtime algorithms for finding a maximal linearly independent subset of columns, for regression, and for leverage score sampling. To do so, we also introduce a novel subroutine that iteratively grows a set of independent rows, which may be of independent interest.
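
To make the sketching pipeline concrete, below is a minimal NumPy/SciPy sketch of the stacked-SRHT-plus-sampling idea from the abstract. All names (srht, stacked_srht_embedding) and constants (e.g., the 8·d sample count) are illustrative assumptions, not the paper's; for clarity it builds a dense Hadamard matrix (a real implementation would apply the fast Walsh–Hadamard transform in 𝒪(n log n) time) and it omits the SDP reweighting step the paper uses to control the sampling error.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(A, seed):
    """Apply one (unsampled) randomized Hadamard transform H·D to the rows of A.

    D flips row signs at random; H is the normalized Walsh-Hadamard matrix.
    Assumes the row count of A is a power of two (pad with zero rows otherwise).
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n)
    H = hadamard(n) / np.sqrt(n)          # dense for clarity; fast WHT in practice
    return H @ (signs[:, None] * A)

def stacked_srht_embedding(A, num_stacks=4, m=None, seed=0):
    """Hedged sketch of the abstract's pipeline: stack several independent
    randomized Hadamard transforms (increasing the row dimension by the factor
    num_stacks, which "spreads" each vector's mass over many coordinates),
    then uniformly subsample m rows, rescaled to keep norms unbiased.

    The paper additionally reweights the sampled rows via fast semidefinite
    programming to obtain a true constant-factor embedding; omitted here.
    """
    n, d = A.shape
    if m is None:
        m = 8 * d                         # O(d) rows, hypothetical constant
    stacked = np.vstack([srht(A, seed + i) for i in range(num_stacks)])
    stacked /= np.sqrt(num_stacks)        # keep ||stacked @ x|| = ||A @ x||
    rng = np.random.default_rng(seed)
    rows = rng.choice(stacked.shape[0], size=m, replace=False)
    scale = np.sqrt(stacked.shape[0] / m) # unbiased rescaling for sampling
    return scale * stacked[rows]
```

For instance, for A of shape (4096, 50), stacked_srht_embedding(A) returns a 400-row matrix S with ‖S𝐱‖ ≈ ‖A𝐱‖ for all 𝐱, up to the sampling error that the paper's SDP reweighting step is designed to repair.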
