Sparse sketches with small inversion bias

11/21/2020
by Michał Dereziński, et al.

For a tall n×d matrix A and a random m×n sketching matrix S, the sketched estimate of the inverse covariance matrix (A^⊤A)^-1 is typically biased: E[(Ã^⊤Ã)^-1] ≠ (A^⊤A)^-1, where Ã = SA. This phenomenon, which we call inversion bias, arises, e.g., in statistics and distributed optimization, when averaging multiple independently constructed estimates of quantities that depend on the inverse covariance. We develop a framework for analyzing inversion bias, based on our proposed concept of an (ϵ,δ)-unbiased estimator for random matrices. We show that when the sketching matrix S is dense and has i.i.d. sub-gaussian entries, then after simple rescaling, the estimator (m/(m-d) · Ã^⊤Ã)^-1 is (ϵ,δ)-unbiased for (A^⊤A)^-1 with a sketch of size m = O(d + √d/ϵ). This implies that for m = O(d), the inversion bias of this estimator is O(1/√d), which is much smaller than the Θ(1) approximation error obtained as a consequence of the subspace embedding guarantee for sub-gaussian sketches. We then propose a new sketching technique, called LEverage Score Sparsified (LESS) embeddings, which uses ideas from both data-oblivious sparse embeddings as well as data-aware leverage-based row sampling methods, to get ϵ inversion bias for sketch size m = O(d log d + √d/ϵ) in time O(nnz(A) log n + md^2), where nnz(A) is the number of non-zeros of A. The key techniques enabling our analysis include an extension of a classical inequality of Bai and Silverstein for random quadratic forms, which we call the Restricted Bai-Silverstein inequality; and anti-concentration of the Binomial distribution via the Paley-Zygmund inequality, which we use to prove a lower bound showing that leverage score sampling sketches generally do not achieve small inversion bias.
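To make the inversion-bias phenomenon concrete, here is a minimal Monte Carlo sketch in Python (not from the paper; the Gaussian sketching matrix, dimensions, and trial count are illustrative assumptions) comparing the naive estimator (Ã^⊤Ã)^-1 against the rescaled estimator (m/(m-d) · Ã^⊤Ã)^-1 described in the abstract:

```python
# A minimal Monte Carlo illustration of inversion bias, assuming a
# Gaussian sketching matrix S (one of the sub-gaussian sketches the
# paper covers). All names and sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 500, 20, 100                 # tall data matrix, sketch size m > d
A = rng.standard_normal((n, d))
true_inv = np.linalg.inv(A.T @ A)

trials = 2000
naive = np.zeros((d, d))
rescaled = np.zeros((d, d))
for _ in range(trials):
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. Gaussian sketch
    At = S @ A                                     # Ã = SA
    inv_sketch = np.linalg.inv(At.T @ At)
    naive += inv_sketch / trials
    # Rescaled estimator from the abstract: (m/(m-d) * Ã^T Ã)^{-1}
    rescaled += (m - d) / m * inv_sketch / trials

def rel_bias(est):
    """Relative Frobenius-norm deviation from (A^T A)^{-1}."""
    return np.linalg.norm(est - true_inv) / np.linalg.norm(true_inv)

# For a Gaussian sketch, Ã^T Ã is Wishart, so the naive estimator
# overshoots by a factor m/(m-d-1) (inverse-Wishart mean).
print("naive bias:   ", rel_bias(naive))      # ≈ (d+1)/(m-d-1) ≈ 0.27
print("rescaled bias:", rel_bias(rescaled))   # ≈ 1/(m-d-1) ≈ 0.013
```

Up to Monte Carlo error, the naive average stays Θ(d/m) away from (A^⊤A)^-1 no matter how many independent sketches are averaged, while the simple rescaling collapses the bias by an order of magnitude, which is exactly the effect that matters when averaging estimates in distributed optimization.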
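The abstract describes LESS embeddings only at a high level (sparse rows whose non-zero positions follow the leverage score distribution). The following hedged sketch illustrates that idea; the exact per-row sparsity and scaling in the paper may differ, and `leverage_scores` and `less_sketch` are hypothetical helper names, not the authors' API. Exact leverage scores are used here for clarity, whereas the stated O(nnz(A) log n + md^2) runtime relies on fast approximations.

```python
# A hedged illustration of a LESS-style sketch: each of the m rows of S
# has s non-zeros, with positions drawn from the leverage score
# distribution and values given by rescaled random signs, so that
# E[S^T S] = I. Illustrative only; not the paper's exact construction.
import numpy as np

def leverage_scores(A):
    """Exact leverage scores via a thin SVD; l_i sums to d."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return (U ** 2).sum(axis=1)

def less_sketch(A, m, s, rng):
    n, d = A.shape
    p = leverage_scores(A) / d                   # sampling distribution
    S = np.zeros((m, n))
    for r in range(m):
        idx = rng.choice(n, size=s, p=p)         # positions ~ leverage
        signs = rng.choice([-1.0, 1.0], size=s)  # data-oblivious signs
        np.add.at(S[r], idx, signs / np.sqrt(s * p[idx]))
    return S / np.sqrt(m)                        # normalize: E[S^T S] = I

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20))
S = less_sketch(A, m=100, s=20, rng=rng)
At = S @ A   # Ã = SA; each sketch row touches only s rows of A
```

Because each row of S has only s non-zeros, forming Ã costs O(m·s·d) after the leverage scores are (approximately) computed, and the debiased estimate (m/(m-d) · Ã^⊤Ã)^-1 can then be formed exactly as in the previous example.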
