Run Procrustes, Run! On the convergence of accelerated Procrustes Flow

06/01/2018
by Anastasios Kyrillidis, et al.

In this work, we present theoretical results on the convergence of non-convex accelerated gradient descent in matrix factorization models. The technique is applied to matrix sensing problems with squared loss, for the estimation of a rank-r optimal solution X^⋆ ∈ R^{n × n}. We show that acceleration leads to a linear convergence rate, even in non-convex settings where the variable X is represented as U U^⊤ for U ∈ R^{n × r}. Our result has the same dependence on the condition number of the objective (and of the optimal solution) as recent results on non-accelerated algorithms. Moreover, acceleration is observed in practice, both in synthetic examples and in two real applications: recovery of neuronal multi-unit activities from single-electrode recordings, and quantum state tomography on quantum computing simulators.
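To make the setting concrete, the following is a minimal NumPy sketch of momentum-accelerated gradient descent on the factored matrix sensing objective f(U) = 1/(2m) Σᵢ (⟨Aᵢ, U U^⊤⟩ − yᵢ)², where X^⋆ = V V^⊤ is rank-r. This is an illustrative approximation, not the paper's exact algorithm: the problem sizes, step size `eta`, momentum weight `beta`, and the near-solution initialization are all assumptions made for the example (the paper's analysis relies on proper initialization, e.g. of spectral type).

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 10, 2, 200          # illustrative sizes, not from the paper

# Rank-r ground truth X* = V V^T
V = rng.standard_normal((n, r)) / np.sqrt(n)
Xstar = V @ V.T

# Symmetric Gaussian sensing matrices and noiseless measurements y_i = <A_i, X*>
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2.0
y = np.einsum('mij,ij->m', A, Xstar)

def grad(U):
    # Gradient of f(U) = 1/(2m) * sum_i (<A_i, U U^T> - y_i)^2
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    return (2.0 / m) * np.einsum('m,mij->ij', resid, A) @ U

# Initialize near the solution (stand-in for a spectral-type initialization)
U = V + 0.1 * rng.standard_normal((n, r))
U_prev = U.copy()
eta, beta = 0.02, 0.5         # assumed step size and momentum weight

err0 = np.linalg.norm(U @ U.T - Xstar)
for _ in range(500):
    Z = U + beta * (U - U_prev)        # momentum (look-ahead) step
    U_prev, U = U, Z - eta * grad(Z)   # gradient step taken at Z

err = np.linalg.norm(U @ U.T - Xstar)
print(f"error: {err0:.4f} -> {err:.6f}")
```

The key point of the factored formulation is that the iterate U lives in R^{n × r} rather than R^{n × n}, so each step costs O(m n² + n² r) instead of requiring a rank-r projection of a full matrix.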
