A Saddle Point Approach to Structured Low-rank Matrix Learning
We propose a novel optimization approach for learning a low-rank matrix that is also constrained to lie in a given linear subspace. Low-rank constraints are regularly employed in applications such as recommender systems and multi-task learning. In addition, several system identification problems require learning a matrix with both low-rank and linear subspace constraints. We model the classical nuclear norm regularized formulation as an equivalent saddle point minimax problem. This decouples the low-rank and the linear subspace constraints onto separate factors. Motivated by large-scale problems, we reformulate the minimax problem via a rank-constrained non-convex surrogate. This translates into an optimization problem on the Riemannian spectrahedron manifold. We exploit the Riemannian structure to propose efficient first- and second-order algorithms. The duality theory allows us to compute the duality gap for a candidate solution, and our approach easily accommodates popular non-smooth loss functions, e.g., the absolute-value loss. Our approach scales effortlessly to the Netflix data set on both matrix completion and robust matrix completion problems, obtaining state-of-the-art generalization performance. Additionally, we demonstrate the efficacy of our approach on Hankel matrix learning and multi-task learning problems.
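For concreteness, below is a minimal sketch of the classical nuclear norm regularized matrix completion problem that the abstract takes as its starting point, solved by proximal gradient descent with singular value thresholding. This is a standard convex baseline, not the paper's saddle point reformulation or its Riemannian spectrahedron algorithms; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def complete(Y, mask, lam=0.5, step=1.0, iters=200):
    """Proximal gradient for  min_X 0.5*||mask*(X - Y)||_F^2 + lam*||X||_*.

    Y    : matrix holding the observed entries (zeros elsewhere)
    mask : boolean array, True on observed entries
    Note: the smooth data-fit term has Lipschitz constant 1, so step = 1 is safe.
    """
    X = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (X - Y)              # gradient of the smooth data-fit term
        X = svt(X - step * grad, step * lam)
    return X

# Tiny demo: recover a synthetic rank-2 matrix from 50% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(M.shape) < 0.5
X_hat = complete(M * mask, mask)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```

The paper's contribution can be read against this baseline: the full SVD in each iteration above is what makes the convex formulation expensive at scale, which motivates the rank-constrained surrogate and manifold optimization described in the abstract.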