A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces

12/08/2022
by Charline Le Lan, et al.

Many machine learning problems encode their data as a matrix with a possibly very large number of rows and columns. In several applications, such as neuroscience, image compression, or deep reinforcement learning, the principal subspace of such a matrix provides a useful, low-dimensional representation of individual data points. Here, we are interested in determining the d-dimensional principal subspace of a given matrix from sample entries, i.e. from small random submatrices. Although a number of sample-based methods exist for this problem (e.g. Oja's rule <cit.>), they assume access to full columns of the matrix or to particular matrix structure such as symmetry, and cannot be combined as-is with neural networks <cit.>. In this paper, we derive an algorithm that learns a principal subspace from sample entries, can be applied when the approximate subspace is represented by a neural network, and hence can be scaled to datasets with an effectively infinite number of rows and columns. Our method consists of defining a loss function whose minimizer is the desired principal subspace and constructing a gradient estimate of this loss whose bias can be controlled. We complement our theoretical analysis with a series of experiments on synthetic matrices, the MNIST dataset <cit.>, and the reinforcement learning domain PuddleWorld <cit.>, demonstrating the usefulness of our approach.
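The abstract describes the method only at a high level: a loss whose minimizer is the principal subspace, optimized with stochastic gradients built from sampled submatrices. The snippet below is a minimal sketch of that general idea, not the paper's algorithm: it parameterizes the subspace with an explicit matrix Phi (standing in for the neural network), samples random blocks of a synthetic matrix M, and takes gradient steps on a block reconstruction loss. All names, sizes, and hyperparameters (sample_submatrix, the batch shapes, the learning rate) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a large matrix M (n_rows x n_cols), assumed to be
# accessible only through small random submatrices (sampled entries).
n_rows, n_cols, d = 500, 300, 8
M = rng.standard_normal((n_rows, 20)) @ rng.standard_normal((20, n_cols))

# Phi parameterizes the candidate d-dimensional subspace of M's column space
# (one row per row index of M); in the paper this role is played by a neural
# network, which is what allows scaling to very large numbers of rows.
Phi = rng.standard_normal((n_rows, d))

def sample_submatrix(batch_rows=64, batch_cols=32):
    """Return a random block of M together with its row/column indices."""
    r = rng.choice(n_rows, size=batch_rows, replace=False)
    c = rng.choice(n_cols, size=batch_cols, replace=False)
    return r, c, M[np.ix_(r, c)]

lr = 1e-3  # illustrative step size, not tuned
for step in range(2000):
    r, c, M_rc = sample_submatrix()
    Phi_r = Phi[r]
    # Block reconstruction loss ||M_rc - Phi_r W||_F^2, with the least-squares
    # coefficients W computed on the same sampled block (a biased surrogate
    # for the full-matrix loss).
    W, *_ = np.linalg.lstsq(Phi_r, M_rc, rcond=None)
    resid = M_rc - Phi_r @ W
    Phi[r] -= lr * (-2.0 * resid @ W.T)  # gradient step on the sampled rows only

# Sanity check: compare the learned span with the top-d left singular vectors.
U, _, _ = np.linalg.svd(M, full_matrices=False)
Q, _ = np.linalg.qr(Phi)
overlap = np.linalg.norm(U[:, :d].T @ Q) ** 2 / d
print(f"subspace overlap (1.0 = perfect alignment): {overlap:.3f}")
```

The sketch uses an explicit per-row parameter table and a naive block least-squares surrogate, so its gradient estimate is biased; controlling that bias is precisely the technical contribution the abstract refers to.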
