Parallelized Computation and Backpropagation Under Angle-Parametrized Orthogonal Matrices

05/30/2021
by Firas Hamze, et al.

We present a methodology for parallel acceleration of learning in the presence of matrix orthogonality and unitarity constraints of interest in several branches of machine learning. We show how an apparently sequential elementary (Givens) rotation parametrization can be restructured into blocks of commutative operations using a well-known tool for coloring the edges of complete graphs, in turn widely applied to schedule round-robin (all-against-all) sports tournaments. The resulting decomposition admits an algorithm to compute a fully-parametrized orthogonal matrix from its rotation parameters in O(n) sequential steps and one to compute the gradient of a training loss with respect to those parameters in O(n log n) steps. We discuss parametric restrictions of interest to generative modeling and present promising performance results with a prototype GPU implementation.
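The abstract only names the scheduling tool, so the following is a minimal NumPy sketch of one plausible reading: the classic "circle method" for round-robin tournaments (a proper edge coloring of the complete graph K_n) assigns the n(n-1)/2 index pairs to n-1 rounds of disjoint pairs, so the Givens rotations within a round commute and can be applied in parallel. The function names and the left-to-right rotation ordering here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def round_robin_rounds(n):
    """Pair up indices {0, ..., n-1} into n - 1 rounds of n/2 disjoint
    pairs via the 'circle method' used to schedule round-robin
    tournaments. Assumes n is even; an odd n can be padded with a
    dummy index."""
    assert n % 2 == 0, "sketch assumes even n"
    idx = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(idx[i], idx[n - 1 - i]) for i in range(n // 2)])
        idx = [idx[0]] + [idx[-1]] + idx[1:-1]  # hold idx[0], rotate the rest
    return rounds

def orthogonal_from_angles(n, angles):
    """Accumulate an n x n orthogonal matrix from n*(n-1)/2 Givens
    rotation angles. Rotations inside one round touch disjoint row
    pairs, so they commute and could run in parallel; only the n - 1
    rounds themselves are sequential."""
    Q = np.eye(n)
    k = 0
    for rnd in round_robin_rounds(n):
        for i, j in rnd:
            c, s = np.cos(angles[k]), np.sin(angles[k])
            qi, qj = Q[i].copy(), Q[j].copy()
            Q[i], Q[j] = c * qi - s * qj, s * qi + c * qj
            k += 1
    return Q

# Quick check: the result is orthogonal for random angles.
n = 8
Q = orthogonal_from_angles(n, np.random.randn(n * (n - 1) // 2))
assert np.allclose(Q @ Q.T, np.eye(n))
```

Counting each parallel round as one step, the construction takes n - 1 sequential steps, consistent with the O(n) claim in the abstract.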
