Deep orthogonal linear networks are shallow

11/27/2020
by Pierre Ablin, et al.

We consider the problem of training a deep orthogonal linear network, which consists of a product of orthogonal matrices with no non-linearity in between. We show that training the factors with Riemannian gradient descent is equivalent to training the whole product by gradient descent. Hence, in this setting, overparametrization has no effect and there is no implicit bias: training such a deep, overparametrized network is exactly equivalent to training a one-layer shallow network.
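As a minimal numerical sketch (not the paper's code), the following illustrates the key ingredient of the abstract: Riemannian gradient descent on the orthogonal group. It assumes a toy least-squares loss (an illustrative choice, not from the paper); the Euclidean gradient is projected onto the tangent space at the current orthogonal matrix and the step is retracted back to the manifold with a Cayley transform, so the iterate stays exactly orthogonal in exact arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Illustrative regression loss f(W) = 0.5 * ||W X - Y||^2.
# The target is itself orthogonal, so the problem is realizable on O(n).
X = rng.standard_normal((n, 20))
Q_target, _ = np.linalg.qr(rng.standard_normal((n, n)))
Y = Q_target @ X

def loss(W):
    return 0.5 * np.sum((W @ X - Y) ** 2)

def euclidean_grad(W):
    return (W @ X - Y) @ X.T

def skew(A):
    return 0.5 * (A - A.T)

def riemannian_step(W, eta):
    """One Riemannian gradient-descent step on the orthogonal group O(n).

    Tangent vectors at W have the form W @ S with S skew-symmetric, so the
    Euclidean gradient G is projected to the tangent space via
    skew(W.T @ G). The Cayley transform of a skew-symmetric matrix is
    orthogonal, which keeps W on the manifold.
    """
    S = skew(W.T @ euclidean_grad(W))   # Riemannian gradient is W @ S
    A = -eta * S
    I = np.eye(n)
    cayley = np.linalg.solve(I - 0.5 * A, I + 0.5 * A)
    return W @ cayley

# Start from a random orthogonal matrix and descend.
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
losses = [loss(W)]
for _ in range(200):
    W = riemannian_step(W, eta=1e-2)
    losses.append(loss(W))

print("orthogonality error:", np.linalg.norm(W.T @ W - np.eye(n)))
print("loss: %.3e -> %.3e" % (losses[0], losses[-1]))
```

In the deep version, each factor of the product would be updated with such a step; the paper's claim is that the resulting product evolves exactly as if the shallow matrix above were trained directly.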
