Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks

02/16/2018
by   Peter L. Bartlett, et al.

We analyze algorithms for approximating a function f(x) = Φ x mapping ℝ^d to ℝ^d using deep linear neural networks, i.e., algorithms that learn a function h parameterized by matrices Θ_1, ..., Θ_L and defined by h(x) = Θ_L Θ_{L-1} ⋯ Θ_1 x. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the optimum, in the case where the initial hypothesis Θ_1 = ... = Θ_L = I has loss bounded by a small enough constant. On the other hand, we show that gradient descent fails to converge for Φ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If Φ is symmetric positive definite, we show that an algorithm that initializes Θ_i = I learns an ϵ-approximation of f using a number of updates polynomial in L, the condition number of Φ, and log(d/ϵ). In contrast, we show that if the target Φ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that Φ satisfies u^⊤ Φ u > 0 for all u, but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant u^⊤ Θ_L Θ_{L-1} ⋯ Θ_1 u > 0 for all u, and another that "balances" Θ_1, ..., Θ_L so that they have the same singular values.
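To make the setting concrete, here is a minimal NumPy sketch of gradient descent with identity initialization on the population loss; it is not the paper's exact algorithm or step-size schedule. With isotropic inputs, the population quadratic loss equals ‖Θ_L ⋯ Θ_1 − Φ‖_F^2 up to scaling and an additive constant, so the sketch descends that Frobenius objective directly. The function names (chain, train), the step size, the number of steps, and the example target Φ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def chain(mats, d):
    """Ordered product M_k ... M_1 for mats = [M_1, ..., M_k]; identity if the list is empty."""
    out = np.eye(d)
    for m in mats:
        out = m @ out
    return out

def train(phi, L=10, lr=0.01, steps=2000):
    """Gradient descent from the identity initialization Theta_1 = ... = Theta_L = I
    on ||Theta_L ... Theta_1 - Phi||_F^2 (the population quadratic loss under
    isotropic inputs, up to scaling and an additive constant)."""
    d = phi.shape[0]
    thetas = [np.eye(d) for _ in range(L)]          # identity initialization
    for _ in range(steps):
        resid = chain(thetas, d) - phi              # Theta_L ... Theta_1 - Phi
        grads = []
        for i in range(L):
            above = chain(thetas[i + 1:], d)        # Theta_L ... Theta_{i+2}
            below = chain(thetas[:i], d)            # Theta_i ... Theta_1
            grads.append(2.0 * above.T @ resid @ below.T)
        thetas = [t - lr * g for t, g in zip(thetas, grads)]   # simultaneous update of all layers
    return thetas, np.linalg.norm(chain(thetas, d) - phi, "fro") ** 2

# Illustrative target: a symmetric positive definite Phi close to the identity,
# so the initial loss is a small constant, matching the regime analyzed in the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
phi = np.eye(5) + 0.05 * (A + A.T)
_, final_loss = train(phi)
print(final_loss)   # the product Theta_L ... Theta_1 should approach Phi
```

Consistent with the paper's negative results, the same identity-initialized dynamics are not expected to converge if the example target is replaced by a symmetric Φ with a negative eigenvalue, such as a reflection.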

