A note on Linear Bottleneck networks and their Transition to Multilinearity
Randomly initialized wide neural networks transition to linear functions of the weights as the width grows, in a ball of radius O(1) around initialization. A necessary condition for this result is that all layers of the network are wide enough, i.e., all widths tend to infinity. However, the transition to linearity breaks down when this infinite-width assumption is violated. In this work we show that linear networks with a bottleneck layer are bilinear functions of the weights, in a ball of radius O(1) around initialization. More generally, with B-1 bottleneck layers, the network is a degree-B multilinear function of the weights. Importantly, the degree depends only on the number of bottleneck layers, not on the total depth of the network.
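To make the bilinear structure concrete, here is a minimal numerical sketch (not taken from the paper): a linear network with a single bottleneck splits into two weight blocks, f(W2, W1) = W2 W1 x, and the output is linear in each block when the other is held fixed, i.e., bilinear in the weights. The dimensions, the function f, and the checks below are illustrative assumptions.

```python
# Illustrative sketch: a single-bottleneck linear network is bilinear in its
# two weight blocks. All sizes and values here are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 50, 2, 50          # input dim, bottleneck width k << d, output dim
x = rng.standard_normal(d)

def f(W2, W1):
    # Two weight blocks separated by the bottleneck layer of width k.
    return W2 @ (W1 @ x)

W1, W1p = rng.standard_normal((k, d)), rng.standard_normal((k, d))
W2, W2p = rng.standard_normal((m, k)), rng.standard_normal((m, k))
a, b = 0.3, -1.7

# Linearity in the first block with the second block held fixed ...
assert np.allclose(f(W2, a * W1 + b * W1p), a * f(W2, W1) + b * f(W2, W1p))

# ... and linearity in the second block with the first held fixed: bilinearity.
assert np.allclose(f(a * W2 + b * W2p, W1), a * f(W2, W1) + b * f(W2p, W1))

print("f(W2, W1) = W2 W1 x is bilinear in the two weight blocks")
```

With B-1 bottlenecks the same factorization yields B weight blocks, and the analogous check holds block by block, giving a degree-B multilinear dependence on the weights.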