Wide Neural Networks with Bottlenecks are Deep Gaussian Processes

01/03/2020
by Devanshu Agrawal, et al.

There has recently been much work on the "wide limit" of neural networks, in which Bayesian neural networks (BNNs) are shown to converge to a Gaussian process (GP) as all hidden layers are sent to infinite width. However, these results do not apply to architectures that require one or more of the hidden layers to remain narrow. In this paper, we consider the wide limit of BNNs in which some hidden layers, called "bottlenecks", are held at finite width. The result is a composition of GPs that we term a "bottleneck neural network Gaussian process" (bottleneck NNGP). Although intuitive, the subtlety of the proof lies in showing that the wide limit of a composition of networks is in fact the composition of the limiting GPs. We also analyze a single-bottleneck NNGP theoretically, finding that the bottleneck induces dependence between the outputs of a multi-output network that persists through infinite post-bottleneck depth, and that it prevents the kernel of the network from losing discriminative power at infinite post-bottleneck depth.
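The construction can be illustrated with a short simulation: replace the wide pre-bottleneck network by a draw from its limiting GP, keep only a finite number of bottleneck features, and feed them into a second limiting GP. The sketch below is a minimal illustration under the assumption of a standard ReLU (arc-cosine) NNGP kernel; the function and variable names (relu_nngp_kernel, bottleneck_width, and so on) are hypothetical and are not taken from the paper's code.

    # Minimal sketch of a single-bottleneck NNGP (illustrative, not the authors' code).
    # The wide pre-bottleneck network is replaced by its limiting GP, a finite number
    # of bottleneck features are sampled from that GP, and the wide post-bottleneck
    # network is replaced by a second GP evaluated on those sampled features.
    import numpy as np

    def relu_nngp_kernel(X, depth=3, sigma_w2=2.0, sigma_b2=0.0):
        """NNGP kernel of an infinitely wide ReLU network with `depth` hidden layers."""
        # Kernel after the first affine layer.
        K = sigma_w2 * (X @ X.T) / X.shape[1] + sigma_b2
        for _ in range(depth):
            d = np.sqrt(np.diag(K))
            cos_theta = np.clip(K / np.outer(d, d), -1.0, 1.0)
            theta = np.arccos(cos_theta)
            # Arc-cosine (ReLU) kernel recursion.
            K = (sigma_w2 / (2 * np.pi)) * np.outer(d, d) * (
                np.sin(theta) + (np.pi - theta) * np.cos(theta)
            ) + sigma_b2
        return K

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))      # 50 inputs, 5 input dimensions
    bottleneck_width = 3              # the finite bottleneck width
    num_outputs = 2
    jitter = 1e-8 * np.eye(len(X))

    # Pre-bottleneck GP: jointly sample the finite-width bottleneck features over all inputs.
    L_pre = np.linalg.cholesky(relu_nngp_kernel(X) + jitter)
    Z = L_pre @ rng.normal(size=(len(X), bottleneck_width))

    # Post-bottleneck GP: evaluate its kernel on the sampled features and sample outputs.
    L_post = np.linalg.cholesky(relu_nngp_kernel(Z) + jitter)
    F = L_post @ rng.normal(size=(len(X), num_outputs))

    print(F.shape)  # (50, 2): the outputs are correlated through the shared bottleneck sample

Because both outputs are drawn conditionally on the same finite-width bottleneck sample Z, they remain dependent after marginalizing over Z, which is the qualitative effect the abstract attributes to the bottleneck.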
