On Dropout, Overfitting, and Interaction Effects in Deep Neural Networks

07/02/2020
by Benjamin Lengerich, et al.

We examine Dropout through the perspective of interactions: learned effects that combine multiple input variables. Given N variables, there are O(N^2) possible pairwise interactions, O(N^3) possible 3-way interactions, and so on. We show that Dropout implicitly sets a learning rate for interaction effects that decays exponentially with the size of the interaction, corresponding to a regularizer that balances against the hypothesis space, which grows exponentially with the number of variables in the interaction. This understanding of Dropout has implications for choosing the Dropout rate: higher Dropout rates should be used when we need stronger regularization against spurious high-order interactions. This perspective also suggests caution against using Dropout to measure term saliency, because Dropout penalizes high-order interaction terms. Finally, this view of Dropout as a regularizer of interaction effects provides insight into the varying effectiveness of Dropout across architectures and data sets. We also compare Dropout to regularization via weight decay and early stopping and find that it is difficult to obtain the same regularization effect for high-order interactions with these methods.
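The exponential decay of the implicit learning rate described above can be illustrated with a small sketch (not the authors' code; the function name and parameters are hypothetical). Under Dropout with keep probability q = 1 - p, a k-way interaction term only receives a gradient when all k of its inputs are retained, which happens with probability q^k, so the effective learning rate on the term shrinks exponentially with the interaction order k. The Monte Carlo estimate below matches that theoretical rate.

```python
import numpy as np

# Illustrative sketch, assuming independent Bernoulli Dropout masks on the
# inputs of a k-way interaction term x_1 * ... * x_k. The term is "active"
# (receives a nonzero gradient) only when all k inputs survive Dropout.

rng = np.random.default_rng(0)

def surviving_fraction(k, keep_prob, n_trials=100_000):
    """Monte Carlo estimate of how often a k-way interaction term is active
    under independent Bernoulli(keep_prob) Dropout masks on its inputs."""
    masks = rng.random((n_trials, k)) < keep_prob
    return masks.all(axis=1).mean()

keep_prob = 0.8  # i.e. Dropout rate p = 0.2
for k in (1, 2, 3, 4, 5):
    empirical = surviving_fraction(k, keep_prob)
    print(f"order {k}: empirical {empirical:.4f} vs. theoretical {keep_prob**k:.4f}")
```

Raising the Dropout rate p lowers q and therefore suppresses high-order terms more aggressively, which is consistent with the abstract's recommendation to use higher Dropout rates when stronger regularization against spurious high-order interactions is needed.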
