Flip-Rotate-Pooling Convolution and Split Dropout on Convolution Neural Networks for Image Classification

07/31/2015
by Fa Wu, et al.

This paper presents a new version of Dropout called Split Dropout (sDropout) and two rotational convolution techniques to improve CNNs' performance on image classification. The widely used standard Dropout helps prevent deep neural networks from overfitting by randomly dropping units during training. Our sDropout instead randomly splits the data into two subsets and keeps both, rather than discarding one. We also introduce two rotational convolution techniques, rotate-pooling convolution (RPC) and flip-rotate-pooling convolution (FRPC), to improve CNNs' robustness to rotation transformations. These two techniques encode rotation invariance into the network without adding extra parameters. Experimental evaluations on the ImageNet 2012 classification task demonstrate that sDropout not only improves performance but also converges faster, and that RPC and FRPC make CNNs more robust to rotation transformations. Overall, FRPC together with sDropout brings a 1.18% accuracy increase (model of Zeiler and Fergus, 2013; 10-view, top-1) on the ImageNet 2012 classification task compared to the original network.
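As a rough illustration of the sDropout idea, here is a minimal PyTorch sketch; the function name, interface, and exact rescaling are assumptions for illustration, not the authors' implementation. The key point is that the random mask splits the activations into two complementary subsets and both are kept, instead of discarding the dropped units as standard Dropout does.

```python
import torch

def split_dropout(x, p=0.5):
    """Hypothetical sketch of Split Dropout (sDropout).

    A random binary mask (as in standard Dropout) splits the input into two
    complementary subsets; both are returned (each rescaled, inverted-dropout
    style) rather than discarding one, so no units are thrown away.
    Assumes 0 < p < 1.
    """
    mask = (torch.rand_like(x) < p).float()
    x_kept = x * mask / p                  # subset selected by the mask
    x_other = x * (1 - mask) / (1 - p)     # complementary subset, also kept
    return x_kept, x_other
```

Likewise, a hedged sketch of rotate-pooling convolution, assuming the rotation set is the four 90-degree rotations of each filter (the paper may use a different set of angles and interpolation): the same filter weights are applied at every rotation and the responses are max-pooled element-wise, which encodes a degree of rotation invariance without adding parameters. Including mirrored filters (flips=True below) is a rough stand-in for FRPC.

```python
import torch
import torch.nn.functional as F

def rotate_pooling_conv(x, weight, bias=None, flips=False):
    """Hypothetical sketch of rotate-pooling convolution (RPC)/FRPC.

    Convolves the input with rotated (and optionally flipped) copies of the
    same filters and max-pools over the responses, so no extra parameters
    are added. Assumes square filters with odd kernel size.
    """
    filters = [torch.rot90(weight, k, dims=(2, 3)) for k in range(4)]
    if flips:
        mirrored = torch.flip(weight, dims=(3,))
        filters += [torch.rot90(mirrored, k, dims=(2, 3)) for k in range(4)]
    pad = weight.shape[-1] // 2
    responses = [F.conv2d(x, w, bias=bias, padding=pad) for w in filters]
    return torch.stack(responses, dim=0).max(dim=0).values
```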
