Evaluating CNNs on the Gestalt Principle of Closure
Deep convolutional neural networks (CNNs) are widely known for their outstanding performance in classification and regression tasks over high-dimensional data. This has made them a popular and powerful tool for a large variety of applications in industry and academia. Recent publications show that classification tasks that seem easy for humans can be very challenging for state-of-the-art CNNs. One attempt to describe how humans perceive visual elements is given by the Gestalt principles. In this paper we evaluate AlexNet and GoogLeNet on classifying the correctness of the well-known Kanizsa triangles, which rely heavily on the Gestalt principle of closure. To this end, we created several datasets containing valid as well as invalid variants of the Kanizsa triangle. Our findings suggest that perceiving objects by means of the principle of closure is very challenging for the applied network architectures, although they appear to adapt to the effect of closure.
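The sketch below is a minimal, hypothetical illustration of the kind of experiment the abstract describes, not the authors' actual pipeline: it generates valid and invalid Kanizsa-triangle stimuli with PIL and adapts a pretrained torchvision AlexNet or GoogLeNet to the resulting two-class task. Image size, disc radius, the 90-degree mouth rotation used for "invalid" stimuli, and the replaced classification heads are all illustrative assumptions.

```python
# Sketch: generate Kanizsa-triangle stimuli and set up a CNN classifier.
# All geometric parameters and the fine-tuning heads are assumptions,
# not taken from the paper.
import math

import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image, ImageDraw


def kanizsa_triangle(size=224, radius=30, valid=True):
    """Draw three 'pac-man' discs. If valid, their mouths face the centroid,
    inducing the illusory closed triangle; if invalid, the mouths are rotated
    away (here by an assumed 90 degrees), destroying the closure cue."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    cx, cy, r = size / 2, size / 2, size * 0.3
    for k in range(3):
        ang = math.radians(90 + k * 120)                  # vertex direction
        x, y = cx + r * math.cos(ang), cy - r * math.sin(ang)
        draw.ellipse([x - radius, y - radius, x + radius, y + radius],
                     fill="black")
        # PIL angles run clockwise in image coordinates, so atan2 on
        # image-space deltas gives the direction toward the centroid.
        toward = math.degrees(math.atan2(cy - y, cx - x))
        offset = 0 if valid else 90
        draw.pieslice([x - radius, y - radius, x + radius, y + radius],
                      toward - 30 + offset, toward + 30 + offset,
                      fill="white")                        # 60-degree mouth
    return img


def make_classifier(arch="alexnet"):
    """Replace the ImageNet head of a pretrained model with a 2-class head
    (valid vs. invalid Kanizsa figure)."""
    if arch == "alexnet":
        net = models.alexnet(weights="IMAGENET1K_V1")
        net.classifier[6] = nn.Linear(4096, 2)
    else:
        net = models.googlenet(weights="IMAGENET1K_V1")
        net.fc = nn.Linear(1024, 2)
    return net


if __name__ == "__main__":
    to_tensor = T.Compose([T.ToTensor(),
                           T.Normalize(mean=[0.485, 0.456, 0.406],
                                       std=[0.229, 0.224, 0.225])])
    batch = torch.stack([to_tensor(kanizsa_triangle(valid=v))
                         for v in (True, False)])
    model = make_classifier("alexnet").eval()
    with torch.no_grad():
        logits = model(batch)                              # shape (2, 2)
    print(logits)
```

Fine-tuning such a head on the generated stimuli (e.g., with a standard cross-entropy loop) would be one way to probe whether the networks pick up on the closure cue rather than on low-level pixel statistics.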