Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples
Although deep learning has shown great success in recent years, researchers have discovered a critical flaw: small, imperceptible changes to a system's input can drastically change its output classification. Nearly all existing deep learning classification frameworks are vulnerable to these attacks. However, the susceptibility of deep sparse coding models to adversarial examples has not been examined. Here, we show that classifiers based on a deep sparse coding model, whose classification accuracy is competitive with that of a variety of deep neural network models, are robust to adversarial examples that effectively fool those same deep learning models. We demonstrate both quantitatively and qualitatively that the robustness of deep sparse coding models to adversarial examples arises from two key properties. First, because deep sparse coding models learn general features corresponding to generators of the dataset as a whole, rather than highly discriminative features for distinguishing specific classes, the resulting classifiers are less dependent on idiosyncratic features that might be more easily exploited. Second, because deep sparse coding models utilize fixed point attractor dynamics with top-down feedback, it is more difficult to find small changes to the input that drive the resulting representations out of the correct attractor basin.
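To make the attractor intuition concrete, the sketch below infers a sparse code as the fixed point of iterative dynamics and checks how stable the code's active set is under a small input perturbation. This is a minimal single-layer illustration using ISTA, not the paper's deep architecture with top-down feedback; the dictionary, perturbation scale, and all parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code for input x under dictionary D by iterating ISTA
    to a fixed point of  min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the data term's gradient
    eta = 1.0 / L                      # step size that guarantees convergence
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        r = x - D @ a                  # reconstruction residual
        a = soft_threshold(a + eta * (D.T @ r), eta * lam)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms
x = D @ (rng.standard_normal(256) * (rng.random(256) < 0.05))  # sparsely generated input

a_clean = sparse_code(x, D)
a_perturbed = sparse_code(x + 0.01 * rng.standard_normal(64), D)

# A small input perturbation tends to leave the active set (which atoms are
# nonzero, i.e. which attractor basin the dynamics settle into) largely
# unchanged -- the intuition behind the abstract's robustness claim.
print("active set overlap:",
      np.mean((a_clean != 0) == (a_perturbed != 0)))
```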