Multi-way Encoding for Robustness
Deep models are state-of-the-art for many computer vision tasks, including image classification and object detection. However, it has been shown that deep models are vulnerable to adversarial examples. We highlight how one-hot encoding directly contributes to this vulnerability and propose breaking away from this widely used but highly vulnerable mapping. We demonstrate that by leveraging a different output encoding, multi-way encoding, we decorrelate source and target models, making target models more secure. Our approach makes it more difficult for adversaries to find useful gradients for generating adversarial attacks on the target model. We demonstrate robustness against black-box and white-box attacks on four benchmark datasets. We also present the strength of our approach in the form of an attack on model watermarking, where decorrelating a target model from a source model undermines the watermark.
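To make the idea concrete, below is a minimal sketch of one way an output encoding other than one-hot could be realized; it is not the paper's exact recipe. The codeword dimension, the random unit-norm codebook, the MSE regression loss, and the toy backbone are all illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's exact method): replace the
# one-hot output layer with fixed random codewords ("multi-way encoding") and
# classify by nearest codeword.
import torch
import torch.nn as nn

num_classes, code_dim = 10, 64  # code_dim > num_classes is an assumption

# One fixed random unit-norm codeword per class (assumed construction).
codebook = torch.randn(num_classes, code_dim)
codebook = codebook / codebook.norm(dim=1, keepdim=True)

# Toy backbone sized for MNIST-like inputs; any encoder would do.
model = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, code_dim)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    """Regress the network output onto the codeword of the true class."""
    out = model(x)
    loss = nn.functional.mse_loss(out, codebook[y])  # loss choice is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def predict(x):
    """Predicted label = class whose codeword is nearest to the output."""
    out = model(x)                      # (batch, code_dim)
    dists = torch.cdist(out, codebook)  # (batch, num_classes)
    return dists.argmin(dim=1)

# Toy usage with random MNIST-shaped data.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, num_classes, (32,))
print(train_step(x, y), predict(x)[:5])
```

Under this sketch, a source model and a target model trained with independently drawn codebooks map classes to unrelated directions in output space, so gradients computed on the source model transfer poorly to the target, which is the decorrelation effect the abstract describes.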