Scaling in Depth: Unlocking Robustness Certification on ImageNet

01/29/2023
by Kai Hu, et al.

Notwithstanding the promise of Lipschitz-based approaches to deterministically train and certify robust deep networks, state-of-the-art results have only made successful use of feed-forward Convolutional Networks (ConvNets) on low-dimensional data, e.g., CIFAR-10. Because ConvNets often suffer from vanishing gradients when going deep, large-scale datasets with many classes, e.g., ImageNet, have remained out of practical reach. This paper investigates ways to scale up certifiably robust training to Residual Networks (ResNets). First, we introduce the Linear ResNet (LiResNet) architecture, which uses a new residual block designed to facilitate tighter Lipschitz bounds than a conventional residual block. Second, we introduce Efficient Margin MAximization (EMMA), a loss function that stabilizes robust training by simultaneously penalizing worst-case adversarial examples from all classes. Combining LiResNet and EMMA, we achieve new state-of-the-art robust accuracy on CIFAR-10/100 and Tiny-ImageNet under ℓ_2-norm-bounded perturbations. Moreover, for the first time, we scale deterministic robustness guarantees up to ImageNet, opening the door to applying deterministic certification in real-world applications.
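
The tighter-bound claim for the LiResNet block has a simple reading: if the residual branch is a single convolution W with no nonlinearity, the block computes the affine map x ↦ x + Wx = (I + W)x, so its Lipschitz constant is the spectral norm ‖I + W‖_2, which can be estimated tightly by power iteration on the combined operator, rather than through the looser composition bound 1 + ‖W‖_2 that applies to a generic residual block x + g(x). The sketch below illustrates this idea; it is a minimal reading of the abstract, not the authors' implementation, and the names LinearResidualBlock and lipschitz_bound are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearResidualBlock(nn.Module):
    """Hypothetical sketch of a 'linear' residual block: the branch is a
    single convolution, so the whole block is the affine map
    x -> x + conv(x) = (I + W) x."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No nonlinearity inside the branch: the block stays affine.
        return x + self.conv(x)

    @torch.no_grad()
    def lipschitz_bound(self, input_shape, iters: int = 100) -> float:
        """Estimate ||I + W||_2 by power iteration on the combined
        operator A = I + W (the bias is ignored, since it does not
        affect the Lipschitz constant). input_shape is (C, H, W)."""
        weight, pad = self.conv.weight, self.conv.padding
        v = torch.randn(1, *input_shape)
        for _ in range(iters):
            v = v / v.norm()
            u = v + F.conv2d(v, weight, padding=pad)            # A v
            v = u + F.conv_transpose2d(u, weight, padding=pad)  # A^T A v
        v = v / v.norm()
        return (v + F.conv2d(v, weight, padding=pad)).norm().item()
```

Since ‖I + W‖_2 ≤ 1 + ‖W‖_2 always holds, certifying with the combined operator can only tighten the per-block bound, and the gap compounds multiplicatively across depth.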
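
EMMA's description, simultaneously penalizing worst-case adversarial examples from all classes, suggests a GloRo-style margin loss in which every non-label logit is inflated by the largest gain an ε-bounded adversary could obtain before cross-entropy is applied. The sketch below is a hedged reconstruction under that reading, not the paper's exact formulation; the per-class Lipschitz bounds passed in as lipschitz are assumed to come from the network's layer-wise certificates, and their computation is not shown.

```python
import torch
import torch.nn.functional as F

def margin_loss_all_classes(logits: torch.Tensor,
                            labels: torch.Tensor,
                            lipschitz: torch.Tensor,
                            eps: float) -> torch.Tensor:
    """Hypothetical EMMA-style loss.

    logits:    (B, C) network outputs
    labels:    (B,)   ground-truth classes
    lipschitz: (B, C) bound on Lip(z_j - z_y) for each class j
    eps:       certification radius
    """
    # Inflate every non-label logit by the worst case an eps-bounded
    # adversary could achieve; the true-class logit stays untouched.
    is_label = F.one_hot(labels, num_classes=logits.size(1)).bool()
    worst_case = torch.where(is_label, logits, logits + eps * lipschitz)
    # Cross-entropy on the inflated logits penalizes *all* classes whose
    # certified margin is too small, not just the top competitor.
    return F.cross_entropy(worst_case, labels)
```

Compared to penalizing only the single runner-up class, spreading the penalty across all classes yields a denser gradient signal, which is consistent with the abstract's claim that EMMA stabilizes robust training on datasets with many classes.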
