Towards Assessment of Randomized Mechanisms for Certifying Adversarial Robustness
As a certified defense technique, randomized smoothing has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions remain unanswered, such as (i) whether the Gaussian mechanism is an appropriate option for certifying ℓ_2-norm robustness, and (ii) whether there is an appropriate randomized mechanism for certifying ℓ_∞-norm robustness on high-dimensional datasets. To shed light on these questions, we introduce a generic framework that unifies the existing frameworks for assessing randomized mechanisms. Under our framework, we define the magnitude of the noise a mechanism requires to certify a given level of robustness as the metric of the mechanism's appropriateness, and we derive lower bounds on this metric as the criteria for assessment. We assess the Gaussian and Exponential mechanisms by comparing the magnitude of noise each needs against these criteria, and conclude that the Gaussian mechanism is an appropriate option for certifying both ℓ_2-norm and ℓ_∞-norm robustness. The veracity of our framework is verified by evaluations on CIFAR10 and ImageNet.
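To make the noise-versus-robustness trade-off concrete, the following is a minimal sketch of the standard Gaussian randomized-smoothing certificate (the bound of Cohen et al., 2019, which this line of work builds on, not the assessment framework proposed here). It uses the two-class form R = (σ/2)(Φ⁻¹(p_A) − Φ⁻¹(p_B)); with the common simplification p_B = 1 − p_A this reduces to R = σ·Φ⁻¹(p_A). The function name and the ℓ_∞ conversion via the √d norm inequality are illustrative choices, not from the abstract.

```python
from math import sqrt
from statistics import NormalDist  # stdlib standard normal (Python 3.8+)

def certified_l2_radius(p_a: float, sigma: float) -> float:
    """Certified l2 radius for a smoothed classifier whose top class has
    lower-bounded probability p_a under N(0, sigma^2 I) input noise.
    Uses R = sigma * Phi^{-1}(p_a), the p_B = 1 - p_a simplification.
    Returns 0 when p_a <= 0.5 (no certificate possible)."""
    if p_a <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_a)

def certified_linf_radius(p_a: float, sigma: float, d: int) -> float:
    """Naive l_inf certificate from the l2 one via ||delta||_2 <= sqrt(d)*||delta||_inf.
    Illustrates why l_inf certification degrades with input dimension d."""
    return certified_l2_radius(p_a, sigma) / sqrt(d)
```

Note how the ℓ_∞ radius obtained this way shrinks as 1/√d, which is exactly why certifying ℓ_∞-norm robustness on high-dimensional datasets (question (ii) above) is delicate, and why the required noise magnitude is a natural metric for comparing mechanisms.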