Region-Wise Attack: On Efficient Generation of Robust Physical Adversarial Examples
Deep neural networks (DNNs) have been shown to be susceptible to adversarial example attacks. Most existing works achieve this malicious objective by crafting subtle pixel-wise perturbations, which are difficult to launch in the physical world due to inevitable transformations (e.g., varying photographic distances and angles). Recently, a few works have studied the generation of physical adversarial examples, but they generally require detailed knowledge of the target model a priori, which is often impractical. In this work, we propose a novel physical adversarial attack for arbitrary black-box DNN models, namely Region-Wise Attack. Specifically, we show how to efficiently search for region-wise perturbations to the inputs and determine their shapes, locations, and colors via both top-down and bottom-up techniques. In addition, we introduce two fine-tuning techniques to further improve the robustness of our attack. Experimental results demonstrate the efficacy and robustness of the proposed Region-Wise Attack in the real world.
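To make the region-wise idea concrete, the sketch below shows a deliberately simplified, query-only search for a rectangular region's location, size, and solid color that lowers a black-box classifier's confidence in the true label. It is an illustrative assumption-laden stand-in, not the paper's actual top-down/bottom-up search or its fine-tuning steps; the `model` callable (returning class probabilities for an HxWx3 image in [0, 1]) is hypothetical.

```python
import numpy as np

def region_wise_search(x, true_label, model, n_queries=500, seed=0):
    """Illustrative random search for a region-wise perturbation.

    NOTE: This is a minimal sketch under assumed interfaces, not the
    authors' algorithm. `model(x)` is assumed to be a black-box that
    returns a probability vector for image `x` (H x W x 3, values in [0, 1]).
    """
    rng = np.random.default_rng(seed)
    h, w, _ = x.shape
    best_x = x.copy()
    best_conf = model(best_x)[true_label]
    for _ in range(n_queries):
        # Sample a candidate region: size, top-left corner, and solid color.
        rh = rng.integers(4, h // 4)
        rw = rng.integers(4, w // 4)
        top = rng.integers(0, h - rh)
        left = rng.integers(0, w - rw)
        color = rng.uniform(0.0, 1.0, size=3)
        cand = best_x.copy()
        cand[top:top + rh, left:left + rw] = color  # paint the region
        conf = model(cand)[true_label]
        if conf < best_conf:  # keep the region only if it hurts the true class
            best_x, best_conf = cand, conf
    return best_x, best_conf

# Usage with a dummy stand-in model (a real attack would query the target DNN):
if __name__ == "__main__":
    dummy_model = lambda img: np.full(10, 0.1) + 1e-3 * img.mean()
    img = np.random.default_rng(1).uniform(0, 1, size=(224, 224, 3))
    adv, conf = region_wise_search(img, true_label=3, model=dummy_model)
    print(f"true-class confidence after search: {conf:.4f}")
```

A solid-color rectangle is used here because such regions survive physical transformations (distance, angle, printing) far better than pixel-wise noise, which is the motivation the abstract gives for region-wise perturbations.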