From Sound Representation to Model Robustness

07/27/2020
by Mohamad Esmaeilpour, et al.

In this paper, we demonstrate the extreme vulnerability of a residual deep neural network architecture (ResNet-18) to adversarial attacks on time-frequency representations of audio signals. We evaluate Mel-frequency cepstral coefficients (MFCC), the short-time Fourier transform (STFT), and the discrete wavelet transform (DWT) for representing environmental sound signals in 2D spaces. ResNet-18 not only outperforms other dense deep learning classifiers (i.e., GoogLeNet and AlexNet) in recognition accuracy, but its adversarial examples also transfer well to other victim classifiers. Balancing the average budget allocated by adversaries against the cost of the attack, we observe an inverse relationship between high recognition accuracy and model robustness against six strong adversarial attacks. We investigate this relationship across the three 2D representation domains commonly used for audio signals, on three benchmark environmental sound datasets. The experimental results show that while the ResNet-18 classifier trained on DWT spectrograms achieves the highest recognition accuracy, attacking this model is relatively more costly for the adversary than attacking the MFCC and STFT representations.
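To make the setup concrete, the sketch below (not the paper's code; all function names, the toy linear "victim" classifier, and the parameter values are illustrative assumptions) shows one of the 2D representations discussed above, a magnitude STFT spectrogram, and an FGSM-style adversarial perturbation of it, i.e., adding `eps * sign(gradient of the loss w.r.t. the input)`:

```python
# Sketch, not the paper's implementation: STFT spectrogram of an audio
# signal + an FGSM-style perturbation against a toy logistic-regression
# "victim". Names and parameters are illustrative assumptions.
import numpy as np

def stft_spectrogram(signal, frame_len=256, hop=128):
    """Magnitude STFT: slice into overlapping frames, window, real FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # shape: (n_frames, frame_len // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

def fgsm_perturb(x, w, y, eps=0.01):
    """FGSM: move the input by eps in the sign of the loss gradient."""
    z = x.ravel() @ w
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid prediction
    grad = (p - y) * w                 # d(binary cross-entropy)/d(input)
    return x + eps * np.sign(grad).reshape(x.shape)

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)     # 1 s of synthetic "audio" at 16 kHz
spec = stft_spectrogram(audio)
w = rng.standard_normal(spec.size) * 0.01
adv = fgsm_perturb(spec, w, y=1.0, eps=0.01)
print(spec.shape, np.max(np.abs(adv - spec)))
```

The same perturbation recipe applies to MFCC or DWT inputs; in the paper's setting the gradient comes from the trained ResNet-18 rather than a linear model, and the attack budget corresponds to how large `eps` (or an iterated variant) must be to flip the prediction.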
