Adversarial Attack on Speech-to-Text Recognition Models

01/26/2019
by Xiaolei Liu, et al.

Recent studies have highlighted audio adversarial examples as a ubiquitous threat to state-of-the-art automatic speech recognition systems. Nonetheless, the efficiency and robustness of existing works are not yet satisfactory, owing to the large search space of audio. In this paper, we present the first study of weighted-sampling audio adversarial examples, focusing on the number and positions of distortion points to reduce the search space. We also propose a new attack scenario, the audio injection attack, which offers novel insights into the concealment of adversarial attacks. Our experiments show that we can generate audio adversarial examples with low noise and high robustness in minutes, compared with other state-of-the-art methods that require hours. We encourage readers to listen to these audio adversarial examples on this anonymous website.
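To make the core idea concrete, below is a minimal, hypothetical sketch of how weighted sampling can shrink an attack's search space: rather than perturbing every sample of the waveform, the attack draws a small set of positions (here weighted by local signal energy, one plausible choice; the paper's exact weighting scheme may differ) and optimizes distortion only at those points. The `asr_loss_gradient` function is a placeholder for a gradient obtained by backpropagating through a real speech-to-text model; this is not the authors' implementation.

```python
# Hedged sketch of a weighted-sampling adversarial perturbation.
# Assumption: asr_loss_gradient is a hypothetical stand-in for
# d(loss)/d(audio) from a real ASR model.
import numpy as np

rng = np.random.default_rng(0)

def asr_loss_gradient(audio: np.ndarray, target: str) -> np.ndarray:
    """Placeholder gradient; a real attack would backpropagate
    the target-transcription loss through an ASR model."""
    return rng.standard_normal(audio.shape)

def weighted_sampling_attack(audio, target, n_points=512, steps=100, eps=0.002):
    # Weight positions by local amplitude so distortion concentrates
    # in louder regions, where it is harder to perceive.
    weights = np.abs(audio) + 1e-8
    probs = weights / weights.sum()
    idx = rng.choice(audio.size, size=n_points, replace=False, p=probs)

    adv = audio.copy()
    for _ in range(steps):
        grad = asr_loss_gradient(adv, target)
        # Signed-gradient step applied only at the sampled positions,
        # reducing the search space from audio.size to n_points.
        adv[idx] -= eps * np.sign(grad[idx])
        adv = np.clip(adv, -1.0, 1.0)
    return adv

audio = rng.uniform(-0.5, 0.5, size=16000)  # 1 s of toy 16 kHz audio
adv = weighted_sampling_attack(audio, target="open the door")
print("max distortion:", np.max(np.abs(adv - audio)))
```

Restricting updates to the sampled index set is what makes the per-example attack cheap: the optimizer touches a few hundred samples instead of the full waveform.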
