Looking Enhances Listening: Recovering Missing Speech Using Images

02/13/2020
by   Tejas Srinivasan, et al.

Speech is understood better by using visual context; for this reason, there have been many attempts to use images to adapt automatic speech recognition (ASR) systems. Prior work, however, has shown that visually adapted ASR models use images only as a regularization signal, while completely ignoring their semantic content. In this paper, we present a set of experiments demonstrating the utility of the visual modality under noisy conditions. Our results show that multimodal ASR models can recover words that are masked in the input acoustic signal by grounding their transcriptions in the visual representations. We observe that integrating visual context can yield up to a 35% relative improvement in masked word recovery, demonstrating that end-to-end multimodal ASR systems can become more robust to noise by leveraging visual context.
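To make the idea concrete, below is a minimal sketch of one way a multimodal ASR model might fuse a global image embedding into its per-frame acoustic encoder states, so the decoder can fall back on visual evidence where the speech signal was masked. This is not the authors' implementation; the module, dimensions, and gating scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisualFusion(nn.Module):
    """Hypothetical gated-fusion layer (illustrative, not the paper's model):
    mixes a global image embedding into per-frame acoustic encoder states."""

    def __init__(self, audio_dim: int = 512, image_dim: int = 2048):
        super().__init__()
        self.img_proj = nn.Linear(image_dim, audio_dim)   # map image features into audio space
        self.gate = nn.Linear(2 * audio_dim, audio_dim)   # per-frame mixing gate

    def forward(self, audio_states: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # audio_states: (batch, time, audio_dim) encoder outputs; masked words
        #   appear as low-information (e.g. silenced) frames.
        # image_feat:   (batch, image_dim), e.g. pooled CNN features of the image.
        img = self.img_proj(image_feat).unsqueeze(1)       # (batch, 1, audio_dim)
        img = img.expand(-1, audio_states.size(1), -1)     # broadcast over time
        g = torch.sigmoid(self.gate(torch.cat([audio_states, img], dim=-1)))
        # Gated residual: per frame, decide how much visual context to mix in,
        # letting visual evidence dominate where the acoustics carry little signal.
        return g * audio_states + (1 - g) * img


# Toy usage: 2 utterances, 100 frames each, with a 2048-d image vector per utterance.
fusion = VisualFusion()
audio = torch.randn(2, 100, 512)
image = torch.randn(2, 2048)
out = fusion(audio, image)   # (2, 100, 512), same shape as the input encoder states
```

The gating choice here is one plausible design: it lets the model attend to the image only at frames where the acoustic signal is uninformative, which matches the paper's finding that visual context helps specifically when words are masked.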
