Learning Audio-Visual Speech Representation by Masked Multimodal Cluster Prediction

01/05/2022
by Bowen Shi, et al.

Video recordings of speech contain correlated audio and visual information, providing a strong signal for speech representation learning from the speaker's lip movements and the produced sound. We introduce Audio-Visual Hidden Unit BERT (AV-HuBERT), a self-supervised representation learning framework for audio-visual speech, which masks multi-stream video input and predicts automatically discovered and iteratively refined multimodal hidden units. AV-HuBERT learns powerful audio-visual speech representations benefiting both lip-reading and automatic speech recognition. On the largest public lip-reading benchmark LRS3 (433 hours), AV-HuBERT achieves 32.5% WER with only 30 hours of labeled data, outperforming the former state-of-the-art approach (33.6%) trained with a thousand times more transcribed video data (31K hours). The lip-reading WER is further reduced to 26.9% when using all 433 hours of labeled data from LRS3 and combined with self-training. Using our audio-visual representation on the same benchmark for audio-only speech recognition leads to a 40% relative WER reduction over the state-of-the-art performance (1.3% vs 2.3%). Our code and models are available at https://github.com/facebookresearch/av_hubert
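The core objective described above is masked prediction of discrete cluster targets from fused audio-visual features. Below is a minimal sketch of that idea, not the authors' implementation: module names, feature dimensions, and the simple linear fusion are assumptions for illustration (the released model uses a ResNet video front-end and a transformer encoder; see the linked repository for details).

```python
# Minimal sketch of a masked multimodal cluster-prediction objective.
# Hypothetical dimensions and modules; not the official AV-HuBERT code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedClusterPredictor(nn.Module):
    def __init__(self, audio_dim=104, video_dim=512, hidden_dim=768,
                 num_clusters=500, num_layers=6):
        super().__init__()
        # Fuse the two modality streams into a single frame-level feature.
        self.fuse = nn.Linear(audio_dim + video_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Learned embedding substituted for masked frames.
        self.mask_emb = nn.Parameter(torch.zeros(hidden_dim))
        # Classify each frame into one of the discovered hidden units.
        self.head = nn.Linear(hidden_dim, num_clusters)

    def forward(self, audio_feats, video_feats, mask, cluster_ids):
        # audio_feats: (B, T, audio_dim); video_feats: (B, T, video_dim)
        # mask: (B, T) bool, True where frames are masked
        # cluster_ids: (B, T) target hidden-unit assignments per frame
        x = self.fuse(torch.cat([audio_feats, video_feats], dim=-1))
        x = torch.where(mask.unsqueeze(-1), self.mask_emb.expand_as(x), x)
        x = self.encoder(x)
        logits = self.head(x)
        # Loss on masked frames only, as in masked prediction training.
        return F.cross_entropy(logits[mask], cluster_ids[mask])
```

In the framework described, the cluster targets are themselves refined iteratively: an initial clustering (e.g., of acoustic features) provides the first round of hidden units, and later rounds re-cluster the model's own learned representations.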
