Speaker-Targeted Audio-Visual Models for Speech Recognition in Cocktail-Party Environments

06/13/2019
by Guan-Lin Chao, et al.

Speech recognition in cocktail-party environments remains a significant challenge for state-of-the-art speech recognition systems, as it is extremely difficult to extract the acoustic signal of an individual speaker from a background of overlapping speech with similar frequency and temporal characteristics. We propose the use of speaker-targeted acoustic and audio-visual models for this task. We complement the acoustic features in a hybrid DNN-HMM model with information about the target speaker's identity as well as visual features from the mouth region of the target speaker. Experiments were performed using simulated cocktail-party data generated from the GRID audio-visual corpus by overlapping two speakers' speech on a single acoustic channel. Our audio-only baseline achieved a WER of 26.3%. The audio-visual model improved the WER to 4.4%. Introducing speaker identity information had an even more pronounced effect, improving the WER to 3.6%. Combining both approaches, however, did not significantly improve performance further. Our work demonstrates that speaker-targeted models can significantly improve speech recognition in cocktail-party environments.
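
The abstract describes augmenting the input of a hybrid DNN-HMM acoustic model with a target-speaker identity signal and visual mouth-region features. The sketch below is purely illustrative and is not the authors' implementation: the feature dimensions, layer sizes, speaker-embedding size, and number of senone targets are all assumptions chosen only to show how the three input streams could be concatenated and fed to a feed-forward DNN that emits senone posteriors for an HMM decoder.

```python
# Hypothetical sketch of a speaker-targeted audio-visual DNN acoustic model.
# All dimensions below are assumptions, not values from the paper.
import torch
import torch.nn as nn


class SpeakerTargetedAVModel(nn.Module):
    def __init__(self,
                 n_acoustic=40 * 11,   # e.g. 40-dim filterbanks with an 11-frame context window
                 n_speakers=34,        # assumed size of the speaker inventory (GRID-scale)
                 d_speaker=32,         # assumed speaker-embedding size
                 n_visual=50,          # assumed mouth-region feature dimension
                 d_hidden=1024,
                 n_senones=2000):      # assumed number of tied HMM states
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, d_speaker)
        d_in = n_acoustic + d_speaker + n_visual
        self.dnn = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_senones),  # senone scores for the HMM decoder
        )

    def forward(self, acoustic, speaker_id, visual):
        # acoustic: (batch, n_acoustic), speaker_id: (batch,), visual: (batch, n_visual)
        x = torch.cat([acoustic, self.speaker_emb(speaker_id), visual], dim=-1)
        return self.dnn(x).log_softmax(dim=-1)


# Toy usage on random tensors, just to show the expected shapes.
model = SpeakerTargetedAVModel()
log_post = model(torch.randn(8, 440), torch.randint(0, 34, (8,)), torch.randn(8, 50))
print(log_post.shape)  # torch.Size([8, 2000])
```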
