An Experimental Study on Private Aggregation of Teacher Ensemble Learning for End-to-End Speech Recognition

10/11/2022
by   Chao-Han Huck Yang, et al.

Differential privacy (DP) is one data protection avenue for safeguarding user information used to train deep models: it imposes noisy distortion on private data. Such noise perturbation often causes severe performance degradation in automatic speech recognition (ASR) when a privacy budget ε must be met. Private aggregation of teacher ensembles (PATE) uses ensemble probabilities to improve ASR accuracy under the noise levels dictated by small values of ε. We extend PATE learning to work with dynamic patterns, namely speech utterances, and provide a first experimental demonstration that it prevents acoustic data leakage in ASR training. We evaluate three end-to-end deep models, including LAS, hybrid CTC/attention, and the RNN transducer, on the open-source LibriSpeech and TIMIT corpora. PATE learning-enhanced ASR models outperform the benchmark DP-SGD mechanisms, especially under strict DP budgets, giving relative word error rate reductions of 26.2% on LibriSpeech. We also introduce a DP-preserving ASR solution for pretraining on public speech corpora.
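The core PATE aggregation step referenced above can be sketched as a noisy-max vote over teacher predictions: each teacher labels a query, per-class vote counts are perturbed with Laplace noise (scale controlled by a parameter γ tied to the privacy budget), and the noisy argmax becomes the label the student trains on. The function name and the framing of teacher outputs as discrete class labels are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def noisy_max_aggregate(teacher_labels, num_classes, gamma, rng):
    """PATE-style noisy-max aggregation (illustrative sketch).

    teacher_labels: 1-D array of integer class labels, one per teacher.
    gamma: noise parameter; Laplace noise with scale 1/gamma is added to
           each per-class vote count (smaller gamma -> more noise ->
           stronger privacy, lower accuracy).
    rng: a numpy random Generator.
    Returns the class index with the highest noisy vote count.
    """
    # Tally how many teachers voted for each class.
    counts = np.bincount(teacher_labels, minlength=num_classes).astype(float)
    # Perturb the counts with independent Laplace noise.
    counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    # The student only ever sees this noisy argmax, never the raw votes.
    return int(np.argmax(counts))

# Hypothetical usage: 100 teachers vote on one query.
votes = np.array([0] * 90 + [1] * 10)  # strong consensus for class 0
label = noisy_max_aggregate(votes, num_classes=10, gamma=0.1,
                            rng=np.random.default_rng(0))
```

The privacy guarantee comes from the fact that changing one teacher's training data can shift each vote count by at most one, so the Laplace noise masks any single teacher's influence on the released label.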
