Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve

10/20/2022
by Giannis Daras, et al.

We find a surprising connection between multitask learning and robustness to neuron failures. Our experiments show that bilingual language models retain higher performance under various neuron perturbations (random deletions, magnitude pruning, and weight noise) than equivalent monolingual models. We provide a theoretical justification for this robustness by mathematically analyzing linear representation learning and showing that multitasking creates more robust representations. Our analysis connects robustness to spectral properties of the learned representation and proves that multitasking leads to higher robustness for diverse task vectors. We open-source our code and models: https://github.com/giannisdaras/multilingual_robustness
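To make the three perturbations named above concrete, the sketch below applies each of them to the weight matrices of a generic PyTorch model. This is a minimal illustration assuming a `torch.nn.Module`; the function name, the per-tensor pruning, and the default fractions are illustrative choices, not the authors' exact evaluation code (see the linked repository for that).

```python
import torch

def perturb_weights(model, mode="random_delete", frac=0.1, noise_std=0.01):
    """Illustrative neuron/weight perturbations: 'random_delete',
    'magnitude_prune', or 'weight_noise'. Modifies the model in place."""
    with torch.no_grad():
        for _, p in model.named_parameters():
            if p.dim() < 2:  # skip biases and norm parameters
                continue
            if mode == "random_delete":
                # Zero out a random fraction of weights.
                mask = (torch.rand_like(p) >= frac).to(p.dtype)
                p.mul_(mask)
            elif mode == "magnitude_prune":
                # Zero out the smallest-magnitude fraction of weights
                # (computed per tensor, for simplicity).
                k = int(frac * p.numel())
                if k > 0:
                    thresh = p.abs().flatten().kthvalue(k).values
                    p.mul_((p.abs() > thresh).to(p.dtype))
            elif mode == "weight_noise":
                # Add i.i.d. Gaussian noise to every weight.
                p.add_(noise_std * torch.randn_like(p))
```

Under this setup, a robustness comparison would load a bilingual and a monolingual checkpoint, apply the same perturbation at increasing strengths (`frac` or `noise_std`), and re-evaluate each model after every perturbation.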
