Do RNN States Encode Abstract Phonological Processes?

04/01/2021
by Miikka Silfverberg et al.

Sequence-to-sequence models have delivered impressive results in word formation tasks such as morphological inflection, often learning to model subtle morphophonological details with limited training data. Despite this performance, the opacity of neural models makes it difficult to determine whether they learn complex generalizations or merely memorize each morphophonological process by rote. To investigate whether complex alternations are simply memorized or whether a sequence-to-sequence model generalizes across related sound changes, we perform several experiments on Finnish consonant gradation – a complex set of sound changes triggered in some words by certain suffixes. We find that our models often – though not always – encode 17 different consonant gradation processes in a handful of dimensions in the RNN. We also show that by scaling the activations in these dimensions we can control whether consonant gradation occurs and the direction of the gradation.
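The intervention the abstract describes amounts to rescaling a few hidden-state dimensions at each decoding step. Below is a minimal, hypothetical PyTorch sketch of that idea – it is not the authors' code, and the dimension indices, scaling factor, and model sizes are illustrative placeholders; the paper identifies the relevant dimensions empirically for its trained models.

```python
import torch
import torch.nn as nn

# Toy sizes; a real inflection model would be trained on character sequences.
VOCAB, EMB, HID = 64, 32, 128
GRADATION_DIMS = [5, 17, 42]  # hypothetical dimensions tied to gradation
SCALE = 2.0                   # >1 amplifies the process, <1 suppresses it

embed = nn.Embedding(VOCAB, EMB)
cell = nn.GRUCell(EMB, HID)
out = nn.Linear(HID, VOCAB)

@torch.no_grad()
def transduce(tokens, scale=1.0):
    """Greedy pass over input tokens, rescaling the chosen hidden
    dimensions after every recurrent step."""
    h = torch.zeros(1, HID)
    preds = []
    for t in tokens:
        h = cell(embed(torch.tensor([t])), h)
        h[:, GRADATION_DIMS] *= scale  # the intervention on the RNN state
        preds.append(out(h).argmax(dim=-1).item())
    return preds

seq = [3, 9, 27, 1]
print(transduce(seq, scale=1.0))    # baseline behaviour
print(transduce(seq, scale=SCALE))  # gradation dimensions amplified
```

With a trained model, comparing the two outputs would show whether scaling those dimensions flips the gradated and ungradated forms, which is the kind of causal control the paper reports.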
