The cognitive roots of regularization in language

03/09/2017
by Vanessa Ferdinand, et al.

Regularization occurs when the output a learner produces is less variable than the linguistic data they observed. In an artificial language learning experiment, we show that there exist at least two independent sources of regularization bias in cognition: a domain-general source based on cognitive load and a domain-specific source triggered by linguistic stimuli. Both of these factors modulate how frequency information is encoded and produced, but only the production-side modulations result in regularization (i.e., cause learners to eliminate variation from the observed input). We formalize the definition of regularization as the reduction of entropy and find that entropy measures are better at identifying regularization behavior than frequency-based analyses. We also use a model of cultural transmission to extrapolate from our experimental data and predict the amount of regularization that would develop in each experimental condition if the artificial language were transmitted over several generations of learners. Here we find an interaction between cognitive load and linguistic domain, suggesting that the effect of cognitive constraints can become more complex when put into the context of cultural evolution: although learning biases certainly carry information about the course of language evolution, we should not expect a one-to-one correspondence between the micro-level processes that regularize linguistic datasets and the macro-level evolution of linguistic regularity.
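The entropy-based definition of regularization described in the abstract can be illustrated with a minimal sketch: a learner regularizes when the Shannon entropy of their produced variant frequencies is lower than the entropy of the variant frequencies in their training input. The variant names and counts below are hypothetical illustrations, not data or code from the paper.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

def regularization(input_counts, output_counts):
    """Entropy reduction from input to output.

    Positive values mean the learner's productions are less variable
    (lower entropy) than the data they were trained on, i.e. the
    learner has regularized; negative values indicate variabilization.
    """
    return entropy(input_counts) - entropy(output_counts)

# Hypothetical example: a learner trained on a 60/40 mixture of two
# variants who then produces an 80/20 mixture has regularized.
training = Counter({"variant_a": 6, "variant_b": 4})
production = Counter({"variant_a": 8, "variant_b": 2})

print(regularization(list(training.values()), list(production.values())))
# ~0.25 bits of entropy reduction
```

In an iterated-learning extrapolation of the kind the abstract mentions, each generation's production frequencies would serve as the training input for the next generation, so a measure like this can be tracked across simulated generations to see whether entropy falls over time.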
