Taylor's law for Human Linguistic Sequences
Taylor's law describes the fluctuation characteristics of a system in which the deviation of an event's count within a time span grows as a power of its mean (for count data, the exponent of an i.i.d. process is 0.50). Although Taylor's law has been applied to many natural and social systems, its application to language has been scarce. This article describes a Taylor analysis of over 1,100 texts across 14 languages. The Taylor exponents of natural language texts were found to take almost the same value. The exponent was also compared for other language-related data, such as the CHILDES corpus, music, and programming-language source code. The results show how the Taylor exponent quantifies the fundamental structural complexity underlying linguistic time series. The article also shows how these findings apply to evaluating language models. Specifically, text generated by an LSTM-based model exhibited a Taylor exponent of 0.50, identical to that of an i.i.d. process, thus revealing a limitation of that neural model.
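As a rough illustration of this kind of analysis, the sketch below estimates a Taylor exponent by segmenting a word sequence into fixed-size windows, computing the mean and standard deviation of each word's count across windows, and fitting the slope on log-log axes. The window size, the least-squares fit, and the function name `estimate_taylor_exponent` are illustrative assumptions, not the article's exact procedure.

```python
# Minimal sketch of Taylor-exponent estimation for a word sequence.
# Assumptions not taken from the article: fixed-size non-overlapping windows
# and an ordinary least-squares fit on log-log axes; names are illustrative.
from collections import Counter
import numpy as np

def estimate_taylor_exponent(words, window_size=1000):
    """Fit sigma ~ mu**alpha over per-window word counts and return alpha."""
    # Split the sequence into non-overlapping windows of equal length.
    windows = [words[i:i + window_size]
               for i in range(0, len(words) - window_size + 1, window_size)]
    counts = [Counter(w) for w in windows]

    means, stds = [], []
    for word in set(words):
        per_window = np.array([c[word] for c in counts], dtype=float)
        mu, sigma = per_window.mean(), per_window.std()
        if mu > 0 and sigma > 0:  # keep only points usable on log-log axes
            means.append(mu)
            stds.append(sigma)

    # Taylor's law: sigma grows as mu**alpha, so alpha is the log-log slope.
    alpha, _ = np.polyfit(np.log(means), np.log(stds), 1)
    return alpha

if __name__ == "__main__":
    # An i.i.d. sample from a Zipf-like vocabulary should yield an exponent
    # near 0.5; the article reports that natural texts deviate from this value.
    rng = np.random.default_rng(0)
    iid_words = [str(w) for w in rng.zipf(1.2, size=200_000)]
    print(f"Taylor exponent of i.i.d. sample: "
          f"{estimate_taylor_exponent(iid_words):.2f}")
```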