Article ID: 7315496
Journal: Cortex
Published Year: 2014
Pages: 12
File Type: PDF
Abstract
We review recent progress in characterising long-range patterns of word use in language with methods from information theory. Two levels of structure in language are considered. The first level corresponds to the patterns of word usage over different contextual domains. A direct application of information theory to quantify the specificity of words across different sections of a linguistic sequence leads to a measure of semantic information. Moreover, a natural scale emerges that characterises the typical size of semantic structures. Since the information measure is built from additive contributions of individual words, the words can be ranked by their weight in the total information. This allows the extraction of the keywords most relevant to the semantic content of the sequence, without any prior knowledge of the language.

The second level considered is the complex structure of correlations among words in linguistic sequences. The degree of order in language can be quantified by means of the entropy. Reliable entropy estimates were obtained, by means of lossless compression algorithms, from text corpora spanning several linguistic families. The value of the entropy fluctuates across languages, since it depends on linguistic organisation at various levels. However, a measure of relative entropy that specifically quantifies the degree of word ordering takes an almost constant value across all the linguistic families studied. This suggests that the entropy of word ordering is a novel quantitative linguistic universal.
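The keyword-extraction idea lends itself to a compact illustration. The sketch below is a simplified take on the measure described above, not the authors' exact estimator: it partitions the token stream into equal sections, computes each word's entropy across sections, and scores the word by the frequency-weighted gap between a shuffled-text baseline and that entropy. The function names and parameters (num_sections, num_shuffles) are illustrative choices, not values from the paper.

```python
import math
import random
from collections import Counter, defaultdict

def section_entropy(counts):
    """Shannon entropy (bits) of one word's distribution over sections."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

def keyword_scores(words, num_sections=32, num_shuffles=10, seed=0):
    """Score each word by the frequency-weighted drop of its cross-section
    entropy below a shuffled-text baseline; high scores mark words whose
    occurrences cluster in specific parts of the text."""
    rng = random.Random(seed)
    n = len(words)

    def per_word_entropy(seq):
        # counts[w][j] = number of occurrences of word w in section j
        counts = defaultdict(lambda: [0] * num_sections)
        for i, w in enumerate(seq):
            j = min(i * num_sections // n, num_sections - 1)
            counts[w][j] += 1
        return {w: section_entropy(c) for w, c in counts.items()}

    h_obs = per_word_entropy(words)

    # Baseline: mean entropy over a few random shufflings of the text,
    # which destroy semantic clustering but keep all word frequencies.
    h_shuf = Counter()
    for _ in range(num_shuffles):
        shuffled = words[:]
        rng.shuffle(shuffled)
        for w, h in per_word_entropy(shuffled).items():
            h_shuf[w] += h / num_shuffles

    freq = Counter(words)
    return {w: (freq[w] / n) * (h_shuf[w] - h_obs[w]) for w in freq}
```

Sorting the scores in descending order, e.g. `sorted(scores, key=scores.get, reverse=True)[:20]`, then yields candidate keywords without any language-specific resources, which is the point made in the abstract.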
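The "natural scale" mentioned above can be probed, under the same simplified assumptions, by scanning the number of sections and watching where the total information peaks; whether a clear peak appears with this crude estimator depends on the text, but the shape of the computation is the same.

```python
def characteristic_scale(words, section_counts=(4, 8, 16, 32, 64, 128)):
    """Total information as a function of the number of sections; a peak,
    if present, hints at a characteristic semantic scale of the text."""
    totals = {p: sum(keyword_scores(words, num_sections=p).values())
              for p in section_counts}
    return max(totals, key=totals.get), totals
```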
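The second level, the entropy of word ordering, can also be mocked up with off-the-shelf compression, in the spirit of (though much cruder than) the estimators used in the studies reviewed. Compressing the original token stream and a word-shuffled copy with the same lossless algorithm, and taking the difference of the per-word compressed sizes, isolates the information carried purely by word order, since both versions share the same vocabulary and word frequencies. `bz2` here is an arbitrary stand-in for whatever compressor the actual analyses used.

```python
import bz2
import random

def bits_per_word(words):
    """Crude entropy-rate estimate: compressed size of the token stream
    (in bits) divided by the number of tokens. Honest estimates need
    long texts and careful extrapolation; this is only illustrative."""
    data = " ".join(words).encode("utf-8")
    return 8 * len(bz2.compress(data, 9)) / len(words)

def word_order_information(words, seed=0):
    """Relative entropy of word ordering: shuffling removes word-order
    structure while preserving word frequencies, so the gap between the
    two estimates measures the order-induced part of the entropy."""
    shuffled = words[:]
    random.Random(seed).shuffle(shuffled)
    return bits_per_word(shuffled) - bits_per_word(words)
```

The abstract's claim is that this gap, unlike the raw entropy itself, stays nearly constant across linguistic families.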
Related Topics
Life Sciences; Neuroscience; Behavioral Neuroscience