Article ID: 558439
Journal: Computer Speech & Language
Published Year: 2012
Pages: 26
File Type: PDF
Abstract

We present a new generative model of natural language, the latent words language model. This model associates a latent variable with every word in a text; the latent variable represents synonyms or related words of that word in the given context. We develop novel methods to train this model and to find the expected value of these latent variables for a given unseen text. The learned word similarities help to reduce the sparseness problems of traditional n-gram language models. We show that the model significantly outperforms interpolated Kneser–Ney smoothing and class-based language models on three different corpora. Furthermore, the latent variables are useful features for information extraction: for both semantic role labeling and word sense disambiguation, the performance of a supervised classifier increases when these variables are incorporated as extra features. This improvement is especially large when only a small annotated corpus is available for training.
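As a rough illustration (not the authors' exact formulation), a model of the kind described can be written as a hidden-sequence factorization: each latent word h_i is drawn from an n-gram model over latent words (a trigram dependency is assumed here for concreteness), and the observed word w_i is emitted from its latent word. Marginalizing over the latent sequence gives the sentence probability:

P(w_1,\dots,w_N) \;=\; \sum_{h_1,\dots,h_N} \prod_{i=1}^{N} P(h_i \mid h_{i-2}, h_{i-1}) \, P(w_i \mid h_i)

Under such a factorization, the emission distributions P(w_i \mid h_i) capture the synonym or related-word relations mentioned in the abstract, and the sum over latent sequences is what smooths the n-gram estimates.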

► We propose a novel generative model for learning synonyms and semantically related words from texts. ► The model improves word sense disambiguation. ► The model reduces the need for supervision in information extraction tasks.

Related Topics
Physical Sciences and Engineering / Computer Science / Signal Processing