Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
410639 | Neurocomputing | 2009 | 7 Pages |
Abstract
Topic models have been successfully used in information classification and retrieval. These models can capture word correlations in a collection of textual documents with a low-dimensional set of multinomial distributions, called “topics”. However, selecting the appropriate number of topics for a specific dataset is important but difficult. In this paper, we study the inherent connection between the best topic structure and the distances among topics in Latent Dirichlet allocation (LDA), and propose a density-based method for adaptively selecting the best LDA model. Experiments show that the proposed method can match the best performance of LDA without manually tuning the number of topics.
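The abstract does not give the authors' exact density criterion, but the core idea of scoring candidate topic counts by how far apart the learned topics lie can be sketched as follows. This is a hypothetical illustration, assuming topics are represented as topic-word distributions (rows of a K×V matrix) and that average pairwise cosine distance serves as the separation measure; the functions `avg_topic_distance` and `select_k` are illustrative names, not from the paper.

```python
import numpy as np

def avg_topic_distance(phi):
    """Average pairwise cosine distance among the K topic-word
    distributions in phi (a K x V matrix, one topic per row)."""
    norm = phi / np.linalg.norm(phi, axis=1, keepdims=True)
    sim = norm @ norm.T                    # pairwise cosine similarities
    iu = np.triu_indices(phi.shape[0], k=1)  # upper triangle, no diagonal
    return float(np.mean(1.0 - sim[iu]))   # mean distance over topic pairs

def select_k(models):
    """models: dict mapping a candidate topic count K to the fitted
    K x V topic-word matrix. Returns the K whose topics are, on
    average, farthest apart (best separated)."""
    return max(models, key=lambda k: avg_topic_distance(models[k]))
```

With this criterion, a model whose topics nearly coincide (redundant topics, K too large or poorly fit) scores near zero, while well-separated topics score higher, so the selected K is the one producing the most distinct topic structure.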
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Juan Cao, Tian Xia, Jintao Li, Yongdong Zhang, Sheng Tang