Article ID: 397714
Journal: International Journal of Approximate Reasoning
Published Year: 2013
Pages: 20
File Type: PDF
Abstract

Variable selection is an important problem for cluster analysis of high-dimensional data. It is also a difficult one. The difficulty originates not only from the lack of class information but also from the fact that high-dimensional data are often multifaceted and can be meaningfully clustered in multiple ways. In such cases, the effort to find one subset of attributes that presumably gives the “best” clustering may be misguided. It makes more sense to identify the various facets of a data set (each based on a subset of attributes), cluster the data along each one, and present the results to domain experts for appraisal and selection. In this paper, we propose a generalization of Gaussian mixture models and demonstrate its ability to automatically identify natural facets of data and to cluster the data along each of those facets simultaneously. We present empirical results showing that facet determination usually leads to better clustering results than variable selection.

► We propose a generalization of Gaussian mixture models to allow multiple clusterings.
► We compare the facet determination approach and the variable selection approach to model-based clustering.
► We demonstrate that facet determination usually leads to better clustering results than variable selection.
► Analysis of NBA data demonstrates the effectiveness of using PLTMs for facet determination.
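The core idea of facet determination, clustering the same data along several attribute subsets (facets) rather than searching for a single “best” variable subset, can be illustrated with a minimal sketch. The sketch below uses scikit-learn's GaussianMixture and is not the authors' PLTM model; the synthetic data and the facet index sets are assumptions introduced only for illustration.

# A minimal sketch, not the authors' PLTM model: it illustrates facet
# determination by fitting a separate Gaussian mixture on each attribute
# subset ("facet"), so the same data receive several clusterings at once.
# The synthetic data and the facet index sets below are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 500

# Two independent latent groupings: attributes 0-1 reflect one grouping,
# attributes 2-3 reflect another, so the data are "multifaceted".
g1 = rng.integers(0, 2, n)
g2 = rng.integers(0, 2, n)
X = np.hstack([
    rng.normal(0.0, 1.0, (n, 2)) + 4.0 * g1[:, None],
    rng.normal(0.0, 1.0, (n, 2)) + 4.0 * g2[:, None],
])

# Cluster along each facet separately; each facet yields its own partition.
facets = {"facet_1": [0, 1], "facet_2": [2, 3]}
for name, cols in facets.items():
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X[:, cols])
    labels = gmm.predict(X[:, cols])
    print(name, np.bincount(labels))

# A single mixture over all four attributes would force one partition and
# could recover either grouping (or a blend of both) depending on
# initialization; the per-facet models recover both groupings.

Unlike this sketch, where the facets are given in advance, the model proposed in the paper identifies the facets automatically while clustering the data along each of them.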

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence