Article ID: 534701
Journal: Pattern Recognition Letters
Published Year: 2011
Pages: 8 Pages
File Type: PDF
Abstract

This paper presents an empirical evaluation of methods for reducing the dimensionality of dissimilarity spaces in order to optimize dissimilarity-based classifications (DBCs). One problem of DBCs is the high dimensionality of the dissimilarity spaces. To address this problem, two kinds of solutions have been proposed in the literature: prototype selection (PS) based methods and dimension reduction (DR) based methods. Although PS-based and DR-based methods have been explored separately by many researchers, little analysis has been done to compare the two directly. This paper therefore aims to identify a suitable method for optimizing DBCs through a comparative study. Our empirical evaluation, obtained with the two approaches on an artificial data set and three real-life benchmark databases, demonstrates that DR-based methods, such as those based on principal component analysis (PCA) and linear discriminant analysis (LDA), generally improve classification accuracy more than PS-based methods. In particular, the experimental results demonstrate that PCA is more useful for well-represented data sets, while LDA is more helpful for small sample size (SSS) problems.
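The paper itself reports only an empirical comparison; as a rough illustration of the two families of methods it contrasts, the sketch below (not the authors' implementation) builds a Euclidean dissimilarity space over an example data set and then reduces its dimensionality either by prototype selection (here, a simple random subset of prototypes) or by PCA/LDA projection. The data set, distance measure, classifier, and target dimensionalities are arbitrary choices for illustration only.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import pairwise_distances, accuracy_score
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Dissimilarity representation: each object becomes its vector of Euclidean
# distances to all training objects (one dimension per training object).
D_tr = pairwise_distances(X_tr, X_tr)
D_te = pairwise_distances(X_te, X_tr)

clf = KNeighborsClassifier(n_neighbors=1)

# --- PS-based reduction: keep only a random subset of prototype columns. ---
rng = np.random.default_rng(0)
proto = rng.choice(D_tr.shape[1], size=50, replace=False)
clf.fit(D_tr[:, proto], y_tr)
acc_ps = accuracy_score(y_te, clf.predict(D_te[:, proto]))

# --- DR-based reduction: project the full dissimilarity space with PCA or LDA. ---
pca = PCA(n_components=50).fit(D_tr)
clf.fit(pca.transform(D_tr), y_tr)
acc_pca = accuracy_score(y_te, clf.predict(pca.transform(D_te)))

lda = LinearDiscriminantAnalysis().fit(D_tr, y_tr)  # yields at most (classes - 1) dims
clf.fit(lda.transform(D_tr), y_tr)
acc_lda = accuracy_score(y_te, clf.predict(lda.transform(D_te)))

print(f"PS (random prototypes): {acc_ps:.3f}")
print(f"DR (PCA):               {acc_pca:.3f}")
print(f"DR (LDA):               {acc_lda:.3f}")

Whether the PS or DR variant wins on any given run depends on the data and the chosen dimensionality; the paper's contribution is the systematic comparison across an artificial data set and three real-life benchmarks, not this toy setup.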

Research highlights ► Prototype selection (PS) and dimension reduction (DR) based methods are compared. ► The two approaches are evaluated on an artificial data set and three real-life benchmark data sets. ► DR-based methods generally outperform PS-based ones in terms of classification accuracy. ► The PCA-based method is more useful for well-represented data sets. ► The LDA-based method works better for small sample size (SSS) problems.

Related Topics
Physical Sciences and Engineering Computer Science Computer Vision and Pattern Recognition
Authors