Article ID: 534778
Journal: Pattern Recognition Letters
Published Year: 2012
Pages: 11
File Type: PDF
Abstract

Though the k-nearest neighbor (k-NN) pattern classifier is an effective learning algorithm, it can result in large model sizes. To compensate, a number of variant algorithms have been developed that condense the model size of the k-NN classifier at the expense of accuracy. To recover the accuracy lost by these condensed models, we present a direct boosting algorithm for the k-NN classifier that creates an ensemble of models with locally modified distance weighting. An empirical study conducted on 10 standard databases from the UCI repository shows that this new Boosted k-NN algorithm achieves higher generalization accuracy on the majority of the datasets and never performs worse than standard k-NN.
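The paper's exact procedure is not reproduced in this record, but the mechanism the abstract describes, a boosting loop that builds an ensemble of k-NN models by modifying instance weights that feed into distance-weighted voting, can be sketched as follows. This is a minimal illustration under assumed details: the class name BoostedKNN, the multiplicative up-weighting of misclassified training instances, and the way the boosted weights modulate the distance-weighted vote are assumptions for exposition, not the authors' exact update rule.

```python
import numpy as np

class BoostedKNN:
    """Illustrative sketch of a boosting-style k-NN ensemble.

    NOTE: a generic reconstruction, not the paper's exact algorithm.
    The weight-update rule and the weighted-vote form are assumptions.
    """

    def __init__(self, k=3, n_rounds=10, lr=0.5):
        self.k = k                # neighborhood size
        self.n_rounds = n_rounds  # number of boosting rounds (ensemble size)
        self.lr = lr              # step size for the assumed weight update

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        n = len(self.y)
        # Each boosting round stores one instance-weight vector; together
        # these weight sets form the ensemble of modified k-NN models.
        self.weight_sets = []
        w = np.ones(n)
        for _ in range(self.n_rounds):
            self.weight_sets.append(w.copy())
            # Leave-one-out predictions on the training set.
            preds = np.array([self._vote(x, w, exclude=i)
                              for i, x in enumerate(self.X)])
            miss = preds != self.y
            if not miss.any():
                break
            # Assumed update: multiplicatively up-weight misclassified
            # instances so later rounds attend to hard regions locally.
            w = w * np.where(miss, 1.0 + self.lr, 1.0)
            w *= n / w.sum()  # renormalize to keep weights comparable
        return self

    def _vote(self, x, w, exclude=None):
        d = np.linalg.norm(self.X - x, axis=1)
        if exclude is not None:
            d[exclude] = np.inf  # leave-one-out during training
        idx = np.argsort(d)[:self.k]
        # Distance-weighted vote, modulated by the boosted instance weights.
        inv = w[idx] / (d[idx] + 1e-12)
        classes = np.unique(self.y)
        scores = [inv[self.y[idx] == c].sum() for c in classes]
        return classes[int(np.argmax(scores))]

    def predict(self, X):
        # Majority vote across the ensemble's weight sets.
        out = []
        for x in np.asarray(X, float):
            votes = [self._vote(x, w) for w in self.weight_sets]
            vals, counts = np.unique(votes, return_counts=True)
            out.append(vals[int(np.argmax(counts))])
        return np.array(out)

# Toy usage (illustrative): train on synthetic 2-D data and score it.
# X = np.random.randn(100, 2); y = (X[:, 0] > 0).astype(int)
# clf = BoostedKNN(k=3, n_rounds=5).fit(X, y)
# print((clf.predict(X) == y).mean())
```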

► We incorporate boosting directly into the k-NN algorithm.
► This provides a mechanism for naturally building ensembles of NN-based learners.
► The approach is shown to improve on traditional k-NN in many instances (and to never be worse).
► The approach naturally makes k-NN more robust to class imbalance.
► The approach can be practically scaled without adversely affecting performance.

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Vision and Pattern Recognition