Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
408237 | 679014 | 2016 | 9 pages PDF | Free download |
One of the crucial problems in classifier ensembles is the design of the so-called combination rule, which is responsible for producing a single decision from the pool of predictors. The final decision is made on the basis of the outputs of the individual classifiers. At the same time, some individuals contribute little to the collective decision and may be discarded. This paper discusses how to design an effective combination rule based on the support functions returned by individual classifiers. We focus on aggregation methods that do not require training, because in many real-life problems training objects are scarce or we work under time constraints. Additionally, we show how to use the proposed operators for simultaneous classifier combination and ensemble pruning. The proposed schemes have an embedded classifier selection step based on weight thresholding. An experimental analysis carried out on a set of benchmark datasets, backed up by statistical analysis, proves the usefulness of the proposed method, especially when the number of class labels is high.
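The general idea described above can be sketched in a few lines: each classifier returns a vector of support values for the class labels, and a weight threshold prunes weak classifiers before their supports are aggregated. This is a minimal NumPy sketch of that scheme under assumed conventions, not the paper's actual operators; the function name, the weighted-mean aggregation, and the fallback behavior are illustrative assumptions.

```python
import numpy as np

def combine_with_pruning(supports, weights, threshold=0.1):
    """Combine per-classifier support functions into one decision,
    discarding classifiers whose weight falls below a threshold.

    supports  : shape (n_classifiers, n_classes); each row is one
                classifier's support (e.g. posterior estimates) for
                a single object.
    weights   : shape (n_classifiers,); per-classifier weights.
    threshold : classifiers with weight below this value are pruned,
                so selection and combination happen in a single step.
    """
    supports = np.asarray(supports, dtype=float)
    weights = np.asarray(weights, dtype=float)
    keep = weights >= threshold          # embedded selection step
    if not keep.any():                   # fall back to the full pool
        keep = np.ones_like(weights, dtype=bool)
    w = weights[keep] / weights[keep].sum()
    aggregated = w @ supports[keep]      # weighted-mean aggregation (assumed)
    return int(np.argmax(aggregated)), aggregated
```

For example, with three classifiers weighted 0.5, 0.45, and 0.05 and a threshold of 0.1, the third classifier is dropped and the decision comes from the weighted supports of the remaining two.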
Journal: Neurocomputing - Volume 196, 5 July 2016, Pages 14–22