Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
535477 | 870349 | 2008 | 11-page PDF | Free Download |

Statistical pattern classification techniques have been successfully applied to many practical classification problems. In real-world applications, the challenge is often to cope with patterns that lead to unreliable classification decisions. These situations arise either from unexpected patterns, i.e., patterns which occur in regions far from the training data, or from patterns which occur in the overlap region of classes. This paper proposes a method for estimating the reliability of a classifier in these situations. While existing methods for quantifying reliability are often based solely on a class membership probability estimated from global approximations, in this paper, reliability is quantified in terms of a confidence interval on the class membership probability. The size of the confidence interval is calculated explicitly from the local density of training data in the neighborhood of a test pattern. A synthetic example is given to illustrate the various aspects of the proposed approach. In addition, an experimental evaluation on real data sets is conducted to demonstrate the effectiveness of the proposed approach in detecting unexpected patterns, where the lower bound of the confidence interval is used as the detection criterion. By comparing its performance with that of state-of-the-art methods, we show that our approach is well-founded.
Journal: Pattern Recognition Letters - Volume 29, Issue 3, 1 February 2008, Pages 243–253
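The core idea in the abstract is that the reliability of a decision should reflect not only the estimated class membership probability but also how much local training evidence supports it: in sparse regions the confidence interval widens and its lower bound drops, which can be used to flag unexpected patterns. The sketch below is a minimal illustration of that idea, not the paper's exact formulation; the radius-based neighborhood estimate, the Wilson score interval, and all names and parameters (`reliability_with_interval`, `radius`, `alpha`) are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm


def reliability_with_interval(x, X_train, y_train, radius=1.0, alpha=0.05):
    """Return (predicted class, lower bound, upper bound) for test pattern x.

    Hypothetical sketch: the class membership probability is a local,
    radius-based estimate, and the confidence interval is a Wilson score
    interval whose sample size is the number of training points near x,
    so sparse regions yield wide intervals and low lower bounds.
    """
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training patterns
    in_ball = d <= radius                     # local neighborhood of x
    n = int(in_ball.sum())                    # local density acts as the sample size
    if n == 0:
        return None, 0.0, 1.0                 # no local evidence: maximal uncertainty

    labels, counts = np.unique(y_train[in_ball], return_counts=True)
    p_hat = counts.max() / n                  # estimated class membership probability
    pred = labels[counts.argmax()]

    # Wilson score interval for a binomial proportion with n local samples.
    z = norm.ppf(1 - alpha / 2)
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return pred, max(center - half, 0.0), min(center + half, 1.0)


# Usage: a low lower bound signals either class overlap (p_hat near 0.5)
# or a pattern far from the training data (few neighbors, wide interval).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
for test in [np.array([0.0, 0.0]), np.array([2.0, 2.0]), np.array([10.0, 10.0])]:
    pred, lo, hi = reliability_with_interval(test, X, y)
    print(test, pred, round(lo, 2), round(hi, 2))
```

In this toy setup, a test point inside a class cluster gets a narrow interval with a high lower bound, a point in the overlap region gets a lower bound near chance level, and a point far from all training data gets the widest interval, so thresholding the lower bound separates reliable decisions from unexpected patterns.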