Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
379448 | 659302 | 2007 | 16-page PDF | Free download |

Various algorithms are capable of learning a set of classification rules from a number of observations with their corresponding class labels. Whereas the obtained rule set is usually evaluated by measuring its accuracy on a number of unseen examples, there are several other evaluation criteria, such as comprehensibility and consistency, that are often overlooked. In this paper we focus on the aspect of consistency: if a rule learner is applied several times on the same data set, will it provide rule sets that are similar over the different runs? A new measure is proposed and various examples show how this new measure can be used to decide between different algorithms and rule sets or to find out whether the rules in a knowledge base need to be updated.
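The paper does not reveal its proposed measure in the abstract, but the underlying idea, running the same rule learner repeatedly on one data set and quantifying how similar the resulting rule sets are, can be illustrated with a minimal sketch. The snippet below assumes a simplified representation in which each rule is a set of attribute-test conditions plus a predicted class, and it uses a plain Jaccard-style overlap averaged over all pairs of runs; both the representation and the similarity function are illustrative assumptions, not the measure introduced in the paper.

```python
from itertools import combinations

# Illustrative only: a rule is (frozenset_of_conditions, predicted_class),
# a rule set is a set of such rules. This Jaccard-style overlap is an
# assumed stand-in for the paper's actual consistency measure.

def jaccard(rule_set_a, rule_set_b):
    """Fraction of rules shared by two rule sets (0 = disjoint, 1 = identical)."""
    if not rule_set_a and not rule_set_b:
        return 1.0
    return len(rule_set_a & rule_set_b) / len(rule_set_a | rule_set_b)

def consistency(rule_sets):
    """Average pairwise similarity over rule sets from repeated runs of a learner."""
    pairs = list(combinations(rule_sets, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Example: three runs of a hypothetical rule learner on the same data set.
run_1 = {(frozenset({("outlook", "==", "sunny")}), "no"),
         (frozenset({("humidity", "<=", "75")}), "yes")}
run_2 = {(frozenset({("outlook", "==", "sunny")}), "no"),
         (frozenset({("windy", "==", "false")}), "yes")}
run_3 = {(frozenset({("outlook", "==", "sunny")}), "no"),
         (frozenset({("humidity", "<=", "75")}), "yes")}

print(f"consistency = {consistency([run_1, run_2, run_3]):.2f}")  # 0.56
```

A low score under such a measure would suggest that the learner's output depends heavily on incidental factors (tie-breaking, ordering, random seeds), which is the kind of signal the abstract describes for choosing between algorithms or deciding when a knowledge base needs updating.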
Journal: Data & Knowledge Engineering - Volume 63, Issue 1, October 2007, Pages 167–182