Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
379448 | Data & Knowledge Engineering | 2007 | 16 | 
Various algorithms can learn a set of classification rules from a collection of observations and their corresponding class labels. Although the obtained rule set is usually evaluated by measuring its accuracy on unseen examples, several other evaluation criteria, such as comprehensibility and consistency, are often overlooked. In this paper we focus on consistency: if a rule learner is applied several times to the same data set, will it produce similar rule sets across the different runs? A new measure is proposed, and various examples show how this measure can be used to decide between different algorithms and rule sets, or to find out whether the rules in a knowledge base need to be updated.
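The paper's actual consistency measure is not reproduced in this abstract, but the underlying idea can be illustrated with a minimal sketch: run the learner several times, then average a pairwise similarity over the resulting rule sets. The sketch below assumes a Jaccard-style overlap between rule sets and represents rules as plain strings; both choices, and the example rules themselves, are hypothetical stand-ins rather than the paper's definition.

```python
import itertools


def rule_set_similarity(rules_a, rules_b):
    """Jaccard overlap between two rule sets (rules as hashable strings).

    Illustrative stand-in only; the paper defines its own measure,
    which is not reproduced here.
    """
    a, b = set(rules_a), set(rules_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def consistency(rule_sets):
    """Mean pairwise similarity over the rule sets from repeated runs."""
    pairs = list(itertools.combinations(rule_sets, 2))
    if not pairs:
        return 1.0
    return sum(rule_set_similarity(a, b) for a, b in pairs) / len(pairs)


# Hypothetical example: three runs of a rule learner on the same data.
runs = [
    ["outlook=sunny -> play=no", "humidity=high -> play=no"],
    ["outlook=sunny -> play=no", "wind=strong -> play=no"],
    ["outlook=sunny -> play=no", "humidity=high -> play=no"],
]
print(f"consistency: {consistency(runs):.2f}")  # 1.0 = identical rule sets
```

Under this reading, a score near 1 would indicate a learner that returns essentially the same rules on every run, while a low score across otherwise comparably accurate algorithms would flag the less consistent one, matching the kind of comparison the abstract describes.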