Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
384480 | Expert Systems with Applications | 2012 | 12 | |
Mutual information (MI) is used in feature selection to evaluate two key properties of optimal features: the relevance of a feature to the class variable and the redundancy among similar features. Conditional mutual information (CMI), i.e., the MI of a candidate feature with the class variable conditioned on the features already selected, is a natural extension of MI, but it has not been applied so far because of the complications of estimating it for high-dimensional distributions. We propose a nearest neighbor estimate of CMI, appropriate for high-dimensional variables, and build on it an iterative scheme for sequential feature selection with a termination criterion, called CMINN. We show that CMINN is equivalent to MI-based feature selection filters, such as mRMR and MaxiMin, in the presence of solely single-feature effects, and is more appropriate for combined feature effects. We compare CMINN to mRMR and MaxiMin on simulated datasets involving combined effects and confirm the superiority of CMINN in selecting the correct features (indicated also by the termination criterion) and achieving the best classification accuracy. The application to ten benchmark databases shows that CMINN obtains the same or higher classification accuracy than mRMR and MaxiMin at a smaller cardinality of the selected feature subset.
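The abstract outlines the two ingredients of CMINN: a k-nearest-neighbor estimate of CMI and a greedy forward-selection loop with a termination criterion. The following is a minimal sketch of this general idea in Python, assuming a Frenzel-Pompe-style k-NN CMI estimator (which degenerates to the Kraskov-Stögbauer-Grassberger MI estimator when the conditioning set is replaced by a constant) and a simple score threshold as a stand-in for the paper's termination criterion; the names `cmi_knn`, `cminn_select`, and the `threshold` parameter are illustrative assumptions, not the authors' implementation, and the discrete class label is naively treated as a one-dimensional continuous variable.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def cmi_knn(x, y, z, k=5):
    """Frenzel-Pompe style k-NN estimate of I(X; Y | Z) under the max-norm.

    x, y, z are (n_samples, n_dims) arrays; assumes no duplicate samples,
    so every k-th neighbor distance is strictly positive.
    """
    xyz = np.hstack((x, y, z))
    # distance from each point to its k-th nearest neighbor in the joint space
    d = cKDTree(xyz).query(xyz, k=k + 1, p=np.inf)[0][:, -1]

    def count(pts):
        # number of points strictly inside radius d_i around point i, minus self
        return cKDTree(pts).query_ball_point(
            pts, r=d - 1e-12, p=np.inf, return_length=True) - 1

    return (digamma(k)
            + np.mean(digamma(count(z) + 1)
                      - digamma(count(np.hstack((x, z))) + 1)
                      - digamma(count(np.hstack((y, z))) + 1)))

def cminn_select(X, y, k=5, threshold=0.0):
    """Greedy forward selection: repeatedly add the candidate feature with
    the largest estimated I(X_j; Y | selected); stop when the best score
    falls to the threshold (a crude stand-in for the paper's rule)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float).reshape(-1, 1)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        # conditioning set; a constant column on the first pass, so the
        # CMI estimate reduces to a plain k-NN MI estimate
        z = X[:, selected] if selected else np.zeros((len(y), 1))
        scores = [cmi_knn(X[:, [j]], y, z, k) for j in remaining]
        best = int(np.argmax(scores))
        if scores[best] <= threshold:
            break
        selected.append(remaining.pop(best))
    return selected
```

In practice, k-NN information estimators are sensitive to tied distances (common with discrete class labels), so a small random jitter is often added to the data before estimation.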
► The proposed nearest neighbor (NN) estimation of conditional mutual information (CMI) is appropriate for high-dimensional variables, as required in feature selection filters.
► The proposed feature selection filter CMINN handles combined feature effects better than other mutual information (MI) filters, such as mRMR and MaxiMin.
► The proposed CMINN selects the most relevant and least redundant features first, achieving high classification accuracy at a small cardinality of the feature subset; this is also borne out by the application to benchmark databases.