Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
411726 | 679589 | 2015 | 12-page PDF | Free download |

• The conditional redundancy between the candidate feature and the selected features is considered.
• The dependency between the candidate feature and all class labels is taken into account.
• A metric called max-dependency and min-redundancy is used to evaluate each feature.
• Extensive experimental results show that the proposed method is effective.
Multi-label learning deals with data that belong to multiple labels simultaneously. As in traditional supervised feature selection, multi-label feature selection plays an important role in data mining, information retrieval, and machine learning. In this paper, we first consider two factors of a multi-label feature: feature dependency and feature redundancy. Dependency measures the degree to which a candidate feature contributes to each label, and redundancy measures the information overlap between the candidate feature and the already selected features under all labels. We then propose an evaluation measure that combines mutual information with a max-dependency and min-redundancy criterion, which allows us to select a superior feature subset for multi-label learning. Extensive experiments show that the proposed method effectively selects a good feature subset and outperforms several state-of-the-art approaches.
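The greedy flavor of such a criterion can be illustrated with a minimal sketch: each candidate feature is scored by its summed mutual information with all labels (dependency) minus its average mutual information with already selected features (redundancy). This is an assumption-laden simplification, not the paper's exact formulation; in particular it omits the conditional-redundancy term the authors describe, and the function name `mdmr_select` is hypothetical.

```python
# Minimal sketch of greedy max-dependency / min-redundancy feature selection
# for multi-label data, using plain mutual information as the scoring unit.
# NOTE: this is an illustrative simplification, not the authors' exact method.
import numpy as np
from sklearn.metrics import mutual_info_score

def mdmr_select(X, Y, k):
    """X: (n_samples, n_features) discrete feature matrix.
       Y: (n_samples, n_labels) binary label matrix.
       Returns the indices of k greedily selected features."""
    n_features, n_labels = X.shape[1], Y.shape[1]
    selected, remaining = [], list(range(n_features))

    for _ in range(k):
        best_f, best_score = None, -np.inf
        for f in remaining:
            # Dependency: how much feature f tells us about every label.
            dep = sum(mutual_info_score(X[:, f], Y[:, l]) for l in range(n_labels))
            # Redundancy: average information overlap with already selected features.
            red = (sum(mutual_info_score(X[:, f], X[:, s]) for s in selected)
                   / len(selected)) if selected else 0.0
            score = dep - red  # max-dependency, min-redundancy trade-off
            if score > best_score:
                best_f, best_score = f, score
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

Continuous features would need to be discretized before the mutual-information estimates in this sketch are meaningful.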
Journal: Neurocomputing - Volume 168, 30 November 2015, Pages 92–103