• We combine visual and text features for medical image modality classification.
• ℓp-norm multiple kernel learning is used to combine the different features and is compared with other feature combination methods.
• A One-vs-All approach is used for the multi-class problem.
• ℓp-norm MKL outperforms simpler kernel combination methods for combining visual and textual features for modality classification.
Automatic modality classification of medical images is an important tool for medical image retrieval. In this paper, we combine visual and textual information for modality classification. The visual features used are SIFT, LBP, Gabor texture and Tamura texture features; the textual feature is a tf–idf vector drawn from the image description text. We combine these features with ℓp-norm multiple kernel learning (ℓp-norm MKL) and use a One-vs-All approach for the multi-class problem. ℓp-norm MKL is explored with different norm values (p ≥ 1). These MKL-based methods are compared with several other feature combination methods and evaluated on the dataset of the modality classification task of ImageCLEFmed 2010. The experimental results indicate that multiple kernel learning is a promising approach for combining visual and textual features for modality classification, and that it outperforms both simple kernel combination methods and the traditional early fusion method.
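To make the combination step concrete, the sketch below shows one common way to realize ℓp-norm MKL with One-vs-All SVMs on precomputed kernels. It is a simplified illustration under stated assumptions, not the authors' implementation: feature extraction (SIFT/LBP/Gabor/Tamura descriptors and the tf–idf text vector) is assumed to be done already, the base kernels are plain RBF kernels, and the kernel weights follow the closed-form ℓp-norm update (β_m proportional to ‖w_m‖^(2/(p+1)), normalized so ‖β‖_p = 1) alternated with SVM retraining. Function names such as `lp_mkl_ova` and `combined_kernel` are hypothetical.

```python
# Minimal sketch (not the paper's exact solver): lp-norm weighted sum of
# per-feature kernels, trained with One-vs-All SVMs on the combined kernel.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel


def combined_kernel(kernels, betas):
    """K = sum_m beta_m * K_m, with beta_m >= 0 and ||beta||_p <= 1."""
    return sum(b * K for b, K in zip(betas, kernels))


def lp_mkl_ova(feature_mats, y, p=2.0, C=1.0, n_iter=10):
    """Alternate One-vs-All SVM training and the closed-form lp-norm weight update."""
    kernels = [rbf_kernel(X) for X in feature_mats]  # one base kernel per feature type
    M = len(kernels)
    betas = np.full(M, M ** (-1.0 / p))              # uniform start, ||beta||_p = 1
    classes = np.unique(y)
    models = {}
    for _ in range(n_iter):
        K = combined_kernel(kernels, betas)
        # One-vs-All: one binary SVM per modality class on the precomputed kernel.
        models = {c: SVC(C=C, kernel="precomputed").fit(K, (y == c).astype(int))
                  for c in classes}
        # ||w_m||^2 = beta_m^2 * a^T K_m a, summed over the per-class SVMs.
        norms = []
        for m, Km in enumerate(kernels):
            wm2 = 0.0
            for clf in models.values():
                a = clf.dual_coef_.ravel()
                sv = clf.support_
                wm2 += betas[m] ** 2 * a @ Km[np.ix_(sv, sv)] @ a
            norms.append(max(wm2, 1e-12) ** 0.5)
        # beta_m proportional to ||w_m||^(2/(p+1)), renormalized to ||beta||_p = 1.
        betas = np.array([n ** (2.0 / (p + 1)) for n in norms])
        betas /= (np.sum(betas ** p)) ** (1.0 / p)
    return models, betas, kernels
```

At test time, the per-feature kernels between test and training images would be combined with the learned β in the same way, each class's decision function evaluated on that combined kernel, and the predicted modality taken as the class with the largest score.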
Journal: Neurocomputing - Volume 147, 5 January 2015, Pages 387–394