Article ID: 534786
Journal: Pattern Recognition Letters
Published Year: 2008
Pages: 7
File Type: PDF
Abstract

The medical automatic annotation task issued by the Cross Language Evaluation Forum (CLEF) aims at a fair comparison of state-of-the-art algorithms for medical content-based image retrieval (CBIR). The contribution of this work is twofold. First, a logical decomposition of the CBIR task is presented, and key elements to support the relevant steps are identified: (i) implementation of algorithms for feature extraction, feature comparison, and classifier combination; (ii) visualization of extracted features and retrieval results; (iii) generic evaluation of retrieval algorithms; and (iv) optimization of the parameters for the retrieval algorithms and their combination. Data structures and tools addressing these key elements are integrated into an existing framework for image retrieval in medical applications (IRMA). Second, baseline results for the CLEF annotation tasks 2005–2007 are provided using the IRMA framework, where global features and corresponding distance measures are combined within a nearest-neighbor approach. Using identical classifier parameters and combination weights for each year shows that the task difficulty decreases over the years. The declining rank of the baseline submission also indicates the overall advances in CBIR concepts. Furthermore, a rough comparison between participants who submitted in only one of the years becomes possible.
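
As a rough illustration of the baseline described above, the sketch below combines several global features, each with its own distance measure, through fixed weights within a nearest-neighbor classifier. The function name, the max-based distance normalization, and the 1-NN decision rule are assumptions chosen for illustration; they do not reproduce the actual IRMA implementation, its features, or its trained weights.

```python
import numpy as np

def combined_nn_classify(query_feats, train_feats, train_labels, weights, dist_fns):
    """Classify one query image with a weighted combination of distance measures.

    query_feats[f]    -- feature vector of type f for the query image
    train_feats[f][i] -- feature vector of type f for training image i
    train_labels[i]   -- class label of training image i
    weights[f]        -- combination weight for feature/distance pair f
    dist_fns[f]       -- distance function for feature type f
    """
    combined = np.zeros(len(train_labels))
    for w, dist, q, refs in zip(weights, dist_fns, query_feats, train_feats):
        # Distances from the query to all training images under one feature type,
        # normalized so that the combination weights stay comparable across types.
        d = np.array([dist(q, r) for r in refs])
        combined += w * d / (d.max() + 1e-12)
    # 1-NN decision: label of the training image with the smallest combined distance.
    return train_labels[int(np.argmin(combined))]

# Usage with two hypothetical global feature types and Euclidean distance:
euclidean = lambda a, b: float(np.linalg.norm(a - b))
rng = np.random.default_rng(0)
train_feats = [rng.random((100, 32)), rng.random((100, 16))]  # e.g. texture, layout
train_labels = np.array([i % 5 for i in range(100)])          # 5 dummy classes
query = [rng.random(32), rng.random(16)]
print(combined_nn_classify(query, train_feats, train_labels,
                           weights=[0.7, 0.3], dist_fns=[euclidean, euclidean]))
```

The per-feature normalization is one plausible way to make a single set of combination weights meaningful across distance measures with different scales, which matches the paper's use of identical weights for all three years.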

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition