Article ID: 525703
Journal: Computer Vision and Image Understanding
Published Year: 2013
Pages: 13 Pages
File Type: PDF
Abstract

In this paper we propose a novel biased random sampling strategy for image representation in Bag-of-Words models. We evaluate its impact on feature properties and on ranking quality for a set of semantic concepts, and show that it improves classifier performance in image annotation tasks and increases the correlation between kernels and labels. As a second contribution, we propose a method called Output Kernel Multi-Task Learning (MTL) that improves ranking performance by transferring information between classes. The main advantages of output kernel MTL are that it permits asymmetric information transfer between tasks and scales to training sets of several thousand images. We give a theoretical interpretation of the method and show that the learned contributions of source tasks to target tasks are semantically consistent. Both strategies are evaluated on the ImageCLEF PhotoAnnotation dataset. Our best visual result, which used the MTL method, was ranked first by mean Average Precision (mAP) among the purely visual submissions in the ImageCLEF 2011 PhotoAnnotation Challenge. Our multi-modal submission achieved the first rank by mAP among all submissions in the same competition.
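As a rough illustration only (not the authors' exact formulation), biased random sampling for a Bag-of-Words representation can be understood as drawing local-patch locations from a non-uniform distribution, e.g. one weighted by a bias map such as a saliency or edge-strength map, instead of sampling them uniformly over the image. The bias map and all parameters below are illustrative assumptions:

```python
import numpy as np

def biased_sample_patches(bias_map, n_patches, seed=None):
    """Draw patch-center coordinates with probability proportional
    to the bias map values (a hypothetical sketch of biased sampling)."""
    rng = np.random.default_rng(seed)
    h, w = bias_map.shape
    probs = bias_map.ravel().astype(float)
    probs /= probs.sum()  # normalize to a valid distribution
    flat_idx = rng.choice(h * w, size=n_patches, replace=True, p=probs)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([ys, xs], axis=1)  # (n_patches, 2) array of (row, col)

# Toy bias map: all sampling mass placed on the central image region,
# so every sampled patch center falls inside that region.
bias = np.zeros((64, 64))
bias[16:48, 16:48] = 1.0
centers = biased_sample_patches(bias, n_patches=100, seed=0)
```

Local descriptors would then be extracted at the sampled centers and quantized into visual words as in a standard Bag-of-Words pipeline; the bias only changes where patches are drawn, not how they are encoded.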

► Biased random sampling enhances ranking performance in image annotation.
► Biased random sampling increases kernel-to-label mutual information on average.
► Mutual information gains from biased random sampling are semantically consistent.
► Output kernel multi-task learning enhances ranking performance in image annotation.
► Output kernel MTL learns semantically meaningful relations between concepts.

Related Topics: Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition