Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6855000 | Expert Systems with Applications | 2018 | 41 | |
Abstract
Social image annotation, which aims at inferring a set of semantic concepts for a social image, is an effective and straightforward way to facilitate social image search. Conventional approaches mainly rely on visual features and tags, without considering other types of metadata. How to enhance the accuracy of social image annotation by fully exploiting multi-modal features remains an open and challenging problem. In this paper, we propose an improved Multi-Modal Data Fusion based Latent Dirichlet Allocation (LDA) topic model (MMDF-LDA) to annotate social images by fusing visual content, user-supplied tags, user comments, and geographic information. When MMDF-LDA samples annotations for one data modality, all the other data modalities are exploited. In MMDF-LDA, geographical topics are generated from the GPS locations of social images, so annotations have different probabilities of being used in different geographical regions. Each social image is divided into several patches in advance, and MMDF-LDA then assigns annotations to these patches by estimating the probability of each annotation-patch assignment. Through experiments on social image annotation and retrieval over several datasets, we demonstrate the effectiveness of the proposed MMDF-LDA model in comparison with state-of-the-art methods.
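The abstract does not spell out MMDF-LDA's full conditional distributions, so the following is only a minimal sketch of the collapsed Gibbs sampling machinery that LDA-style models of this kind build on, applied here to plain single-modality LDA over a tag vocabulary. All names (`gibbs_lda`, `ndk`, `nkw`, the hyperparameters `alpha` and `beta`) are illustrative assumptions, not the authors' implementation; MMDF-LDA would extend the full conditional below so that sampling an annotation for one modality also conditions on counts from the visual, comment, and geographical modalities.

```python
import numpy as np

def gibbs_lda(docs, n_topics, n_vocab, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    """Collapsed Gibbs sampling for vanilla LDA.

    docs: list of documents, each a list of integer word (tag) ids.
    Returns topic assignments z and the doc-topic / topic-word count matrices.
    """
    rng = np.random.default_rng(seed)
    n_docs = len(docs)
    ndk = np.zeros((n_docs, n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, n_vocab))  # topic-word counts
    nk = np.zeros(n_topics)              # per-topic totals
    # Random initial topic assignment for every token.
    z = [rng.integers(n_topics, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove this token's current assignment from the counts.
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Full conditional for vanilla LDA:
                # p(z=k | rest) ∝ (ndk[d,k] + alpha) * (nkw[k,w] + beta) / (nk[k] + V*beta)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return z, ndk, nkw

# Toy usage: two "images" described by tag ids drawn from a 4-tag vocabulary.
docs = [[0, 1, 2, 1], [2, 3, 3, 0]]
z, ndk, nkw = gibbs_lda(docs, n_topics=2, n_vocab=4, n_iters=50)
```

In a multi-modal fusion model as described above, the product in the full conditional would gain additional factors, one per modality (e.g. a region-annotation count term derived from the geographical topics), so that an annotation's probability varies by geographical region as the abstract states.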
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Zheng Liu, Caiming Zhang, Caixian Chen