| Article ID | Journal | Published Year | Pages |
|---|---|---|---|
| 6939673 | Pattern Recognition | 2018 | 25 |
Abstract
In real applications, data are usually collected from heterogeneous sources and represented with multiple modalities. To facilitate the analysis of such complex tasks, it is important to learn an effective similarity measure across different modalities. Existing similarity learning methods usually require a large number of labeled training examples, leading to high labeling costs. In this paper, we propose COSLAQ, a novel approach to active cross-modal similarity learning that queries the most informative supervised information based on the disagreement among different intra-modal and inter-modal similarities. Furthermore, closeness to the decision boundary of the similarity is exploited to avoid querying outliers and noisy examples. Experiments on benchmark datasets demonstrate that the proposed method effectively reduces labeling cost.
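The query strategy described above can be illustrated with a minimal sketch. This is not the authors' implementation: the similarity inputs, the variance-based disagreement measure, the 0.5 boundary value, and the multiplicative weighting are all illustrative assumptions; COSLAQ's actual formulation is defined in the paper.

```python
import numpy as np

def select_query(intra_sims, inter_sim, boundary=0.5, top_k=1):
    """Toy sketch of disagreement-based active query selection.

    intra_sims: list of (n_pairs,) arrays, each holding similarity scores
        for candidate pairs from one intra-modal similarity (hypothetical
        inputs, not the paper's models).
    inter_sim: (n_pairs,) array of inter-modal similarity scores.
    Returns indices of the top_k pairs to query for labels.
    """
    sims = np.vstack(list(intra_sims) + [inter_sim])  # views x pairs
    # Disagreement: spread of the scores across the similarity views.
    disagreement = sims.var(axis=0)
    # Closeness to the decision boundary: 1 at the boundary, 0 at the
    # extremes; down-weights pairs far from the boundary (likely
    # outliers or noise) so they are not queried.
    closeness = 1.0 - np.abs(sims.mean(axis=0) - boundary) / boundary
    score = disagreement * np.clip(closeness, 0.0, 1.0)
    return np.argsort(score)[::-1][:top_k]

# Example: pair 0 has high disagreement near the boundary, pair 2 is a
# confidently dissimilar pair, so pair 0 is queried first.
intra = [np.array([0.9, 0.5, 0.1]), np.array([0.1, 0.6, 0.1])]
inter = np.array([0.5, 0.4, 0.1])
print(select_query(intra, inter))  # -> [0]
```

The weighting by `closeness` is one simple way to realize "avoid querying outliers and noise": a pair whose mean score sits far from the boundary is already confidently labeled by the current similarities, so disagreement there is more likely caused by an outlier view than by genuine ambiguity.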
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors
Nengneng Gao, Sheng-Jun Huang, Yifan Yan, Songcan Chen