Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
9653437 | Neurocomputing | 2005 | 6 Pages |
Abstract
While people compare images using semantic concepts, computers compare images using low-level visual features that sometimes have little to do with those semantics. To reduce the gap between the high-level semantics of visual objects and the low-level features extracted from them, this paper develops a framework of learning similarity (LS) using neural networks for semantic image classification, where an LS-based k-nearest neighbors (k-NNL) classifier assigns a label to an unknown image according to the majority label among its k most similar training examples. Experimental results on an image database show that the k-NNL classifier outperforms both the Euclidean distance-based k-NN (k-NNE) classifier and back-propagation network classifiers (BPNC).
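The k-NNL decision rule described in the abstract (majority vote over the k most similar training samples under a pluggable similarity function) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `similarity` stands in for the learned neural-network similarity, and `euclid_sim` is a placeholder corresponding to the k-NNE baseline.

```python
from collections import Counter

def knn_predict(query, train_feats, train_labels, similarity, k=5):
    """Label `query` by majority vote over its k most similar training samples.

    `similarity(a, b)` returns a score that is higher for more similar
    feature vectors; a learned similarity network would be plugged in here.
    """
    scored = sorted(
        zip(train_feats, train_labels),
        key=lambda pair: similarity(query, pair[0]),
        reverse=True,  # most similar first
    )
    top_k_labels = [label for _, label in scored[:k]]
    return Counter(top_k_labels).most_common(1)[0][0]

def euclid_sim(a, b):
    # Negative Euclidean distance: closer points get higher similarity,
    # which reproduces the Euclidean k-NN (k-NNE) baseline.
    return -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

Swapping `euclid_sim` for a trained similarity model changes only the ranking step; the voting logic is unchanged, which is what lets the two classifiers be compared directly.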
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Dianhui Wang, Joon Shik Lim, Myung-Mook Han, Byung-Wook Lee