Article ID: 536896
Journal: Pattern Recognition Letters
Published Year: 2006
Pages: 12
File Type: PDF
Abstract

This paper studies two types of spatial relationships that can be learned from training examples for object recognition. The first models deformable relationships between object parts with a Gaussian model; the second describes pairwise relationships between pixel intensity values using Bayesian networks. We perform experiments on a human face dataset and a horse dataset, giving both methods the same amount of training-data annotation, which can be viewed as supplying the same prior knowledge to the learning algorithms. The results indicate that the Bayesian network method compares favorably to the deformable model, as it can capture long-range stable relations in object appearance. We also conclude that both methods are superior to purely spatial template matching and to purely non-spatial classifiers.
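To make the first approach concrete, below is a minimal sketch of a Gaussian model over part displacements, assuming annotated 2-D part locations per training example. The function names, the choice of a reference part, and the synthetic data are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hedged sketch: fit a per-part Gaussian over offsets from a reference part
# (part 0), then score a candidate configuration by its log-likelihood.

def fit_gaussian_deformation(part_locations):
    """part_locations: (N, K, 2) array of 2-D part coordinates per example."""
    offsets = part_locations - part_locations[:, :1, :]   # offsets from part 0
    mean = offsets.mean(axis=0)                            # (K, 2) mean offsets
    # Per-part 2x2 covariance, with a small ridge for numerical stability
    cov = np.stack([np.cov(offsets[:, k, :].T) + 1e-6 * np.eye(2)
                    for k in range(offsets.shape[1])])
    return mean, cov

def deformation_log_likelihood(candidate, mean, cov):
    """Score a candidate configuration (K, 2) under the fitted Gaussian model."""
    d = (candidate - candidate[:1]) - mean                 # deviation from mean offsets
    score = 0.0
    for k in range(1, d.shape[0]):                         # skip the reference part
        inv = np.linalg.inv(cov[k])
        score += (-0.5 * d[k] @ inv @ d[k]
                  - 0.5 * np.log((2 * np.pi) ** 2 * np.linalg.det(cov[k])))
    return score

# Usage on synthetic annotations (purely illustrative)
train = np.random.randn(50, 4, 2) * 2 + np.array([[0, 0], [10, 0], [5, 8], [5, -8]])
mean, cov = fit_gaussian_deformation(train)
print(deformation_log_likelihood(train[0], mean, cov))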

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition