| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 849085 | Optik - International Journal for Light and Electron Optics | 2014 | 6 | |
Discriminative models over bag-of-visual-words representations significantly improve the accuracy of object detection under clutter. However, they hit a bottleneck because they completely ignore the geometric constraints between features. Conversely, explicit shape models rely heavily on the geometric information of the object to detect it accurately, and consequently lack discriminative power. In this paper, we present a discriminative shape model that exploits the advantages of both, based on the insight that the two models are essentially complementary: the discriminative model provides discriminative power, while the shape model encodes geometry. The cost function that we use to distinguish objects considers both the detection maps of the discriminative model and the result of shape matching. In this cost function, we adopt a novel way of handling multi-scale detection maps. We show that this cost function has very strong discriminative power, which makes it possible to learn a discriminative threshold for full object detection. For the shape model, we also present a scheme for learning a good shape model from noisy images. Experiments on the UIUC Car and Weizmann–Shotton horse datasets show that our model achieves state-of-the-art performance.
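The abstract does not spell out the exact form of the cost function, so the sketch below is only a minimal illustration of the idea it describes: combining per-scale discriminative detection scores with a shape-matching distance and comparing the result against a learned threshold. All names (`combined_cost`, `detect`, `alpha`) and the max-over-scales aggregation are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def combined_cost(detection_maps, locations, shape_distance, alpha=0.5):
    """Illustrative combined cost for one candidate detection.

    detection_maps: list of 2-D numpy score maps, one per scale.
    locations: per-scale (row, col) positions of the candidate.
    shape_distance: matching cost between the candidate and the shape model.
    alpha: hypothetical weight balancing the two terms.
    """
    # Aggregate the multi-scale discriminative evidence, e.g. by taking the
    # strongest response across scales at the candidate's location.
    disc_score = max(m[r, c] for m, (r, c) in zip(detection_maps, locations))
    # Lower cost = better candidate: penalize a weak discriminative response
    # and a large deviation from the learned shape model.
    return -alpha * disc_score + (1.0 - alpha) * shape_distance

def detect(candidates, threshold):
    """Keep candidates whose combined cost falls below a learned threshold."""
    return [c for c in candidates if c["cost"] < threshold]

# Toy usage with random maps, purely to show the interface.
maps = [np.random.rand(10, 10) for _ in range(3)]
cost = combined_cost(maps, [(4, 5)] * 3, shape_distance=0.3)
hits = detect([{"cost": cost}], threshold=0.0)
```

The weighted sum and the threshold rule here stand in for whatever combination and learned decision boundary the paper actually uses; they only make the two ingredients of the cost (discriminative maps and shape matching) concrete.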