Article ID: 532464
Journal: Journal of Visual Communication and Image Representation
Published Year: 2014
Pages: 10
File Type: PDF
Abstract

• No a priori assumption is made on the type of structure to be extracted.
• Suitable for robust image representation.
• Different instances of the method can be created.
• In some cases, context-aware features can be complemented with strictly local features without inducing redundancy.
• Repeatability scores are comparable to those of state-of-the-art methods.

Local image features are often used to represent image content efficiently. However, the limited number of feature types that a strictly local extractor responds to may be insufficient to provide a robust image representation. To overcome this limitation, we propose a context-aware feature extraction algorithm formulated within an information-theoretic framework. The algorithm does not respond to a specific type of feature; instead, it retrieves complementary features that are relevant within the image context. We empirically validate the method by investigating the repeatability, completeness, and complementarity of context-aware features on standard benchmarks. In a comparison with strictly local features, we show that context-aware features produce more robust image representations. Furthermore, we study the complementarity between strictly local and context-aware features to produce an even more robust representation.
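As an illustration of the repeatability evaluation mentioned above, the sketch below computes a simplified, point-based repeatability score between two sets of detected features related by a known ground-truth homography. This is an assumption-laden approximation: the standard benchmark criterion uses region-overlap error rather than point distances, and the function names (project, repeatability) and the tolerance parameter tol are illustrative, not taken from the paper.

```python
import numpy as np

def project(points, H):
    """Map Nx2 points through a 3x3 homography H (illustrative helper)."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                    # back to Cartesian

def repeatability(kp_a, kp_b, H, tol=2.5):
    """Fraction of features in image A that are re-detected in image B.

    kp_a, kp_b : Nx2 arrays of feature locations (x, y)
    H          : ground-truth homography mapping image A onto image B
    tol        : pixel distance within which two detections are treated
                 as the same feature (simplified stand-in for the usual
                 region-overlap criterion)
    """
    if len(kp_a) == 0 or len(kp_b) == 0:
        return 0.0
    proj_a = project(kp_a, H)
    # Pairwise distances between projected A-features and B-features.
    d = np.linalg.norm(proj_a[:, None, :] - kp_b[None, :, :], axis=2)
    # A projected feature counts as repeated if some B-feature lies within tol.
    repeated = int((d.min(axis=1) <= tol).sum())
    return repeated / min(len(kp_a), len(kp_b))

# Toy usage: identical viewpoint (identity homography), one of two features recurs.
H = np.eye(3)
a = np.array([[10.0, 10.0], [50.0, 80.0]])
b = np.array([[11.0, 10.5], [200.0, 200.0]])
print(repeatability(a, b, H))  # -> 0.5
```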

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Vision and Pattern Recognition