Article ID Journal Published Year Pages File Type
529116 Journal of Visual Communication and Image Representation 2012 9 Pages PDF
Abstract

In this paper we present a redundancy-reduction-based approach to computational bottom-up visual saliency estimation. In contrast to conventional methods, our approach determines saliency by filtering out redundant content instead of measuring its significance. To analyze the redundancy of self-repeating spatial structures, we propose a non-local self-similarity based procedure. The resulting redundancy coefficient is used to compensate the Shannon entropy, which is computed from the statistics of pixel intensities, to generate the bottom-up saliency map of the visual input. Experimental results on three publicly available databases demonstrate that the proposed model is highly consistent with subjective visual attention.

► Bottom-up visual saliency is estimated by spatial redundancy reduction. ► A non-local scheme is proposed to measure spatial redundancy. ► The model adapts to both natural and conceptual images.
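As a rough illustration of the idea in the abstract, and not the authors' exact formulation, the sketch below combines block-wise Shannon entropy of pixel intensities with a non-local self-similarity redundancy term: blocks that repeat elsewhere in the image are treated as redundant and their entropy contribution is suppressed. All names (`saliency_map`, `block_entropy`) and parameters (`block`, `h`, the Gaussian similarity kernel, the `entropy * (1 - redundancy)` combination) are assumptions made for this sketch.

```python
import numpy as np

def block_entropy(block, bins=16):
    # Shannon entropy of the block's intensity histogram (intensities in [0, 1])
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def saliency_map(img, block=8, h=0.5):
    """img: 2-D float array in [0, 1]. Returns a per-block saliency map.

    Sketch only: entropy measures local information, non-local
    self-similarity measures redundancy, and saliency down-weights
    entropy by redundancy (assumed combination, not the paper's).
    """
    H, W = img.shape
    gh, gw = H // block, W // block
    patches = np.array([
        img[i * block:(i + 1) * block, j * block:(j + 1) * block].ravel()
        for i in range(gh) for j in range(gw)
    ])  # shape (N, block*block)
    ent = np.array([block_entropy(p) for p in patches])
    # Non-local comparison: mean squared difference between every pair of blocks
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(-1)
    w = np.exp(-d2 / (h * h))      # Gaussian similarity weights
    np.fill_diagonal(w, 0.0)       # ignore self-matches
    redundancy = w.mean(1)         # high when many near-duplicate blocks exist
    sal = ent * (1.0 - redundancy)
    rng = sal.max() - sal.min()
    sal = (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)
    return sal.reshape(gh, gw)
```

On a repetitive texture with one distinct region, the repeated blocks receive high redundancy scores and are suppressed even if they have non-trivial entropy, while the unique block stands out, which is the behavior the redundancy-reduction view predicts.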

Related Topics
Physical Sciences and Engineering Computer Science Computer Vision and Pattern Recognition