Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
10359518 | Image and Vision Computing | 2005 | 10 | |
Abstract
This paper presents a new technique for deriving information on visual saliency from experimental eye-tracking data. The strengths and potential pitfalls of the method are demonstrated with feature correspondence for 2D to 3D image registration. For this application, an eye-tracking system is employed to determine which features in endoscopy video images are considered salient by a group of human observers. Using this information, a biologically inspired saliency map is derived by transforming each observed video image into a feature space representation. Features related to visual attention are identified through a feature normalisation process based on the relative abundance of image features in the background image compared with those dwelled on along visual search scan paths. These features are then back-projected to the image domain to determine spatial areas of interest for each unseen endoscopy video image. The derived saliency map provides an image similarity measure that forms the heart of a new 2D/3D registration method with much reduced rendering overhead, since only selective regions of interest determined by the saliency map are processed. Significant improvements in pose estimation efficiency are achieved without apparent reduction in registration accuracy when compared to an intensity-based similarity measure.
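For illustration only, the sketch below shows how a precomputed saliency map might be used to restrict an intensity-based similarity measure to salient regions when scoring candidate camera poses. The function names, the saliency threshold, and the choice of normalised cross-correlation as the underlying measure are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

def saliency_masked_similarity(video_frame, rendered_view, saliency_map, threshold=0.5):
    """Hypothetical sketch: compare a rendered 3D view against a 2D video
    frame only within regions the saliency map marks as salient.

    Assumes all inputs are 2D float arrays of the same shape, with the
    saliency map normalised to [0, 1]. The 0.5 threshold and the use of
    normalised cross-correlation are illustrative choices, not the
    paper's exact formulation.
    """
    mask = saliency_map >= threshold              # keep only salient pixels
    a = video_frame[mask].astype(np.float64)
    b = rendered_view[mask].astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0
    return float((a * b).sum() / denom)           # normalised cross-correlation

# Hypothetical usage: score candidate poses and keep the best match
# (render_pose stands in for the application's 3D rendering step).
# best_pose = max(candidate_poses,
#                 key=lambda p: saliency_masked_similarity(frame, render_pose(p), saliency))
```

Because only the masked pixels are compared, the rendering and similarity evaluation cost scales with the size of the salient regions rather than the full image, which is the source of the efficiency gain described in the abstract.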
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors
Adrian J. Chung, Fani Deligianni, Xiao-Peng Hu, Guang-Zhong Yang