Article ID: 558060
Journal: Biomedical Signal Processing and Control
Published Year: 2016
Pages: 10
File Type: PDF
Abstract

• The core of the proposed registration method is an effective region detector that determines correspondences.
• The registration method is invariant to rotation and small scale changes.
• The registration method can register images taken from different viewpoints when common regions exist in the overlapping areas.
• The registration method remains computationally efficient on high-resolution retinal fundus images.
• The efficiency and accuracy of the proposed method make it suitable for further processing such as change analysis.

A fundamental problem of retinal fundus image registration is the determination of corresponding points. The scale-invariant feature transform (SIFT) is a well-known algorithm in this regard. However, SIFT suffers from problems in both the quantity and the quality of detected points when dealing with high-resolution, low-contrast retinal fundus images. Moreover, the human visual system attends to regions rather than points when matching features. Motivated by these observations, this paper presents a new structure-based region detector, which identifies stable and distinctive regions, to find correspondences, and describes a robust retinal fundus image registration framework built on it. The region detector applies a robust watershed segmentation to obtain closed-boundary regions within a clean vascular structure map. Since vascular structure maps are relatively stable across partially overlapping and temporal image pairs, the regions are largely unaffected by viewpoint, content and illumination variations of retinal images. The regions are approximated by convex polygons, so that robust boundary descriptors can be computed to match them. Finally, the correspondences determine the parameters of the geometric transformation between the input images. Experimental results on four datasets, including temporal and partially overlapping image pairs, show that our approach is comparable or superior to SIFT-based methods in terms of efficiency, accuracy and speed. The proposed method successfully registered 92.30% of 130 temporal image pairs and 91.42% of 70 different field-of-view image pairs.
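The final step described above fits a geometric transformation to the matched region correspondences. The abstract does not specify the estimator, but since the method claims invariance to rotation and small scale changes, a closed-form least-squares similarity fit (Umeyama-style, a standard choice for such problems) is one plausible sketch; the function and point values below are illustrative, not taken from the paper:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Closed-form least-squares similarity transform (scale, rotation,
    translation) mapping the 2-D point set `src` onto `dst`.
    Illustrative sketch; the paper's actual estimator is not specified."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d          # centred coordinates
    cov = dst_c.T @ src_c / len(src)               # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard: force a proper rotation (det(R) = +1).
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

if __name__ == "__main__":
    # Hypothetical matched region centroids: rotate 10°, scale 1.05, shift.
    theta, s_true = np.deg2rad(10.0), 1.05
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    t_true = np.array([5.0, -3.0])
    src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 4.0]])
    dst = (s_true * (R_true @ src.T)).T + t_true
    s, R, t = estimate_similarity(src, dst)
    print(round(s, 4), np.round(t, 4))
```

On noiseless correspondences this recovers the transform exactly; in practice the paper's matching step would supply the centroid pairs, and a RANSAC-style loop around such a fit would reject outlier matches.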

Graphical abstract (figure not reproduced here)

Related Topics
Physical Sciences and Engineering › Computer Science › Signal Processing
Authors