Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
526866 | 869251 | 2015 | 17-page PDF | Free Download |

• The method combines coarse alignment with a subsequent refinement step.
• Input images are coarsely aligned by fitting and normalizing the matched MSERs.
• A refinement step based on phase congruency and point set registration is used.
• The method accurately and efficiently aligns images with affine distortions.
• The method is robust to illumination changes.
This paper proposes a novel method for registering images related by an affine transformation. First, Maximally Stable Extremal Region (MSER) detection is applied to both the reference image and the image to be registered, and a coarse affine transformation matrix between the two images is estimated from the matched MSER pairs. Two circular regions containing roughly the same image content are also obtained by fitting and normalizing the centroids of the matched MSERs from the two images. Second, a scale-invariant and approximately affine-invariant feature point detection algorithm based on Gabor filter decomposition and phase congruency is applied to the two coarsely aligned regions, yielding two feature point sets. Finally, the affine transformation matrix between the two feature point sets is obtained with a probabilistic point set registration algorithm, and the final affine transformation matrix between the reference image and the image to be registered is computed by composing it with the coarse affine transformation matrix. Several sets of experiments demonstrate that our proposed method performs competitively with the classical scale-invariant feature transform (SIFT) method on images with scale changes, and performs better than the traditional MSER and Affine-SIFT (ASIFT) methods on images with affine distortions. Moreover, the proposed method shows higher computational efficiency and greater robustness to illumination changes than several existing area-based and feature-based methods.
Journal: Image and Vision Computing - Volume 36, April 2015, Pages 23–39
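To make the coarse-to-fine structure described in the abstract concrete, below is a minimal Python/OpenCV sketch, not the authors' implementation: synthetic matched points stand in for the MSER centroids and the phase-congruency feature points, correspondences are assumed known, and a RANSAC/least-squares fit (`cv2.estimateAffine2D`) stands in for the probabilistic point set registration. Only the overall structure mirrors the described method: a coarse affine from matched region centroids, a refinement estimated on denser feature points after coarse alignment, and composition of the two transforms into the final mapping.

```python
import numpy as np
import cv2

def to_homogeneous(A):
    """Lift a 2x3 affine matrix to its 3x3 homogeneous form."""
    return np.vstack([A, [0.0, 0.0, 1.0]])

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    return pts @ A[:, :2].T + A[:, 2]

rng = np.random.default_rng(0)

# Ground-truth affine, used only to fabricate matched point pairs for this demo.
A_true = np.array([[0.9, -0.2, 15.0],
                   [0.3,  1.1, -8.0]])

# Stand-ins for matched MSER centroids (coarse stage), with a little noise.
centroids_ref = rng.uniform(0, 200, size=(12, 2))
centroids_mov = apply_affine(A_true, centroids_ref) + rng.normal(scale=2.0, size=(12, 2))

# Coarse affine estimated from the centroid correspondences (RANSAC + least squares).
A_coarse, _ = cv2.estimateAffine2D(centroids_ref.astype(np.float32),
                                   centroids_mov.astype(np.float32))

# Stand-ins for the denser feature points detected after coarse alignment.
feats_ref = rng.uniform(0, 200, size=(60, 2))
feats_mov = apply_affine(A_true, feats_ref)

# Push the reference features through the coarse transform; the residual
# misalignment is what the refinement stage has to recover.
feats_ref_coarse = apply_affine(A_coarse, feats_ref)

# A plain least-squares fit stands in here for the probabilistic point set
# registration used in the paper (correspondences are assumed known in this demo).
A_refine, _ = cv2.estimateAffine2D(feats_ref_coarse.astype(np.float32),
                                   feats_mov.astype(np.float32))

# Final transform: refinement composed with the coarse alignment.
A_final = (to_homogeneous(A_refine) @ to_homogeneous(A_coarse))[:2, :]
print("recovered affine:\n", np.round(A_final, 3))
print("ground-truth affine:\n", A_true)
```

Note the composition order: the refinement matrix multiplies the coarse matrix from the left, since the refinement corrects points that have already been mapped by the coarse transform.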