Article Code | Journal Code | Publication Year | English Article | Full Text |
---|---|---|---|---|
528236 | 869540 | 2016 | 17-page PDF | Free download |
• A clustering-based dictionary learning method is proposed for multimodal image fusion.
• Patches from different source images are clustered by their structural similarities.
• A compact dictionary is constructed by combining the principal components of the clusters.
• Sparse coefficients are estimated by a simultaneous orthogonal matching pursuit.
• The proposed method requires less processing time while achieving better fusion quality.
Constructing a good dictionary is key to a successful sparsity-based image fusion technique. An efficient dictionary learning method based on joint patch clustering is proposed for multimodal image fusion. To construct an over-complete dictionary that ensures a sufficient number of useful atoms for representing the fused image, which conveys information from different sensor modalities, all patches from the different source images are clustered together according to their structural similarities. To keep the dictionary compact yet informative, only the few principal components that effectively describe each joint patch cluster are selected and combined to form the over-complete dictionary. Finally, sparse coefficients are estimated by a simultaneous orthogonal matching pursuit algorithm, which represents the multimodal images with the common dictionary learned by the proposed method. Experimental results with various pairs of source images validate the effectiveness of the proposed method for the image fusion task.
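The pipeline the abstract describes — pool patches from all source images, cluster them jointly, keep a few principal components per cluster as dictionary atoms, then code the sources with a shared support via simultaneous OMP — can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: plain k-means stands in for the paper's structural-similarity clustering, and all parameter choices (patch size, cluster count, atoms per cluster, sparsity level) are hypothetical.

```python
import numpy as np

def extract_patches(img, size=8, step=4):
    """Collect vectorized size x size patches with stride `step`."""
    h, w = img.shape
    patches = [img[i:i + size, j:j + size].ravel()
               for i in range(0, h - size + 1, step)
               for j in range(0, w - size + 1, step)]
    return np.array(patches, dtype=float)

def joint_cluster(patches, k=4, iters=20, seed=0):
    """Plain k-means as a stand-in for structural-similarity clustering."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        dists = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = patches[labels == c].mean(0)
    return labels

def learn_dictionary(patches, labels, atoms_per_cluster=8):
    """Keep a few principal components per cluster; concatenate into D."""
    atoms = []
    for c in np.unique(labels):
        X = patches[labels == c]
        X = X - X.mean(0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        atoms.append(Vt[:atoms_per_cluster])
    D = np.vstack(atoms).T                      # columns are atoms
    return D / np.linalg.norm(D, axis=0)

def somp(Y, D, sparsity=4):
    """Simultaneous OMP: one shared support for all columns of Y."""
    R, support = Y.copy(), []
    for _ in range(sparsity):
        corr = np.abs(D.T @ R).sum(1)           # joint correlation score
        corr[support] = 0                       # never re-pick an atom
        support.append(int(corr.argmax()))
        Ds = D[:, support]
        A, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
        R = Y - Ds @ A                          # update shared residual
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support] = A
    return X
```

Because SOMP forces every source image's patch onto the same small set of atoms, corresponding coefficients become directly comparable, so a simple coefficient-wise rule (e.g. max-absolute-value) can merge them before reconstructing the fused patch as `D @ x_fused`.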
Journal: Information Fusion - Volume 27, January 2016, Pages 198–214