Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
406681 | Neurocomputing | 2014 | 17 |
• A new framework based on region reconstruction is proposed for multifocus fusion.
• We propose an efficient greedy optimization algorithm with a coarse-to-fine strategy.
• The visual artifacts are explicitly modeled by three energy terms in our model.
• Our method outperforms the state-of-the-art methods in various experiments.
This paper presents a novel region-based framework for multifocus image fusion. The core idea is to segment the in-focus regions from the input images and merge them to produce an all-in-focus image. To this end, we propose three intuitive constraints on the fusion process and model them as three energy terms, i.e., reconstruction error, out-of-focus energy and smoothness regularization. The three terms are then combined into an optimization problem whose solution is a segmentation map. We also propose a greedy algorithm to minimize the objective function, which alternately updates each pixel in the segmentation map using a coarse-to-fine strategy. The fused image is finally generated by combining the segmented in-focus regions of the source images via the segmentation map. Our approach yields seamless results with far fewer ringing artifacts. Comparative experiments on various synthesized and real images demonstrate that our approach outperforms state-of-the-art methods.
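To make the final compositing step concrete, the sketch below (not the authors' code) shows how an integer segmentation map can select, at every pixel, the source image judged to be in focus; the function name, the NumPy-based setup, and the two-image usage example are assumptions for illustration only.

```python
import numpy as np

def fuse_via_segmentation_map(sources, seg_map):
    """Compose an all-in-focus image by picking, at each pixel, the source
    image that the segmentation map marks as in focus.

    sources : list of H x W (or H x W x C) arrays, one per input image
    seg_map : H x W integer array; seg_map[y, x] is the index of the
              in-focus source at pixel (y, x)
    """
    stacked = np.stack(sources, axis=0)          # shape (N, H, W[, C])
    idx = seg_map[np.newaxis, ...]               # shape (1, H, W)
    if stacked.ndim == 4:                        # color images: repeat index over channels
        idx = idx[..., np.newaxis]
    idx = np.broadcast_to(idx, (1,) + stacked.shape[1:])
    # Gather along the image axis, then drop the leading singleton dimension.
    return np.take_along_axis(stacked, idx, axis=0)[0]

# Hypothetical usage with two grayscale inputs and a binary map:
# fused = fuse_via_segmentation_map([img_near, img_far], mask.astype(int))
```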