Information Fusion, Volume 20, November 2014, Pages 60–72

Multi-scale weighted gradient-based fusion for multi-focus images

https://doi.org/10.1016/j.inffus.2013.11.005

Abstract

Anisotropic blur and mis-registration frequently occur in multi-focus images due to object or camera motion, and they severely degrade fusion quality. In this paper, we present a novel multi-scale weighted gradient-based fusion method to solve this problem. The method rests on a multi-scale structure-based focus measure that reflects the sharpness of edge and corner structures at multiple scales. This focus measure is derived from an image structure saliency and, through a novel multi-scale approach, determines the gradient weights in the proposed gradient-based fusion method for multi-focus images. In particular, we focus on a two-scale scheme, i.e., a large scale and a small scale, to effectively solve the fusion problems raised by anisotropic blur and mis-registration. The large-scale structure-based focus measure is applied first to attenuate the impact of anisotropic blur and mis-registration on focused-region detection; the gradient weights near the boundaries of the focused regions are then carefully determined by applying the small-scale focus measure. Experimental results clearly demonstrate that the proposed method outperforms conventional fusion methods in the presence of anisotropic blur and mis-registration.

Introduction

Current imaging sensor technology can extract a wide variety of information from an observed scene. Image fusion provides an effective means to combine all sources of information into a single fused image containing an enhanced description of the scene, one that is more useful for human viewing or subsequent computer processing. Fusion techniques have been applied to a wide range of imagery, such as remote sensing images, infrared and visible images, multi-focus images and medical images.

The basic goal of multi-focus image fusion, in particular, is to merge the focused objects from a series of images with different focus settings into a single sharp image. Various fusion algorithms have been employed to achieve this goal in the last decade, the most popular being based on the multiresolution method. The input images are first transformed into a multiresolution representation through multiresolution decomposition. Different spectral information is then selected and combined to reconstruct the fused image. This approach enables the efficient combination of features that are spectrally independent, even if they are spatially overlapped in the input images. Thus, multiresolution-based fusion algorithms, built on different multiresolution decomposition and feature selection techniques, have attracted great research attention. The Laplacian pyramid (LAP) [1] is a well-known multiresolution decomposition extensively used in image fusion; each level is formed as the difference between the corresponding level of the Gaussian pyramid and the expanded version of its low-pass approximation. Another significant multiresolution decomposition technique in image fusion is the Discrete Wavelet Transform (DWT) [2], [3]. Although the DWT is computationally efficient, it is not shift invariant. In contrast, the Dual-Tree Complex Wavelet Transform (DT-CWT) [4], [5] has better shift invariance, reduced over-completeness and directional selectivity, properties that are essential for image fusion. Thus, the DT-CWT method usually provides better fusion results than the DWT method [6], [7]. More recently, the more sophisticated Nonsubsampled Contourlet Transform (NSCT) has been successfully applied to image fusion and achieved excellent performance [8], [9], [10].
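To make the pyramid scheme concrete, the following is a minimal NumPy/SciPy sketch of Laplacian-pyramid fusion with a simple choose-max coefficient rule. It illustrates the general LAP idea of [1] only; the Gaussian smoothing parameters, the choose-max selection rule and the averaged base level are assumptions of this sketch, not the construction used in any particular paper.

    import numpy as np
    from scipy import ndimage

    def reduce_level(img):
        # One Gaussian-pyramid step: low-pass filter, then downsample by 2.
        return ndimage.gaussian_filter(img, sigma=1.0)[::2, ::2]

    def expand_level(img, shape):
        # Upsample back to `shape` by zero insertion, then smooth.
        up = np.zeros(shape)
        up[::2, ::2] = img
        return 4.0 * ndimage.gaussian_filter(up, sigma=1.0)

    def laplacian_pyramid(img, levels):
        pyr, cur = [], img.astype(np.float64)
        for _ in range(levels):
            low = reduce_level(cur)
            # Each level is the difference between a Gaussian level and the
            # expanded version of its low-pass approximation.
            pyr.append(cur - expand_level(low, cur.shape))
            cur = low
        pyr.append(cur)  # coarsest low-pass residual
        return pyr

    def fuse_lap(a, b, levels=4):
        pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
        fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)  # choose-max rule
                 for la, lb in zip(pa[:-1], pb[:-1])]
        fused.append(0.5 * (pa[-1] + pb[-1]))                # average the base
        out = fused[-1]
        for lap in reversed(fused[:-1]):                     # reconstruct
            out = lap + expand_level(out, lap.shape)
        return out

Selecting the larger-magnitude coefficient at each level favors the locally higher-contrast (typically in-focus) source, which is exactly the behavior the block-based focus measures discussed below try to achieve in the spatial domain.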

In recent years, a significant number of fusion algorithms have operated in the gradient domain [11], [12], [13], [14]. Perceptually salient spatial image structures such as edges and corners can be regarded as specific spatial arrangements of gradients, so gradient information is particularly well suited to feature selection and geometry merging in image fusion. The aim of gradient-based fusion is therefore to merge all the important gradient information from the input images and transfer it into the fused image. Socolinsky and Wolff [11] show that the structure tensor can be used to fuse first-order contrast (i.e., gradient) information for the visualization of multispectral images. A similar approach conducted at multiple scales is introduced in [15] to fuse the local contrast information of a multivalued image with the gradient information obtained by the dyadic wavelet transform. Another interesting gradient-based fusion algorithm was proposed by Petrović and Xydeas [12]. It is based on a "fuse-then-decompose" technique, in which the authors use a gradient filter to perform better feature selection in the gradient domain and then take advantage of the conventional multiresolution method to reconstruct the fused image.
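As a concrete illustration of gradient merging, the sketch below sums the per-channel structure tensors of a multivalued image and returns the principal gradient field, in the spirit of [11]. The Sobel derivatives are an assumption of this sketch, and the eigenvector's sign ambiguity, which a full method must resolve before integrating the gradient field back into an image, is deliberately left open.

    import numpy as np
    from scipy import ndimage

    def merged_gradient(channels):
        # Sum the per-channel structure tensors J = g g^T over all channels.
        j11 = j12 = j22 = 0.0
        for c in channels:
            c = c.astype(np.float64)
            gx = ndimage.sobel(c, axis=1)
            gy = ndimage.sobel(c, axis=0)
            j11, j12, j22 = j11 + gx * gx, j12 + gx * gy, j22 + gy * gy
        # Closed-form eigen-analysis of the 2x2 symmetric tensor per pixel.
        disc = np.sqrt((j11 - j22) ** 2 + 4.0 * j12 ** 2)
        lam = 0.5 * (j11 + j22 + disc)                  # largest eigenvalue
        theta = 0.5 * np.arctan2(2.0 * j12, j11 - j22)  # principal direction
        mag = np.sqrt(lam)
        # Note: the eigenvector sign is ambiguous; it must be disambiguated
        # before the merged gradient field can be integrated into an image.
        return mag * np.cos(theta), mag * np.sin(theta)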

In order to effectively merge the sharp focused objects from different input images, various focus measures have been proposed to identify and extract the focused regions for multi-focus image fusion [16], [17], [18], [19]. An effective alternative is to divide the image into small uniform blocks and select the blocks with the highest contrast to produce the fused image [20]. However, these methods may run into trouble in the presence of anisotropic blur and mis-registration. Due to anisotropic blur, the defocused part of an image may contain a considerable number of small regions along strong edges that are relatively sharper than the corresponding regions in the focused image. These regions tend to be falsely selected and merged into the fused image (see Section 3 for a detailed analysis). Another problem is caused by mis-registration, since the defocused objects may also become partly sharper when the corresponding regions in the other images are flat. Fig. 1(c) presents the checkerboard image of two common test multi-focus images (Fig. 1(a) and (b)), in which the mis-registration between the two images is clearly visible; the two marked image patches suffer from significant anisotropic blur and mis-registration, respectively. Fig. 1(d) shows the sharpness difference of the multi-focus images within the two marked patches, where the defocused pixels are even sharper than the focused pixels. The image sharpness is computed by applying the sum-modified-Laplacian focus measure [21] to each 5×5 block. Indeed, in the presence of anisotropic blur and mis-registration, the efficiency of the focus measure in block-based fusion methods depends strongly on its scale, i.e., the block size of the measure. A single scale is usually unable to correctly distinguish all the focused and defocused regions, especially when mis-registration occurs. Note that mis-registration usually worsens with object or camera motion, and consequently makes it more difficult to correctly identify the focused regions.
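For reference, a minimal sketch of the sum-modified-Laplacian focus measure of [21] over block × block windows (a step size of 1 in the modified Laplacian is assumed):

    import numpy as np
    from scipy import ndimage

    def sum_modified_laplacian(img, block=5):
        f = img.astype(np.float64)
        # Modified Laplacian: absolute second differences are summed rather
        # than cancelled, so horizontal and vertical blur both register.
        ml = (np.abs(2 * f - np.roll(f, 1, axis=0) - np.roll(f, -1, axis=0)) +
              np.abs(2 * f - np.roll(f, 1, axis=1) - np.roll(f, -1, axis=1)))
        # Sum over each block x block window (uniform_filter gives the mean).
        return ndimage.uniform_filter(ml, size=block) * (block * block)

Comparing this measure between two registered multi-focus images, pixel by pixel, is what produces the sharpness-difference map of Fig. 1(d); the window size (here 5×5) is exactly the "scale" whose choice the paragraph above argues cannot be made once and for all.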

To address the fusion problems raised by anisotropic blur and mis-registration, we present a novel multi-scale weighted gradient-based fusion method for multi-focus images in this paper. First, an image structure saliency that reflects the saliency of local edge and corner structures is proposed, and an improved weighted gradient-based fusion method is presented based on this saliency measure. Next, a multi-scale structure-based focus measure is introduced to determine the gradient weights for the proposed method with a novel multi-scale approach in the presence of anisotropic blur or mis-registration. The basic idea behind this method is that the impacts of anisotropic blur and mis-registration can be attenuated at a large scale, while the boundaries of the focused region can be roughly determined at a small scale. Finally, experimental results are reported to demonstrate the efficiency of the proposed fusion method. Our primary contribution in this paper is a fusion method that solves the problems raised by anisotropic blur and mis-registration for multi-focus images. Within this fusion method, we make two additional contributions. First, a structure-based focus measure is presented, and we demonstrate how it can be used at multiple scales to solve the problems raised by anisotropic blur and mis-registration. Second, we present a novel multi-scale approach that detects the definite focused region and then determines the gradient weights near its boundaries by combining multi-scale information with the structure-based focus measure.
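To make the two-scale idea concrete, here is an illustrative sketch of how a large-scale decision map and a small-scale refinement near its boundaries might be combined into per-pixel gradient weights. This is our reading of the idea only: the smoothing scales, the boundary-band width and the soft weighting rule are assumptions of the sketch, not the authors' algorithm.

    import numpy as np
    from scipy import ndimage

    def two_scale_weights(fm_a, fm_b, sigma_large=5.0, sigma_small=0.5, band=9):
        # Large-scale focus measures: attenuate anisotropic blur and
        # mis-registration before deciding which image is in focus.
        la = ndimage.gaussian_filter(fm_a, sigma_large)
        lb = ndimage.gaussian_filter(fm_b, sigma_large)
        definite_a = la > lb  # large-scale "definitely focused" map
        # Boundary band: pixels where the large-scale decision flips nearby.
        da = definite_a.astype(np.uint8)
        near_edge = (ndimage.maximum_filter(da, size=band) !=
                     ndimage.minimum_filter(da, size=band))
        # Small-scale measures refine the weights only inside the band.
        sa = ndimage.gaussian_filter(fm_a, sigma_small)
        sb = ndimage.gaussian_filter(fm_b, sigma_small)
        w_a = np.where(near_edge, sa / (sa + sb + 1e-12),
                       definite_a.astype(np.float64))
        return w_a, 1.0 - w_a

Away from the boundaries the weights are binary, so the definitely focused regions are copied intact; only within the narrow band do the small-scale measures blend the two gradient fields.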

The rest of this paper is organized as follows. In the next section, the conventional gradient-based fusion method with the structure tensor is reviewed. A structure saliency is proposed and an improved weighted gradient-based fusion method is presented based on this saliency measure. Section 3 introduces the multi-scale structure-based focus measure and uses it to derive a multi-scale approach to determine the gradient weights for the proposed multi-focus image fusion method. Experimental results and comparisons are given in Section 4. Conclusions are drawn in Section 5.

Section snippets

Weighted gradient-based fusion

The definition of the gradient for a multivalued image has been extensively investigated in the literature [11], [22], [23]. To detect the edges of a multivalued image, these works merge the gradients from each component of the image with a second-moment matrix called the structure tensor. Thus, a straightforward solution for multivalued image fusion would be to reconstruct the fused image from the merged gradients through variational techniques [11], [13]. In this …

Multi-scale weighted gradient-based fusion

The weighted gradient-based fusion method is effective at identifying the most important local structures in the input images and rendering them into the fused image, and is therefore suitable for a variety of fusion applications, e.g., multi-spectral fusion, multi-focus fusion and medical image fusion. In this section, we mainly address the problems of multi-focus image fusion. In multi-focus images, anisotropic blur and mis-registration frequently occur due to object or camera motion. They …

Experimental results

The performance of the proposed fusion method was tested by visual comparison and quantitative assessment. In our experiments, the large and small scales σ1 and σ2 are set to 5 and 0.5, respectively. The computation of the large-scale focus measure is accelerated by downsampling the gradient data and performing the convolution with a much smaller kernel. We compared our fusion results against those of conventional multiresolution-based fusion methods, including LAP, DT-CWT and NSCT.
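The snippet does not detail the acceleration, but the usual form of this trick looks like the sketch below: filter a downsampled copy with a proportionally smaller Gaussian kernel, then interpolate the result back up. The downsampling factor of 4 and the bilinear interpolation are assumptions of the sketch.

    import numpy as np
    from scipy import ndimage

    def fast_large_scale_filter(data, sigma=5.0, factor=4):
        # Downsample, filter with a kernel shrunk by the same factor, upsample.
        small = data[::factor, ::factor]
        smooth = ndimage.gaussian_filter(small, sigma / factor)
        zoom = (data.shape[0] / small.shape[0], data.shape[1] / small.shape[1])
        return ndimage.zoom(smooth, zoom, order=1)  # bilinear interpolation

Because a large-sigma Gaussian suppresses fine detail anyway, filtering at reduced resolution loses little accuracy while cutting the convolution cost by roughly the square of the downsampling factor.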

Conclusion

A novel multi-scale weighted gradient-based fusion method for multi-focus images is proposed in this paper. The method is based on a multi-scale structure-based focus measure which is derived from a novel image structure saliency measure. This measure reflects the saliency of local edge and corner structures. An improved weighted gradient-based fusion method is then presented based on the structure saliency. Overshoot artifacts produced by the conventional gradient-based fusion with the …

Acknowledgments

The colored test images were obtained from the website in [35], and some of the gray common test images are available on the website in [36]. The authors would like to thank the contributors of all the test images for their useful support, as well as the reviewers for their valuable suggestions.

References (36)

  • C. Yang et al., A novel similarity based quality metric for image fusion, Inform. Fusion (2008).
  • A. Akerman, Pyramidal techniques for multisensor fusion, in: Proc. SPIE, vol. 1828, 1992, pp. ...
  • S.G. Mallat, A theory for multiresolution signal decomposition – the wavelet representation, IEEE Trans. Pattern Anal. Mach. Intell. (1989).
  • L.J. Chipman, T.M. Orr, L.N. Graham, Wavelets and image fusion, in: International Conference on Image Processing, vol. ...
  • N. Kingsbury, A dual-tree complex wavelet transform with improved orthogonality and symmetry properties, in: ...
  • I.W. Selesnick et al., The dual-tree complex wavelet transform, IEEE Signal Process. Mag. (2005).
  • P. Hill, N. Canagarajah, D. Bull, Image fusion using complex wavelets, in: British Machine Vision Conference, 2002, pp. ...
  • A.L. da Cunha et al., The nonsubsampled contourlet transform: theory, design, and applications, IEEE Trans. Image Process. (2006).