Multi-Focus Image Region Fusion and Registration Algorithm with Multi-Scale Wavelet

Aiming at the problems of poor brightness control and low registration accuracy in traditional multi-focus image registration, a multi-focus image region fusion and registration method based on multi-scale wavelet is proposed. The multi-scale Retinex algorithm is used to enhance the image, wavelet-decomposition similarity analysis is used for image interpolation, and the EMD method is used to decompose the multi-focus image. Finally, image reconstruction is completed and multi-focus image registration is realized. To verify the fusion and registration performance of different methods, a comparative experiment was designed. Experimental results show that the proposed method controls the image brightness within a reasonable range, the root-mean-square error of the image region fusion and registration algorithm is less than 5, and the image registration accuracy is high. The method can thus achieve multi-focus image region fusion and registration.


Introduction
After the focus of an optical imaging system is fixed, only the space points within the depth of field can be imaged clearly. In practical applications, when imaging objects in a scene, the distances between the objects and the imaging lens differ, so the obtained image is not clear for all objects. To obtain a clear full-scene image, it is necessary to focus on the different objects in the scene, obtain an image for each object, and fuse them together; this is multi-focus image fusion technology. Image fusion is an important branch of information fusion, combining technologies such as sensors, image processing, signal processing, computing, and artificial intelligence [1]. It synthesizes the images or image sequences of a specific scene obtained by two or more sensors at the same or different times, so as to generate new interpretive information about the scene [2]. Image fusion effectively combines the complementary advantages of each sensor and maximizes the completeness of the description of the target or scene, which helps to locate, identify, and explain physical phenomena and events [3]. The main advantages of image fusion are improving the stability and reliability of the system, improving the spatial resolution, and reducing the performance requirements for a single sensor [4]. In the medical field, the technology can be used to fuse multi-modal medical images to improve the accuracy of diagnosis and treatment. In the military field, it can be used for situation estimation, tactical reconnaissance, and identification. In the field of remote sensing, it can be used to interpret and classify aerial and satellite images [5,6]. Registration is an important part of image fusion. Current image fusion and registration algorithms cannot control the brightness of the image within a reasonable range, and their registration accuracy is low [7].
Jia proposed an image registration method with high precision [8]. The image edge lines are extracted by using the SUSAN algorithm, and according to the geometric center of gravity, the edges are divided into closed and non-closed edge lines. The edge features of the image are extracted by calculating the extreme points of the image edges, an OLB descriptor is used to describe the edge features, and digital image registration is achieved according to the Hamming distance. However, the algorithm cannot control the brightness of the image within a reasonable range. Li et al. [9] proposed a multi-band image fusion and registration method. The optimal fusion performance is obtained by finding the optimal registration parameters. A definition index based on the area of interest of the human eye is used as an evaluation function to improve the registration process, and the joint optimization is carried out by a particle swarm optimization algorithm to realize image registration. However, the accuracy of image registration is low. Hou et al. proposed a SURF-based image registration and fusion algorithm [10]. The distance measure function of rough matching in the SURF algorithm is improved to increase its registration speed, image registration is achieved by using the RANSAC algorithm, and wavelet transform fusion of the registered images is carried out. However, the algorithm cannot control the brightness of the image reasonably. An image fusion and registration algorithm based on FFST and the contrast of joint direction characteristics was also proposed [11]. In this algorithm, a sparsity constraint is applied in the optimization function of the basic non-negative matrix factorization, the high-frequency subband coefficients are selected by the contrast of joint direction characteristics, and image registration is obtained by using the inverse fast finite shearlet transform. However, the algorithm cannot complete the image registration accurately.
To address the problems of the current methods, a multi-focus image region fusion and registration algorithm based on multi-scale wavelet is proposed in this paper. The rest of the paper is organized as follows.
1. The multi-scale Retinex algorithm is used to enhance the image and improve its recognition rate.
2. Image interpolation is obtained with the wavelet transform and a similarity technique. The EMD method is used for image decomposition, and the multi-focus image is reconstructed with different weighting coefficients to achieve image registration.
3. Experimental results and analysis. The overall effect of the proposed algorithm is verified in two aspects: image brightness and the root-mean-square error of image interpolation.
4. Conclusions. The research work is summarized, and future development directions are presented.

Image Enhancement with Multi-Scale Retinex Algorithm
The purpose of image enhancement is to improve the image quality by image processing. In order to improve the recognition rate of the image and the accuracy of image registration, the current image enhancement algorithm is improved [12]. The input color image I is decomposed into three grayscale images R, G, and B. Each image is processed with a Gaussian function as the environment function. The environment function F(x, y) is expressed as

F(x, y) = K·exp[−(x² + y²)/σ²]  (1)

where σ is the standard deviation of the Gaussian function and K is determined by the normalization condition

∬ F(x, y) dx dy = 1  (2)

The multi-scale Retinex algorithm can compress the dynamic range of the image and ensure the color consistency of the image [9]. The multi-scale Retinex algorithm is described as

R_i(x, y) = Σ_{k=1}^{N} W_k { ln I_i(x, y) − ln[F_k(x, y) ∗ I_i(x, y)] }  (3)

where the subscript i represents the ith spectral band, N is the number of scales, W_k is the weight of the kth scale, ∗ denotes convolution, and F_k(x, y) is the environment function at the kth scale given by Eqs. (1) and (2). The offset method is used to modify the pixels of the output image:

R′_i(x, y) = g·[G·R_i(x, y) + offset]  (4)

where G is the gain coefficient, offset is the offset, and g is the image enhancement control coefficient. The modified image gray value is mapped to the gray range of the display by using Eq. (5) to obtain the enhanced image:

R_out(x, y) = 255 × [R′_i(x, y) − r_min] / (r_max − r_min)  (5)

where R_out(x, y) is the output image, and r_max and r_min are the maximum and minimum gray values of the modified image.
On the basis of the Gaussian function as the environment function, the multi-scale Retinex algorithm is used for image enhancement. The offset method is used to modify the pixels of the output image, and the modified result is mapped to the grayscale of the display to obtain the enhanced image.
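As a rough sketch of the enhancement pipeline above (Eqs. (1)–(5)), the following NumPy-only code applies a multi-scale Retinex to a single color band. The scale values, weights, and the gain/offset defaults are illustrative assumptions, not parameters taken from this paper:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1 (cf. Eqs. (1)-(2))."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(channel, sigma):
    """Separable Gaussian blur, i.e. F_k(x, y) * I(x, y) for one scale."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    pad = len(k) // 2
    padded = np.pad(channel, pad, mode="edge")
    # convolve every row, then every column
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def multi_scale_retinex(channel, sigmas=(15, 80, 250), weights=None):
    """R(x, y) = sum_k W_k [ln I - ln(F_k * I)] for one band (Eq. (3))."""
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    log_i = np.log(channel + 1.0)          # +1 avoids log(0)
    r = np.zeros_like(channel, dtype=float)
    for w, s in zip(weights, sigmas):
        r += w * (log_i - np.log(blur(channel, s) + 1.0))
    return r

def map_to_display(r, g=1.0, gain=1.0, offset=0.0, levels=255):
    """Gain/offset modification (Eq. (4)) and linear display mapping (Eq. (5))."""
    r = g * (gain * r + offset)
    return (r - r.min()) / (r.max() - r.min() + 1e-12) * levels
```

For an RGB image, the same two calls would be applied to each of the three decomposed grayscale bands before recombining them.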

Image Interpolation after Enhancement
Image interpolation is a preprocessing step for image registration and fusion [13-15]. Wavelet decomposition of the original image f(x, y) is carried out. The low-frequency approximation component and the high-frequency components in the horizontal, vertical, and diagonal directions are denoted as f_0, f_1, f_2, f_3. Further wavelet decomposition of f_0 is implemented to obtain f_00, f_01, f_02, and f_03. Similarity analysis between the two decomposition levels is carried out to obtain the similarity parameters, which are used to estimate the high-frequency components of the interpolation result. The original image is used as the low-frequency component of the interpolation result, the four components are combined by the inverse wavelet transform, and the final interpolation result is obtained.
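The one-level decomposition into f_0, f_1, f_2, f_3 can be sketched with a Haar wavelet, the simplest choice; the paper does not specify which wavelet basis it uses, so the Haar filters and the averaging normalization here are assumptions:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet decomposition.

    Returns the approximation f0 and the horizontal, vertical, and
    diagonal detail components f1, f2, f3 (image sides assumed even).
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical differences
    f0 = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation (LL)
    f1 = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail (LH)
    f2 = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail (HL)
    f3 = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail (HH)
    return f0, f1, f2, f3
```

A second call, `haar_dwt2(f0)`, yields the second-level components f_00, f_01, f_02, f_03 used in the similarity analysis below.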
In Fig. 1, DWT represents the discrete wavelet transform. For Step (3), the similarity analysis between two adjacent layers in the same direction after wavelet decomposition is as follows.

The pixel decomposition result of f_01 has the same size as f_0. The similarity between f_01 and each corresponding row of f_0 is calculated by using the objective function

E(a_i) = Σ_j [f_0^i(j) − a_i f_01^i(j)]²  (6)

where f_0^i and f_01^i are the ith rows of f_0 and f_01. Eq. (6) is solved for the a_i corresponding to the minimum value of the objective function to obtain the similarity coefficient between f_0^i and f_01^i:

a_i = ⟨f_0^i, f_01^i⟩ / ⟨f_01^i, f_01^i⟩  (7)

f^i_1(interpolation) denotes the interpolation result of the horizontal component of the ith row, which is given by

f^i_1(interpolation) = a_i f_1^i
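The per-row least-squares step above has a closed-form solution, which the following sketch computes with NumPy; the equal-sized row layout of the two components is assumed as stated in the text:

```python
import numpy as np

def similarity_coefficients(f0, f01):
    """Per-row coefficient a_i minimizing ||f0_i - a_i * f01_i||^2.

    Closed-form least-squares solution: a_i = <f0_i, f01_i> / <f01_i, f01_i>.
    """
    num = np.sum(f0 * f01, axis=1)
    den = np.sum(f01 * f01, axis=1) + 1e-12   # guard against all-zero rows
    return num / den

def estimate_high_frequency(f1, a):
    """Estimate the interpolated horizontal component row by row: a_i * f1_i."""
    return a[:, None] * f1
```

The same two functions would be reused for the vertical and diagonal components before the inverse wavelet transform assembles the interpolation result.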

EMD-Based Image Registration
Image decomposition is implemented with EMD. Different weighting coefficients are used to reconstruct the multi-focus image according to the complexity of each layer, achieving region fusion and registration of the multi-focus image.
By using EMD, the multi-focus image f(x, y) is decomposed into n − 1 layers f^(i)(x, y) (i = 1, 2, …, n − 1) and the remnant r^(n). The image is enlarged with the combined interpolation on each layer, and the enlarged image is expressed as

f̂(x, y) = Σ_{l=1}^{n−1} f̂^(l)(x, y) + r̂^(n)(x, y)

where f^(l) and r^(n) are the mode functions and the hat denotes a layer after combined interpolation. The weighting coefficient q determines the weight of bi-cubic interpolation in the combined interpolation. For images with high complexity, q should be set to a larger value; for a flat, slowly varying image, a smaller value should be taken for q. Since the difference between layers is large, if a single weighting coefficient is used for the whole image, it is difficult to achieve the desired effect. According to the complexity of each layer, a combination of different weighting coefficients q_i is used instead, and the enlarged image is finally reconstructed layer by layer after the combined interpolation. The variance reflects the complexity of the image, so the weighting coefficient of each layer is determined by the share of its variance in the sum of the variances of all layers:

q_i = σ_i² / Σ_{j=1}^{n−1} σ_j²  (12)

where n is the number of layers after EMD decomposition; the nth layer is the remnant, usually monotonous and flat, which can be bi-linearly interpolated, while the other layers use the combined interpolation according to the weighting coefficients; σ_i² is the variance of the ith layer and σ_j² is the variance of the jth layer. Assume the wavelet decomposition coefficients of the images to be registered, f_1(x, y) and f_2(x, y), are F_1(x, y) and F_2(x, y). Image registration is achieved by using

F(x, y) = q_i F_1(x, y), if |F_1(x, y)| ≥ |F_2(x, y)|; q_i F_2(x, y), if |F_1(x, y)| < |F_2(x, y)|  (13)

(Figure 1: Image interpolation based on wavelet transform and similarity.)
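The variance-based weighting of Eq. (12) and the coefficient-selection rule of Eq. (13) can be sketched as follows; the EMD layers themselves are assumed to come from an external decomposition (e.g., a 2-D EMD implementation), which is outside the scope of this sketch:

```python
import numpy as np

def layer_weights(layers):
    """q_i = var_i / sum_j var_j: weight each EMD layer by its variance share (Eq. (12))."""
    variances = np.array([np.var(layer) for layer in layers])
    return variances / variances.sum()

def fuse_coefficients(F1, F2, q):
    """Keep, per pixel, the source coefficient of larger magnitude, scaled by q (Eq. (13))."""
    return np.where(np.abs(F1) >= np.abs(F2), q * F1, q * F2)
```

Layers with larger variance (higher complexity) thus receive more weight, matching the rule in the text that q should be larger for detail-rich layers.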

Experimental Results and Analysis
Experiments are carried out to verify the effectiveness of the proposed algorithm. The experimental platform is Simulink. The wavelet multi-scale multi-focus image region fusion and registration algorithm, the multi-band image fusion and registration method, and the SURF-based image registration and fusion algorithm are applied for color image enhancement, and the gray histograms of the images enhanced by the three fusion and registration methods are obtained. The results are shown in Fig. 2.
From Figs. 2a and 2b, it can be seen that the gray distributions of the multi-band image fusion and registration method and the SURF-based image registration and fusion algorithm are confined to a narrow interval. The brightness of the background and middle regions is high and concentrated far from the origin of the coordinate axes, so the image enhancement result is poor. From Fig. 2c, after image enhancement the dynamic range of the gray values is obviously increased, and the brightness of the image is also enhanced. The brightness can be controlled within a reasonable range, and the obtained image enhancement effect is better.
Root-mean-square errors of the three methods are compared. A smaller root-mean-square error indicates higher image registration accuracy and a better image fusion result. The results are shown in Fig. 3 and Tabs. 1 and 2.
From Tabs. 1 and 2 and Fig. 3, it can be seen that the root-mean-square error of the multi-band image fusion and registration method is up to 15, that of the SURF-based image registration and fusion algorithm is about 10, and that of the proposed image region fusion and registration algorithm based on wavelet multi-scale is below 5. The comparison shows that the root-mean-square error of the proposed method is the lowest and its accuracy is the highest.
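The evaluation metric used above is standard; for reference, a minimal implementation of the root-mean-square error between a reference image and a registered image looks like this:

```python
import numpy as np

def rmse(reference, registered):
    """Root-mean-square error between a reference and a registered image."""
    diff = reference.astype(float) - registered.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

A perfectly registered image yields an RMSE of 0; the thresholds quoted above (15, 10, below 5) are on the gray-value scale of the test images.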

Conclusion
Image fusion involves many fields, such as information fusion, sensors, and image processing, and is a new research direction. Image registration is the basis of image fusion. Current image registration methods cannot control the brightness of the image within a reasonable range during image enhancement, and their registration accuracy is low. In this paper, a multi-focus image region fusion and registration algorithm with multi-scale wavelet is proposed. The first step is image enhancement, the second step is image interpolation, and the third step is image registration.
Because of the variety of image sensors and the different application environments, there are still many problems for further research.
1. Accurate registration of the source images is very important for image fusion, since the accuracy of registration has a direct impact on the quality of the final fused image. Most current image fusion algorithms are carried out on the premise that the source images have been strictly registered, so the effect of the fusion algorithm cannot be fully reflected. Research on image registration has developed, but it is still far from meeting application requirements, especially for image registration under the non-ideal conditions of heterogeneous sensors and for real-time registration of video sequences.
2. In image fusion based on multi-scale analysis, different analysis tools yield different fusion results. How to select the appropriate analysis tool according to the fusion task has never been settled conclusively.
3. Image fusion technology involves many fields of image processing. It is necessary to strengthen the connection between fusion algorithms and new theories in various fields, such as compressive sensing and super-resolution. How to combine new mathematical methods, such as fuzzy clustering and neural network technology, with multi-scale analysis tools and the characteristics of human vision to develop a more complete and reasonable fusion strategy and achieve intelligent, adaptive fusion is a direction worth researching in depth.
Funding Statement: Characteristic Innovation Projects of Ordinary Universities in Guangdong Province Project Number: 2019GKTSCX029.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.