Image Redundancy Filtering for Panorama Stitching

In this paper, we design a novel framework for massive-image panorama stitching that aims to resolve image redundancy, alignment error accumulation, and perspective distortion accumulation in the stitching process. First, an iterative method is designed to filter redundant images by analysing the similarity relation of adjacent images. Second, to reduce alignment error accumulation, a weighted topology graph of the filtered images is constructed and used to find, via the Floyd-Warshall algorithm, the optimal reference image, which is the one geometrically closest to the center. Finally, a two-step global alignment strategy is designed to initially align images and then perform shape optimization: in the first step, the filtered images are roughly aligned group by group using the similarity model; in the second step, shape optimization is performed by refining all alignment parameters with the homography model under an anti-perspective energy term, which aims to obtain an optimal solution by balancing alignment accuracy and global consistency. Compared with state-of-the-art methods, the proposed method successfully reduces image redundancy while improving the alignment and reducing perspective distortion for massive-image panorama stitching.


I. INTRODUCTION
Unmanned aerial vehicle (UAV) imaging and aerial remote sensing are the most common ways to quickly acquire high-quality images of an observed region. Since the region covered by a single image is limited, multiple images with common overlap must be combined into a single wide-scale panoramic image by panorama image stitching [1] so that the observed region can be viewed conveniently. Recently, image stitching has been widely used in many areas of computer vision, such as stereoscopic image stitching [2], [3], video surveillance [4] and virtual reality [5].
The first critical step of the stitching process is to accurately estimate the transformation relating every image to the stitching plane, which directly determines the quality of the final stitching result [6]-[8]. If the images are captured from an approximately planar scene, the homography model is usually employed to align them and achieve a high-accuracy alignment, since its higher degrees of freedom (DoFs) allow finer adjustment of the transformation between images [9]-[12]. Considering the stability and efficiency of the stitching process, many approaches use feature-based matching to discover the correspondences and estimate the transformations between images [13]-[16], since features (such as point, line or edge features) are easy to extract and reliable, especially point features [17], [18]. Although many studies on image stitching have been presented in the past decade, some challenging problems still need to be solved, such as image redundancy, error accumulation, and the balance between alignment accuracy and global consistency. (The associate editor coordinating the review of this manuscript and approving it for publication was Yizhang Jiang.)
To obtain a high-accuracy alignment, the bundle adjustment approach [19], [20] is usually employed to optimize the alignment parameters globally by minimizing the total reprojection error [21]. In addition, combining feature-based and area-based localization for alignment is a robust way to improve accuracy [22]. Many methods divide the alignment process into two steps: obtaining a robust alignment solution first and performing global optimization later. Xing et al. [23] proposed to first obtain an initial alignment by applying the Extended Kalman Filter [24] to the local region, after which all parameters are optimized globally. To solve the non-linear optimization problem, Kekec et al. [25] first applied the affine model to estimate the initial alignment and then applied the homography model to perform global optimization under a global consistency constraint.
VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
For images with parallax, recent state-of-the-art methods show that local alignment models have an advantage in handling parallax, since they are more flexible than global alignment: each image is divided into two or more regions and each region is warped separately. Gao et al. [26] proposed to divide each image into two dominant planes and then use a linear combination of two homography matrices to produce the panoramic image. Zaragoza et al. [27] proposed the as-projective-as-possible (APAP) method to estimate multiple local homography matrices on a grid by a weighting method, which achieves high-accuracy alignment and reduces ghosting significantly. The references [28], [29] use a weighting strategy to combine the multiple local homography matrices with a global similarity matrix, which allows the transformations to transition smoothly from the overlapping areas into the non-overlapping areas. Zhang et al. [30] combined the homography model and content-preserving warping into a hybrid alignment model that provides more flexibility to handle parallax and eliminate local distortion. Seam optimization [31] is also an effective way to improve the quality of the stitched image. Li et al. [32] first proposed an automatic piecewise dynamic programming method to detect the optimal seamline, and then a cosine-distance weighted blending method to make the seamline as invisible as possible. To reduce visible seams, multi-band blending [16] and gradient-domain blending [33] are powerful techniques for seamlessly blending overlapping areas. To eliminate local misalignment, Li et al. [34] proposed to use local deviation correction to optimize the parameters under a global similarity constraint. To better remove outliers, Li et al. [35] employed RBF (radial basis function) approximation to design an outlier-tolerant alignment method that efficiently removes outliers by screening out the abnormal weights of the RBFs.
However, the above methods are only suitable for stitching dozens of images into a qualified panoramic image; they are prone to severe accumulation of perspective distortion when stitching massive numbers of images, which breaks the global consistency of the stitching result. To address this problem, Caballero et al. [24] designed a hierarchical model according to alignment accuracy, where images with large parallax are solved by a model with fewer DoFs. Xie et al. [36] proposed a group model according to alignment accuracy, where an image group with large parallax is connected to an image group with small parallax by the homography model, and the image group with small parallax is then solved by the affine model. Xia et al. [37] proposed to allow a continuous transition from a model with fewer DoFs to a model with more DoFs under a predefined constraint. The essence of these methods is keeping the balance between improving alignment accuracy and reducing distortion accumulation.
For large-view image stitching, the topology relation of the images is usually utilized to achieve a global alignment [38], [39]. Elibol et al. [40] combined fast tentative matching and the Minimum Spanning Tree (MST) algorithm to discover the correspondence relations between images, which reduces the number of matching attempts significantly. To select the optimal reference image as the stitching plane, Szeliski [41] proposed choosing the image geometrically closest to the center as the reference, since setting the central image as the stitching plane usually produces the least error propagation. Following this idea, Choe et al. [42] proposed first calculating the alignment error between each image pair and then selecting the image with the lowest accumulated alignment error as the reference by using a graph algorithm. To improve computation speed, Zheng et al. [43] designed a multi-row strategy to scan the images and then selected a subset of the original image set to stitch.
In this paper, we propose a novel framework for massive-image panorama stitching, which aims to resolve image redundancy, alignment error accumulation and perspective distortion accumulation in the stitching process. The contributions of this paper are summarized as follows: 1) Based on a similarity matrix built from the topological structure of the original image set, we propose an iterative method to filter redundant images from the original image set. Compared with existing methods, the proposed method greatly improves stitching efficiency while still obtaining a high-quality stitching result without information loss. 2) A weight equation is designed to build the topology graph of the filtered image set, on which the optimal reference image is found by the Floyd-Warshall algorithm, reducing alignment error accumulation. 3) A two-step global alignment strategy is proposed to align images and perform perspective optimization, which improves alignment accuracy while reducing distortion accumulation for massive-image stitching.
The rest of this paper is organized as follows: Section II describes the proposed image redundancy filtering for panorama stitching in detail; the experimental results are presented in Section III; finally, the conclusion of our work is stated in Section IV.

II. THE PROPOSED METHOD
In this section, we introduce the proposed framework for massive-image panorama stitching in detail, which aims to speed up the stitching process by filtering redundant images while obtaining a high-quality panoramic image without information loss. The proposed framework includes three modules: image redundancy filtering, optimal reference image selection, and alignment with shape optimization. First, in image redundancy filtering, we design an iterative method based on similarity analysis to filter images from the original image set. Second, for reference image selection, since error accumulation is unavoidable when all images are projected to the same stitching plane by a series of transformations, we select an optimal reference image to reduce this accumulation; the optimal reference image is chosen from our weighted topology graph of the filtered image set by the Floyd-Warshall algorithm. Finally, for images with parallax, a high-accuracy alignment is prone to causing severe distortion accumulation. To obtain a trade-off between alignment accuracy and distortion accumulation, a two-step global alignment strategy is designed to initially align images and then perform shape optimization: in the first step, the filtered images are roughly aligned between groups by the similarity model; in the second step, all alignment parameters are optimized globally by the homography model under the anti-perspective energy term, aiming for an optimal stitching result. Fig. 1 shows the flowchart of the proposed framework.
To provide an accurate understanding, the vital variables and notations are summarized in Table 1.

A. IMAGE REDUNDANCY FILTERING
In this section, we remove redundant images from the original image set while guaranteeing that the final stitched image suffers no information loss. In massive-image stitching, the overlapping region between adjacent images may be very large, so stitching all of these adjacent images is inefficient. Therefore, we need to remove the images that overlap heavily with their adjacent images. The designed filtering method analyses the similarity relation of image pairs based on the topology information while simultaneously considering the connectivity of the filtered image set. The filtering process is as follows: 1) find the image pair with the maximum similarity value in the similarity matrix s; if its similarity value is higher than the similarity threshold, go to step 2), otherwise exit the filtering process; 2) remove one member of the image pair (denoted A and B) found in step 1), according to the designed critical criterion (Eq. (1) below); 3) update the similarity matrix s of the current image set and return to step 1).
The similarity matrix s is defined by the number of matched feature points between two images: s(i, j) denotes the similarity of images I_i and I_j, and its value is the number of matched feature points between them. To improve computation efficiency, we employ a feature matching method based on topology analysis from Ref. [37] to estimate the similarity matrix s of the original image set.
If the similarity value of two images is large, we attempt to remove one of them from the current image set. To this end, we design a critical criterion to choose one of them as the redundant image and remove it from the pre-stitched image set. The criterion is evaluated by summing, for each member of the image pair, its similarity with their common adjacent image set. The member with the larger similarity sum overlaps sufficiently with each common adjacent image, so keeping it preserves the connectivity of the filtered set; the member with the smaller sum overlaps unevenly with the common adjacent images and is therefore inappropriate to reserve for producing the final stitched image. The critical criterion is denoted as follows:

$$\sum_{k=1,\,k\neq A,B}^{n} s(A,k) \;<\; \sum_{k=1,\,k\neq A,B}^{n} s(B,k) \tag{1}$$

where n represents the number of filtered images, and A and B denote the image pair with the maximum similarity value. For each member of the pair, we sum the entries of the similarity matrix related to it, excluding the entry for its partner. If the sum of the similarity values of image A with the other images is smaller than that of B, i.e., Eq. (1) holds, image A is the redundant image; otherwise, B is. We apply this criterion iteratively, removing one redundant image at a time, until the maximum value of the similarity matrix of the current image set becomes smaller than or equal to the similarity threshold T. Note that the similarity threshold T is determined by the number of filtered images; in the experiments, we show the relationship between the number of filtered images and T.
The pseudo-code of the proposed filtering method is listed in Table 2.
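The filtering loop can be sketched as follows. This is an illustrative reading of the procedure and of the criterion in Eq. (1); all function and variable names are our own (the paper's actual pseudo-code is in Table 2).

```python
def filter_redundant(sim, T):
    """Iteratively remove redundant images.

    sim : square symmetric matrix (list of lists); sim[i][j] is the number
          of matched feature points between images i and j (sim[i][i] = 0).
    T   : similarity threshold; filtering stops once no surviving pair has
          a similarity above T.
    Returns the sorted indices of the surviving (filtered) images.
    """
    alive = set(range(len(sim)))
    while len(alive) >= 2:
        # step 1: find the surviving pair with maximum similarity
        a, b = max(((i, j) for i in alive for j in alive if i < j),
                   key=lambda p: sim[p[0]][p[1]])
        if sim[a][b] <= T:
            break  # no pair exceeds the threshold: filtering is done
        # step 2 (our reading of Eq. (1)): remove the member whose total
        # similarity to the remaining images, excluding its partner, is smaller
        sum_a = sum(sim[a][k] for k in alive if k not in (a, b))
        sum_b = sum(sim[b][k] for k in alive if k not in (a, b))
        alive.remove(a if sum_a < sum_b else b)
        # step 3: the similarity matrix of the current set is just the
        # restriction of `sim` to `alive`, so no explicit update is needed
    return sorted(alive)
```

For instance, with three images where the pair (0, 1) overlaps heavily, image 0 is dropped if its similarity to the remaining common neighbours is the smaller of the two.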

B. REFERENCE IMAGE SELECTION
To produce the final stitched image, a natural approach is to select one of the pre-stitched images as the reference and then project all other images onto the reference plane. If an image has no overlap with the reference image, it is projected to the reference plane through a chain of transformations via intermediate images, and in this chain error accumulation is inescapable. To reduce error accumulation, a reasonable solution is to take as reference the image that requires the fewest intermediate images. In other words, the most geometrically central image is the best choice of reference, since it produces the least accumulated error when aligning all other images to the reference plane.
Next, we describe the process of optimal reference image selection. First, a weighted topology graph is constructed based on the similarity matrix of the filtered image set obtained in the previous section: according to the similarity matrix, edges link the image pairs whose similarity values are nonzero. There are two existing ways to set the weight of an edge: as the reciprocal of the total number of matched features between the image pair [33], or as the alignment error between the image pair [35]. The first way is efficient, since it only uses the already-computed matched features; the second can accurately find the reference image with the least error accumulation, but it has to calculate the alignment error between each image pair in advance, which reduces stitching efficiency. Inspired by these two ways, we propose a weight equation that exploits the correlation between the number of matched feature points and the alignment accuracy. The weight equation is defined as follows:

$$w(i,j) = \frac{1}{s(i,j) + \varepsilon} \tag{2}$$

where s(i, j) denotes the number of matched feature points between I_i and I_j, and ε is a regularization constant. In our experiments, ε is set to 1 empirically, which prevents the denominator of the weight from being zero. On the weighted topology graph, the optimal reference image is the node whose total shortest-path weight to all other nodes is minimal. To find it, the Floyd-Warshall algorithm [44], [45] is employed to calculate the shortest paths between all pairs of nodes: an n × n cost matrix C is constructed in which C(i, j) records the weight of the shortest path from I_i to I_j. We choose as reference the image with the least total shortest-path weight from all other images.
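A compact sketch of this selection step, assuming the edge weight takes the regularized-reciprocal form 1 / (s(i, j) + ε) described above; names are illustrative.

```python
INF = float("inf")

def select_reference(sim, eps=1.0):
    """Return the index of the optimal reference image.

    sim : square symmetric matrix; sim[i][j] is the matched-feature count.
    eps : regularization constant avoiding a zero denominator.
    """
    n = len(sim)
    # build the weighted topology graph; no edge where s(i, j) == 0
    C = [[0.0 if i == j else
          (1.0 / (sim[i][j] + eps) if sim[i][j] > 0 else INF)
          for j in range(n)] for i in range(n)]
    # Floyd-Warshall all-pairs shortest paths
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if C[i][k] + C[k][j] < C[i][j]:
                    C[i][j] = C[i][k] + C[k][j]
    # pick the node minimizing the total shortest-path weight to all others
    return min(range(n), key=lambda i: sum(C[i]))
```

On a simple chain 0-1-2, the middle image 1 is selected, matching the intuition that the geometrically central image minimizes error propagation.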
In the proposed method, the reference image is selected by the weights of the topology graph, which are set according to the number of matched feature points and the alignment accuracy. Fig. 2 shows the influence of different weights on the final stitching result on our test image set with 61 images: Fig. 2(a) shows the topology and stitching result when the weight considers only the number of matched feature points, while Fig. 2(b) shows the topology and stitching result for the weight obtained by Eq. (2). The selected reference image differs between the two weighting schemes. We also measure the alignment accuracy of the stitched image by the Root-Mean-Square (RMS) error; the smaller the RMS, the higher the alignment accuracy. The RMS of Fig. 2(a) and (b) is 1.69873 and 1.6578, respectively, which demonstrates that the proposed weighting obtains higher alignment accuracy than considering only the number of matched features.

C. IMAGE ALIGNMENT AND SHAPE OPTIMIZATION
The quality of the stitching result is determined by both the alignment accuracy and the global consistency. To obtain a high-accuracy stitched image, the homography model is usually used to align images, since it has higher degrees of freedom (DoFs). However, it is prone to producing severe accumulation of perspective distortion (in other words, destroying the global consistency) when aligning massive numbers of images captured from an approximately planar region. Therefore, we must balance the alignment accuracy and the perspective distortion to obtain the optimal solution. Since the similarity model reduces perspective distortion, we employ it for a rough alignment, and then the homography model is employed to optimize the alignment accuracy under the anti-perspective energy term.

1) ROBUST ALIGNMENT BY GROUP SIMILARITY MODEL
Previous studies [28], [29] have shown that the similarity transformation has an advantage in reducing perspective distortion; inspired by them, we use it to achieve a robust initial alignment. On the one hand, the four-parameter similarity model resists perspective distortion well when aligning images with parallax. On the other hand, the alignment parameters obtained by the similarity model serve as a qualified initial solution for the subsequent shape optimization using the homography model. Based on the weighted topology graph of the filtered image set, the alignment order is determined by setting the reference image as the root node and performing a breadth-first search, which also groups the filtered image set. After grouping, the images are aligned to the reference plane group by group, which reduces alignment error accumulation more effectively than aligning images one by one. In the initial alignment process, two basic factors must be considered, and they are solved together in the cost function: the alignment error between the already-aligned images and the next pre-aligned image group, and the alignment error inside the pre-aligned image group. Let $S_m = \{S_{I_i}, i = 1, 2, \ldots, n_1\}$ be the similarity model parameters of the aligned images and $S_{m+1} = \{S_{I_i}, i = 1, 2, \ldots, n_2\}$ be the similarity model parameters of the next pre-aligned image group.
We can obtain the optimal solution by minimizing the following cost function:

$$E(S) = E_1(S \mid G_m, G_{m+1}) + E_2(S_m \mid G_m) \tag{3}$$

where the first energy term $E_1(S \mid G_m, G_{m+1})$ represents the total error propagation between image groups $G_m$ and $G_{m+1}$, and $S = S_m \cup S_{m+1}$ is the union of $S_m$ and $S_{m+1}$. It is defined as follows:

$$E_1(S \mid G_m, G_{m+1}) = \sum_{I_i \in G_m,\ I_j \in G_{m+1}} \sum_{k=1}^{M_{i,j}} \left\| t\big(S_{I_i}\, p^k_{I_i,I_j}\big) - t\big(S_{I_j}\, p^k_{I_j,I_i}\big) \right\|^2 \tag{4}$$

The second energy term $E_2(S_m \mid G_m)$ represents the total error propagation inside group $G_m$; it is defined as follows:

$$E_2(S_m \mid G_m) = \sum_{I_i, I_j \in G_m} \sum_{k=1}^{M_{i,j}} \left\| t\big(S_{I_i}\, p^k_{I_i,I_j}\big) - t\big(S_{I_j}\, p^k_{I_j,I_i}\big) \right\|^2 \tag{5}$$

where $S_{I_i}\, p^k_{I_i,I_j}$ denotes applying the similarity transformation $S_{I_i}$ to the point $p^k_{I_i,I_j}$, $t(x)$ transforms the homogeneous coordinates of a 2D point into non-homogeneous coordinates, and $M_{i,j}$ denotes the number of matched feature points between $I_i$ and $I_j$.
Since Eq. (5) is a linear overdetermined system (a similar explanation is given in our previous work [2], [3]), it can be solved easily by Singular Value Decomposition (SVD).
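As a toy illustration of why such a system is linear and SVD-solvable, the following fits a single four-parameter similarity transform (a, b, t_x, t_y) to matched points by stacking the residuals into an overdetermined linear system; numpy's `lstsq` solves it via SVD. This is a sketch of the principle, not the paper's group-wise solver, and all names are ours.

```python
import numpy as np

def fit_similarity(p, q):
    """p, q: sequences of matched 2D points. Returns (a, b, tx, ty) such that
    the similarity transform [[a, -b], [b, a]] @ (x, y) + (tx, ty) maps
    p onto q in the least-squares sense."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(p, q):
        # x' = a*x - b*y + tx  and  y' = b*x + a*y + ty
        rows.append([x, -y, 1.0, 0.0]); rhs.append(u)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(v)
    # lstsq computes the minimum-norm least-squares solution via SVD
    sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return sol  # (a, b, tx, ty)
```

With three or more matches the system has more equations (2 per match) than the 4 unknowns, exactly the overdetermined situation described above.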

2) SHAPE OPTIMIZATION BY HOMOGRAPHY MODEL
The similarity model parameters estimated group by group achieve a rough alignment, which reduces the perspective distortion of the stitching result well. However, since the DoFs of the similarity model are limited, severe alignment errors usually arise when aligning images with parallax, producing many misalignments in the stitching result. Therefore, we must perform further shape optimization to increase alignment accuracy. To increase the alignment accuracy without inducing perspective distortion, we combine two mutually opposing terms into the energy function in a weighted way, balancing the contributions of the alignment accuracy and the global consistency to the final stitching result.
The first term optimizes alignment. From the previous section, the similarity model parameters of all images are available, denoted $X = \{S_{I_i}, i = 1, 2, \ldots, n_1\}$; they are employed as the initial solution for optimizing the homography model parameters. The homography parameters of each image relative to the stitching plane are optimized in the energy function. The first energy term aims to minimize the sum of feature alignment errors between images and is defined as follows:

$$E_1(H) = \sum_{(I_i, I_j)} \sum_{k=1}^{M_{i,j}} \left\| t\big(H_{I_i}\, p^k_{I_i,I_j}\big) - t\big(H_{I_j}\, p^k_{I_j,I_i}\big) \right\|^2 \tag{6}$$

As the number of parameters of the alignment model increases, images have more freedom to adjust their transformations to the stitching plane, causing an overall improvement in alignment accuracy. In addition, the remaining errors are distributed evenly under this uniform energy term.
The second term aims to maintain the global consistency by resisting perspective distortion: it constrains the optimal homography model parameters to stay close to the previously obtained similarity model parameters, realizing the transition from the similarity parameters to the homography parameters. The energy term is denoted as follows:

$$E_2(H) = \sum_{(I_i, I_j)} \sum_{k=1}^{M_{i,j}} \left( \left\| t\big(H_{I_i}\, p^k_{I_i,I_j}\big) - t\big(S_{I_i}\, p^k_{I_i,I_j}\big) \right\|^2 + \left\| t\big(H_{I_j}\, p^k_{I_j,I_i}\big) - t\big(S_{I_j}\, p^k_{I_j,I_i}\big) \right\|^2 \right) \tag{7}$$

where the definitions of $p^k_{I_i,I_j}(H_{I_i})$ and $p^k_{I_j,I_i}(H_{I_j})$ are similar to Eq. (5).
The final energy function linearly combines the energy terms defined in Eqs. (6) and (7):

$$E(H) = E_1(H) + \lambda E_2(H) \tag{8}$$

where λ is a weight coefficient used to balance the two terms E_1 and E_2. Since the constraint is not strict, λ should be set to a relatively small value; in our experiments, λ is set to 0.01. As a typical non-linear least squares problem, Eq. (8) is solved by the Levenberg-Marquardt method [45].
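A minimal sketch of this refinement for a single image pair, assuming the structure of Eq. (8): alignment residuals plus a λ-weighted term pulling the homography toward its similarity-model initialization, solved with SciPy's Levenberg-Marquardt implementation. The single-pair simplification and all names are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def warp(h, pts):
    """Apply a homography, parameterized by its first 8 entries (h33 = 1),
    to an (N, 2) array of points."""
    H = np.append(h, 1.0).reshape(3, 3)
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def refine_homography(h0, p, q, p_sim, lam=0.01):
    """h0: 8-vector initial guess; p -> q are matched points; p_sim are the
    points as mapped by the similarity-model initialization (the anchor)."""
    def residuals(h):
        r_align = (warp(h, p) - q).ravel()       # E1: feature alignment error
        r_shape = (warp(h, p) - p_sim).ravel()   # E2: stay near similarity model
        # squaring and summing these residuals gives E1 + lam * E2
        return np.concatenate([r_align, np.sqrt(lam) * r_shape])
    return least_squares(residuals, h0, method="lm").x
```

A small λ (0.01 in the paper) keeps the anti-perspective term a soft constraint, so the alignment residuals dominate the refinement.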

III. EXPERIMENTAL RESULTS AND ANALYSIS
We used four representative image datasets from Ref. [37] and the open-source community to evaluate the proposed method. The first dataset consists of 61 images and was obtained by an unmanned aerial vehicle (UAV). The second dataset consists of 56 images. The third dataset consists of 160 images, also obtained by a UAV. The fourth dataset consists of 744 images, taken at an altitude of 780 meters above an urban area. To verify the robustness of the proposed method, a strip-shaped image set and a square-shaped image set were selected from the first and second image sets, respectively. To demonstrate the superiority of the proposed method, several comparison experiments were conducted on the estimated topologies and the final stitching results of the aforementioned datasets generated by different methods.
The number of filtered images is determined as follows:

$$N_f = (1 - \alpha) \cdot N \tag{9}$$

where N_f represents the number of filtered images, N represents the number of all images, and α represents the ratio of redundant images to all images. In image redundancy filtering, a redundant image is removed according to the maximum value of the similarity matrix of the current filtered image set. Therefore, once the number of filtered images is determined, we can obtain the corresponding threshold of matched feature points (i.e., the maximum value in the similarity matrix of the current filtered image set). To facilitate understanding, Fig. 3 shows the relationship between the maximum value of the similarity matrix of the filtered images and the number of filtered images on our test image set with 61 images. In our experiments, α is set to 0.5 empirically. From this figure, we can see that the number of matched feature points is 863 (i.e., the similarity threshold value) when the number of filtered images is determined by Eq. (9).
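For concreteness, the bookkeeping implied by Eq. (9) can be written as below; the default α, the rounding to an integer, and the function name are our assumptions.

```python
def target_filtered_count(n_images, alpha=0.5):
    """Target size of the filtered set, N_f = (1 - alpha) * N, rounded to an
    integer; alpha is the assumed ratio of redundant images in the set."""
    return round((1 - alpha) * n_images)
```

The similarity threshold T is then read off as the maximum remaining similarity once the filtering of Section II-A has reduced the set to this size.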
In this paper, Eqs. (3) and (8) are used to align images. To evaluate their effectiveness, we conduct a comparison experiment (Fig. 4 and Fig. 5) among the single homography model, the single similarity model, the group similarity model (the similarity model combined with the group model), and the proposed method. Fig. 4(a) and Fig. 5(a) show that the single homography model produces misalignments and accumulates severe perspective distortion. The single similarity model (Fig. 4(b) and Fig. 5(b)) resists perspective distortion but achieves low alignment accuracy, since it is prone to accumulating alignment errors. The group similarity model, the first step of the proposed alignment method, provides a rough alignment and resists perspective distortion by combining the advantages of the similarity model and the group model (see Fig. 4(c) and Fig. 5(c)). However, since the group similarity model provides only a rough alignment, slight misalignments remain; the subsequent optimization step of our alignment method eliminates them well. From Fig. 4(d) and Fig. 5(d), we can see that the proposed method improves alignment while resisting perspective distortion. Fig. 6 shows the results of our image redundancy filtering method: Fig. 6(a) shows the topology estimation of the original image sets obtained by Xia's method [37], and Fig. 6(b) shows the topology estimation of the filtered image set obtained by the proposed method. After redundant images are filtered, the number of images used to produce the final stitching result decreases significantly and the distribution of the filtered images becomes sparse and uniform, yet the members of the filtered image set still cover the observed region. In Fig. 6, image pairs with more than 100 matched feature points are connected by green lines, and the others by grey lines.
We can see that the topology estimation of the filtered image sets has few green edges, which means that excessive overlap has been removed while the filtered image set remains connected and the overlap between each pair of adjacent images is still sufficient to estimate the transformation between them. Fig. 8 shows the comparison results of PTGui, Xia's method and the proposed method. As shown in the first two rows of Fig. 8, PTGui and Xia's method can achieve global consistency (i.e., reduce global perspective distortion), but both can cause misalignments in local regions (see the first and second rows of each example in Fig. 8). The proposed method accounts for both local alignment and global consistency through the group similarity model and the homography model. Compared with these methods, the proposed method achieves higher local alignment accuracy while performing well in global consistency (see the third row of each example in Fig. 8).
To show the superiority of the proposed method in alignment accuracy, the Root-Mean-Square (RMS) error is used to describe the alignment accuracy; the results are shown in Table 3. For panorama stitching, the traditional approach is to select one image as the reference and then warp all other images into the reference coordinate system one by one (referred to as the single model). In the group model, images are first grouped according to the stitching order, and the alignment errors within each image group and between adjacent image groups are then minimized. As shown in Table 3, the group similarity model reduces error accumulation better than the single similarity model. The single homography model can improve alignment accuracy but tends to suffer from distortion accumulation (shown in Fig. 5(a)). Building on the group similarity model and the homography model, the proposed method employs the homography model to increase alignment accuracy through our energy function, Eq. (8). As shown in Table 3, the proposed method produces a smaller RMS than the other methods. In addition, the remaining errors are distributed evenly under the uniform energy term, which does not induce distortion accumulation. From these experimental results, we conclude that the proposed method increases alignment accuracy for panorama stitching through the proposed group model and homography optimization.
The final stitching results of the proposed method are compared with those of PTGui and Xia's method on visual effects, to demonstrate that the proposed method causes neither information loss nor broken connectivity in the panorama. As shown in Fig. 8, our stitching results have a visual effect similar to the results the other two methods obtain with the original image sets, but our approach generates the final stitched image from far fewer images: PTGui and Xia's method use 61, 56, 160 and 744 images on the four datasets, while the proposed method uses 34, 27, 86 and 375, respectively. Given images to be stitched, the desired final image should contain as much information as possible. We use the information ratio [3] to detect whether our stitching results suffer more information loss than the other two methods. The information ratio describes how much useful information an image contains and is defined as follows:

$$I_r = 1 - \frac{N_{blackpixels}}{W_{Stitched} \times H_{Stitched}} \tag{10}$$

where I_r represents the information ratio, N_{blackpixels} represents the number of useless pixels (such as black or white pixels) in the stitching result, and W_{Stitched} and H_{Stitched} represent the width and height of the stitching result, respectively. The results of the information ratio are presented in Table 4.
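The information ratio is a direct computation; variable names below mirror the text.

```python
def information_ratio(n_useless_pixels, width, height):
    """I_r = 1 - N_blackpixels / (W_Stitched * H_Stitched): the fraction of
    stitched-canvas pixels that carry actual image content rather than
    background fill."""
    return 1.0 - n_useless_pixels / (width * height)
```

For example, a 1000 x 1000 canvas with 250,000 background pixels has an information ratio of 0.75.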
As shown in Table 4, the information ratio values of the proposed method are close to those of the other two methods. The results demonstrate that the proposed method does not cause more information loss than the other two methods, even though it filters out some images.
In addition, a running-time comparison experiment is conducted to demonstrate the efficiency of the proposed method. We adopt parallel computing to optimize the proposed method, whereas Xia's method uses serial computing in each stage of the stitching process. To verify the effectiveness of the proposed method fairly, we report the running time of Xia's method, the proposed method with serial computing, and the proposed method with parallel computing (i.e., the proposed method). The experimental environment is a 3.2 GHz Intel i5-3470 CPU with 16 GB of memory. The running time of the different methods is shown in Table 5: compared with Xia's method, the proposed method with serial computing is obviously more efficient, and the proposed method with parallel computing improves the efficiency even further.
To further evaluate the results, we invited 10 participants and summarize their evaluation values in Table 6. For the test image datasets in Fig. 8, we displayed the original image datasets and the corresponding stitching results obtained by PTGui, Xia's method and the proposed method in random order. The participants evaluated stitching quality from multiple aspects, including misalignment, shape distortion and eye fatigue, marking images from 1 (extremely terrible) to 5 (extremely excellent). As shown in Table 6, the evaluation values of PTGui are lower than those of the other methods, since PTGui produces more obvious misalignments, as shown in Fig. 8. Xia's method and our method both consider the global consistency and the alignment accuracy; however, the proposed method stitches fewer images, which reduces error accumulation. Therefore, the evaluation values of our stitching results are higher than those of the other methods.

IV. CONCLUSION
In this paper, a novel panorama stitching framework based on image redundancy filtering is proposed, which contains three key modules: image redundancy filtering, reference image selection and global alignment. To effectively filter redundant images from the original image set, an iterative method based on similarity analysis is proposed, which significantly reduces data redundancy without causing information loss in the final stitching result. Based on the weighted topology graph, we employ the shortest-path algorithm to find the optimal reference image, which minimizes alignment error accumulation. Finally, to obtain a good balance between the global consistency and the alignment accuracy, a two-step global alignment strategy is proposed, which employs the similarity model to robustly align groups and then performs further shape optimization with the homography model under the anti-perspective energy term. The experimental results, including comparisons of topology structure, visual effect, running time, information loss ratio, qualitative evaluation and alignment accuracy among different methods, show that the proposed method successfully reduces image redundancy while improving alignment and reducing shape distortion for massive-image stitching.
XIN WEI is currently pursuing the M.S. degree with the School of Computer and Control Engineering, Yantai University.
His research interests include computer vision research, image processing, and pattern recognition.