A ROBUST MATCHING METHOD FOR UNMANNED AERIAL VEHICLE IMAGES WITH DIFFERENT VIEWPOINT ANGLES BASED ON REGIONAL COHERENCY

One of the main challenges confronting high-resolution remote sensing image matching is how to handle the geometric deformation between images, especially when the images are obtained from different viewpoints. In this paper, a robust matching method for Unmanned Aerial Vehicle images with different viewpoint angles based on regional coherency is proposed. Analysis of the geometric transformation reveals that if the transformations between different pixel pairs differ, they cannot be expressed by a uniform affine transform. However, for the same real scene, if the instantaneous field of view or the target depth variation is small, the transformation between pixels in the whole image can be approximated by an affine transform. On the basis of this analysis, a region coherency matching method for Unmanned Aerial Vehicle images is proposed. In the proposed method, a simplified mapping from image viewpoint change to scale and rotation change is derived. Through this processing, the matching between images with viewpoint change can be converted into matching between images with rotation and scale change. In the method, local image regions are first detected, and viewpoint changes between these local regions are mapped to rotation and scale changes by performing local region simulation. Then, point feature detection and matching are carried out in the simulated image regions. Finally, a group of Unmanned Aerial Vehicle images is used to verify the performance of the proposed matching method, and a comparative analysis with other methods demonstrates its effectiveness.


INTRODUCTION
Image matching is an important issue in the photogrammetry field and an active research topic. It is one of the key technologies in target recognition, three-dimensional reconstruction and image retrieval (Gruen A and Zhang L, 2002; Richard et al., 2010). In the field of remote sensing, the performance of image matching determines the quality of automatically generated digital surface models, semi-automatic three-dimensional measurement of surface features, and automatic aerial triangulation results. Image matching can promote applications such as the updating of large-scale maps, urban three-dimensional geological modelling (Gruen A and Akca D, 2005), urban planning and agricultural monitoring. In order to find homologous point candidates across images, many matching approaches have evolved over the years. Ida J and Clive S F address image scanning for feature point detection, and specifically an evaluation of three different interest-point operators to support feature-based matching in convergent, multi-image network configurations (Ida J and Clive S F, 2010). Chunsun Z and Clive S F have put forward an automated registration of high-resolution satellite images based on a hierarchical image matching strategy and the similarity of grey levels (Chunsun Z and Clive S F, 2007). Wang R R proposed an automatic region registration method based on a spatially assistant plane (Wang R R et al., 2010), but its accuracy decreases when the reference image and the image to be matched differ greatly in angle and resolution. Although automated approaches have quite a number of advantages, the quality of the results is still not satisfactory and, in some cases, far from acceptable. Even with the most advanced techniques, it is not yet possible to achieve the quality of results that a human operator can produce (Gruen A, 2012).
For different viewpoint images, the similarity of the same object across the images decreases as the viewpoint change increases. Furthermore, due to the instability of the remote sensors caused by their flight platform at different spatial positions, there exist translation, rotation, scale and perspective changes in the external parameters between Unmanned Aerial Vehicle images. This can cause image matching to fail. Therefore, how to match Unmanned Aerial Vehicle images is still not fully resolved and has drawn great attention from the international research community. At present, point-matching algorithms are widely used; they are based on the assumption that neighboring pixels have similar disparity, and thus have difficulty matching Unmanned Aerial Vehicle images. For the matching of Unmanned Aerial Vehicle images with different viewpoint angles, there are two major difficulties: (1) changes in the remote platform's altitude, roll angle and pitch angle result in scale differences between the images, which reduce the success rate and reliability of image matching algorithms; (2) multi-angle imaging produces radiation distortion, so there exists serious chromatic aberration between homologous points, which can also cause image matching to fail. Therefore, in order to achieve robust matching of multi-angle Unmanned Aerial Vehicle images, it is necessary to analyze the imaging model to determine the geometric transformation model between the images, and the chromatic aberration between homologous points needs to be adjusted.
To address these two issues, this paper proposes a new matching method based on a perspective transformation model for remote sensing images and invariant features, providing a feasible solution for robust matching of Unmanned Aerial Vehicle images with different viewpoint angles.

RELATED WORK
Image matching is probably the most important function in digital photogrammetry and also in automated modelling and mapping (Gruen A, 2012). For the matching of different viewpoint images, the traditional approach is to improve the affine invariance of the feature detector or descriptor, as in SIFT, Harris-Affine, Hessian-Affine (Mikolajczyk et al., 2004), MSER and so on. In (Lowe, 2004), a Gaussian weighting function is used to assign a weight to the gradient magnitude of each sample point when computing the SIFT descriptor. It gives less emphasis to gradients that are far from the center of the descriptor. As a result, the problem caused by SIFT's lack of affine invariance can be partially offset. In (Mikolajczyk et al., 2004), a set of initial points extracted at their characteristic scales based on the Harris-Laplace detection scheme is input to the Harris-Affine detector, and an iterative algorithm adjusts the location, scale and local area of every point so as to obtain an affine invariant point. Through imitation of Harris-Affine, another affine invariant detector, Hessian-Affine, was proposed. The difference is that it starts from the Hessian matrix rather than the Harris corners. In (Matas et al., 2004), the concept of a terrain watershed is introduced to extract MSERs. The MSERs are the parts of the image where local binarization is stable over a large range of thresholds. The definition of MSER stability, based on relative area change, is invariant to affine transformations.
In recent years, another feasible solution to cope with viewpoint change in image matching is to simulate the original image under every possible view, then extract and match features from the simulations. (Morel et al., 2009; Yu et al., 2009) proposed a fully affine invariant framework, ASIFT, for matching different viewpoint images (shown in Fig. 1). ASIFT simulates the reference image and the test image to cover the whole affine space. Then, SIFT is used to extract and compare the features from these simulations. After feature matching, the corresponding pairs are converted back to the original images. ASIFT can find matches between images even if they were taken from very different viewpoints. In (Yu et al., 2012), another matching framework for different viewpoint images, Iterative SIFT (ISIFT), is proposed (shown in Fig. 2). In Fig. 1 and Fig. 2, the square images A and B represent the compared images, and the square image B′ represents the estimated image of the transformed pose. Through the iterative algorithm, the geometric transformation between the image pair is estimated. According to the estimated model, the test image (or the reference image) is simulated. Then the reference image (or the test image) is matched with the simulated image, and the matching results are converted back to the original images.

PROPOSED VIEW INVARIANT IMAGE MATCHING METHOD
Based on the geometric transform analysis, it can be concluded that for the same real scene, if the instantaneous field of view or the target depth variation is small, the transformation between pixels in the whole image can be approximated by an affine transform. Otherwise, the transformations between different pixel pairs differ and cannot be expressed by a uniform affine transform. On the basis of this analysis, a region coherency matching method for Unmanned Aerial Vehicle images with different viewpoint angles is proposed. In the proposed method, a simplified mapping from image viewpoint change to scale and rotation change is derived. Through this processing, the matching problem between images with viewpoint change can be converted into matching between images with rotation and scale change.

Analysis of transformation between different viewpoint images
The transformation between a point on the ground and its corresponding appearance on the image can be described by the pinhole perspective projection imaging model. The relationship between the image coordinates of a pixel and the coordinates of its corresponding world point can be expressed as

$s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = M_1 M_2 \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}$ (1)

where $s$ denotes the depth coordinate along the projection direction of the camera coordinate system, the matrix $M_1$ contains the internal parameters of the camera, including focal length, resolution and the image plane offset, and the matrix $M_2$ represents the translation and rotation between the camera coordinate system and the world coordinate system, called the extrinsic parameters.
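As a concrete illustration, the projection in Equation (1) can be evaluated numerically. This is a minimal sketch; the intrinsic parameters, pose and world point below are hypothetical values, not taken from the paper's data:

```python
import numpy as np

# Hypothetical intrinsics M1: focal length f (pixels) and principal point (cx, cy)
f, cx, cy = 1000.0, 320.0, 240.0
M1 = np.array([[f,   0.0, cx],
               [0.0, f,   cy],
               [0.0, 0.0, 1.0]])

# Hypothetical extrinsics M2 = [R | t]: identity rotation, small x-translation
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.0]])
M2 = np.hstack([R, t])

# A world point in homogeneous coordinates
Xw = np.array([1.0, 2.0, 10.0, 1.0])

# Equation (1): s * [u, v, 1]^T = M1 @ M2 @ [Xw, Yw, Zw, 1]^T
p = M1 @ M2 @ Xw
s = p[2]                 # depth along the camera's viewing direction
u, v = p[0] / s, p[1] / s
print(u, v, s)           # 430.0 440.0 10.0
```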
When a point on the ground is viewed from different angles, the relationship between the pixel coordinates of the point in the two images can be obtained from the camera imaging model, as shown in the following formula:

$\begin{pmatrix} u_2 \\ v_2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} u_1 \\ v_1 \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$ (2)

The camera intrinsic and extrinsic parameters are the same for every point and can be regarded as constant. But the coefficients $a_{ij}$ and $b_i$ in Equation (2) still depend on the depth coordinates $s_1$ and $s_2$. Obviously, when the ratio $s_1/s_2$ of the two corresponding pixels in the image pair is constant, the coefficients in Equation (2) are constant. Then the geometric relationship between all pixels in the image pair can be expressed by an affine transform. In practice, if the field of view of the cameras is small or the depth variation within the local object is negligible, the projection transformation of the corresponding regions can be approximated by an affine transform, because $s_1/s_2$ is approximately constant.
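The affine approximation can be checked numerically: for a synthetic local patch with small depth variation, viewed from two hypothetical camera poses, a least-squares affine fit reproduces the second view's pixel coordinates to a small fraction of a pixel. All scene and camera values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def rot_y(a):
    """Rotation about the camera y-axis by angle a (radians)."""
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def project(P, Xw):
    """Pinhole projection of Nx3 world points with a 3x4 camera matrix."""
    Xh = np.hstack([Xw, np.ones((len(Xw), 1))])
    p = (P @ Xh.T).T
    return p[:, :2] / p[:, 2:3]

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([rot_y(0.3), np.array([[0.5], [0.0], [0.0]])])

# A local scene patch: small lateral extent and tiny depth variation,
# so the depth ratio s1/s2 is nearly constant across the patch
Xw = np.column_stack([rng.uniform(-0.2, 0.2, 50),
                      rng.uniform(-0.2, 0.2, 50),
                      10.0 + rng.uniform(-0.01, 0.01, 50)])
x1, x2 = project(P1, Xw), project(P2, Xw)

# Least-squares fit of x2 ~ A x1 + b, i.e. a 2D affine transform
D = np.hstack([x1, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(D, x2, rcond=None)
residual = np.abs(D @ coef - x2).max()
print(residual)   # a small fraction of a pixel: the mapping is near-affine
```

For a patch with large depth variation relative to its distance, the same fit would leave residuals of many pixels, which is exactly why the method works region by region.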
According to the analysis above, the geometric transformation between the corresponding local regions shown in Fig. 1 can be approximated by an affine transform expressed by a matrix $A$ (Yu et al., 2009):

$A = \lambda\, R(\psi)\, T_t\, R(\phi), \quad T_t = \mathrm{diag}(t, 1)$ (3)

where $\lambda > 0$ is a scale factor, $R(\psi)$ and $R(\phi)$ are rotation matrices, and $t \ge 1$ is the tilt induced by the viewpoint change. In this paper, multiple local areas are extracted from the two images. In these local areas the field of view and the target depth variation are small, so each local area can be approximated by an affine transformation model. This paper calls this property of local areas, that each can be approximated by an affine transformation model, region coherency.
In Fig. 3 the two elliptical areas are corresponding local areas with region coherency, extracted from images of different viewpoint angles. The two local elliptical regions are transformed to circular areas respectively, and a matrix $B$ is used to describe the transformation between the two circular areas. According to (Baumberg, 2000),

$B = (M_R')^{-1/2}\, R\, (M_L')^{1/2}$ (4)

where $M_L'$ and $M_R'$ denote the second-order moment matrices of the two circular areas and $R$ is a rotation. For circular areas, the second-order moment matrix is a scalar multiple of the identity:

$M_L' = c_L I, \quad M_R' = c_R I$ (5)

It can be deduced from Equations (4) and (5) that

$B = \sqrt{c_L / c_R}\; R$ (6)

Therefore, there is only a scale change and a rotation between the two areas, which can be mathematically expressed as

$B = \sigma R(\theta)$ (7)

Besides, this conclusion can be illustrated by conducting experiments on a pair of images taken from two different viewpoints (see Fig. 4).
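This scale-plus-rotation conclusion is easy to verify numerically: a matrix of the form $B = \sigma R(\theta)$ satisfies $B^T B = \sigma^2 I$, and both parameters can be recovered directly from its entries. The values below are arbitrary examples:

```python
import numpy as np

# Arbitrary example values for the remaining scale and rotation
sigma, theta = 1.5, 0.7
Rt = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
B = sigma * Rt

# After normalization no shear or anisotropic stretch remains: B^T B = sigma^2 I
assert np.allclose(B.T @ B, sigma**2 * np.eye(2))

# Both parameters can be read back directly from B
sigma_rec = np.sqrt(np.linalg.det(B))
theta_rec = np.arctan2(B[1, 0], B[0, 0])
print(sigma_rec, theta_rec)   # 1.5 0.7 (up to floating point)
```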

Region detection and extraction:
In order to obtain the circular areas, local regions first need to be extracted from the images. Currently, there are many well-known region feature extraction algorithms such as SR (Kadir et al., 2004), IBR (Tuytelaars et al., 2004), EBR (Tuytelaars et al., 2004), and MSER (Matas et al., 2004). In this paper the purpose of detecting local areas is to match images with large viewpoint angle changes, so the local region detector used needs to be strongly robust to viewpoint change. (Mikolajczyk et al., 2005) shows that MSER is highly robust to viewpoint change, so in this paper the MSER operator is selected to extract local region features.
In the original MSER algorithm, although the extracted MSERs come in many different sizes, they are all detected at a single image resolution. When a scene is blurred or viewed from increasing distances, many details in the image disappear and different region boundaries are formed. In this case, the repetition rate of the local areas extracted from the different images decreases, which affects the subsequent image matching. In order to improve the scale invariance of MSER, this paper uses a multi-resolution strategy to detect MSERs at several resolutions instead of only in the original input image (Forssen et al., 2007). Specifically, the procedure adopted in our method is as follows:
(1) First, a scale pyramid is constructed by blurring and subsampling with a Gaussian kernel.
(2) Then, MSERs are detected separately at each resolution level according to the method proposed in (Matas et al., 2004).
(3) Finally, duplicate MSERs are removed by eliminating fine-scale MSERs with locations and sizes similar to MSERs detected at the next coarser scale.
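The pyramid construction in step (1) can be sketched as follows. The small separable kernel is a stand-in for a proper Gaussian, and the actual MSER detection at each level (step 2) is assumed to be delegated to an existing detector; only the blur-and-subsample structure is shown:

```python
import numpy as np

def gaussian_blur(img, kernel=(0.25, 0.5, 0.25)):
    """Separable blur with a small Gaussian-like kernel (edge padding)."""
    k = np.asarray(kernel)
    pad = len(k) // 2
    blur_1d = lambda a: np.convolve(np.pad(a, pad, mode="edge"), k, mode="valid")
    tmp = np.apply_along_axis(blur_1d, 1, img)   # blur rows
    return np.apply_along_axis(blur_1d, 0, tmp)  # then columns

def scale_pyramid(img, levels=3):
    """Blur-and-subsample pyramid. MSERs would then be detected at every
    level (e.g. with an existing MSER implementation) and fine-scale
    duplicates removed against the next coarser level."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(gaussian_blur(pyramid[-1])[::2, ::2])
    return pyramid

levels = scale_pyramid(np.random.default_rng(0).random((64, 64)))
print([p.shape for p in levels])   # [(64, 64), (32, 32), (16, 16)]
```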

Region transformation:
The detected elliptical multi-resolution MSERs (elliptical areas, EA) are transformed to circular areas (CA) according to the lengths of their major and minor axes, as follows. Assuming the major axis of an elliptical area is $l$ and its minor axis is $w$, the radius of the transformed circular area is $r = \sqrt{l \cdot w}$. A matrix $H$ is used to express the geometric transformation between the elliptical region and the circular area, so $H$ satisfies

$(X - X_g)^T H^T H (X - X_g) = r^2$

where $X$ is a point on the ellipse and $X_g$ is the center of the ellipse. Since $X$ lies on the ellipse, $(X - X_g)^T E (X - X_g) = 1$, where $E$ is the matrix of the fitted ellipse, so $H^T H = r^2 E$ and $H = r\, E^{1/2}$. Then the elliptical region can be mapped onto a circular area with center $X_g$ and radius $r$ by the transformation matrix $H$.
Thus the detected elliptical multi-resolution MSERs are transformed to circular areas according to the transformation derived above.
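A minimal numerical sketch of this normalization, assuming the radius $r = \sqrt{l \cdot w}$ and an ellipse described by its matrix $E$ (all values below are hypothetical): $H = r\,E^{1/2}$ maps every ellipse boundary point onto a circle of radius $r$.

```python
import numpy as np

# Hypothetical fitted MSER ellipse: (X - Xg)^T E (X - Xg) = 1,
# with semi-axes l, w and orientation phi (all values illustrative)
l, w, phi = 4.0, 1.0, 0.5
c, s = np.cos(phi), np.sin(phi)
Rot = np.array([[c, -s], [s, c]])
E = Rot @ np.diag([1 / l**2, 1 / w**2]) @ Rot.T
Xg = np.array([10.0, 20.0])

r = np.sqrt(l * w)                    # radius of the target circular area

# H = r * E^{1/2}, computed by eigendecomposition (E is symmetric PD)
vals, vecs = np.linalg.eigh(E)
H = r * (vecs @ np.diag(np.sqrt(vals)) @ vecs.T)

# Every ellipse boundary point is mapped onto the circle of radius r
t = np.linspace(0.0, 2.0 * np.pi, 100)
boundary = Xg + (Rot @ np.vstack([l * np.cos(t), w * np.sin(t)])).T
mapped = (H @ (boundary - Xg).T).T
print(np.linalg.norm(mapped, axis=1).max())   # 2.0: all points at radius r
```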

Feature extraction and image matching:
In this paper the scale-invariant DoG detector is selected to extract features, and the SIFT descriptor is chosen to describe them and obtain the initial matching results.
It is inevitable that there are some incorrect correspondences in the initial results. The traditional method is to estimate the homography between the two images using the RANSAC algorithm; initial matches that do not conform to the estimated model are eliminated as false matches. However, in general the geometric transformation between all pixels of the two images cannot be approximated by a single affine transform. In this condition, if the homography is used to identify wrong matches, many correct matches will be eliminated as well. In this paper, in order to avoid this problem, an epipolar constraint based on the fundamental matrix is used with RANSAC to eliminate wrong corresponding pairs. Fig. 6 is the flowchart of the proposed matching method.
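The epipolar filtering step can be sketched as follows. For brevity the fundamental matrix is computed analytically from two hypothetical camera poses rather than estimated with RANSAC from the matches themselves; candidate correspondences are then kept only if their Sampson epipolar distance falls below a pixel threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
a = 0.2  # hypothetical rotation about the y-axis between the two views
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.2, 0.0])

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

# Fundamental matrix for P1 = K[I|0], P2 = K[R|t]: F = K^-T [t]x R K^-1
F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)

def project(P, X):
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t[:, None]])

# Synthetic correspondences; corrupt the first five to act as false matches
Xs = np.column_stack([rng.uniform(-2, 2, 30), rng.uniform(-2, 2, 30),
                      rng.uniform(8, 12, 30)])
x1 = np.array([project(P1, X) for X in Xs])
x2 = np.array([project(P2, X) for X in Xs])
x2[:5, 1] += rng.uniform(30, 60, 5)

def sampson_distance(F, x1, x2):
    """First-order (Sampson) approximation of the epipolar distance."""
    p1 = np.column_stack([x1, np.ones(len(x1))])
    p2 = np.column_stack([x2, np.ones(len(x2))])
    Fp1, Ftp2 = p1 @ F.T, p2 @ F          # rows: F p1_i and F^T p2_i
    e = np.sum(p2 * Fp1, axis=1)          # epipolar residual p2^T F p1
    return e**2 / (Fp1[:, 0]**2 + Fp1[:, 1]**2 + Ftp2[:, 0]**2 + Ftp2[:, 1]**2)

inliers = sampson_distance(F, x1, x2) < 1.0   # squared-pixel threshold
print(inliers.sum())                          # only uncorrupted pairs survive
```

In the actual pipeline F itself would be estimated by RANSAC from the candidate matches; the residual test above is the inlier check performed inside each RANSAC iteration.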

Data sets
In order to evaluate the effectiveness of the proposed feature matching method, a pair of Unmanned Aerial Vehicle (UAV) images is selected, as shown in Fig. 7.

Experimental results and analysis
Among feature-based matching methods, SIFT and its two improved variants ISIFT and ASIFT perform well in dealing with images taken from different viewpoints. Therefore, comparative experiments are conducted using these methods on the above images. For the four approaches, the matching results are displayed visually in Fig. 8, and the numbers of correct matches are counted and shown in Table 1.
Matching method     Correct matches (data of Fig. 7)
SIFT                48
ISIFT               144
ASIFT               274
Proposed method     750

Table 1. Statistical results of SIFT, ISIFT, ASIFT and the proposed method

As can be seen from the results in Table 1: (1) For the image pair shown in Fig. 8, the proposed method obtained the largest number of correct matching features compared to SIFT, ISIFT and ASIFT, while SIFT obtained the fewest.
(2) The ISIFT method builds on SIFT by estimating the transformation matrix to generate simulated images; feature matching is then implemented on the simulated images, which yields more correct features than the SIFT matching method.
(3) In Fig. 7, the reference image and the image to be matched are real images captured from different perspectives. As the variation in scene depth becomes very large, the two whole images do not obey a single affine transformation model, so the transformation model obtained by the iterative procedure of the ISIFT method only covers part of the image area and cannot produce a simulated image consistent with the real scene under the affine transformation model. In this case the ISIFT method does not substantially improve on the SIFT matching results, and similar to the SIFT method, its correct matching features are very limited.
(4) Compared to the SIFT and ISIFT methods, the ASIFT and the proposed methods obtain better matching results, which indicates that both are more robust for UAV images with different viewpoints.
(5) For UAV images, the number of correct matching features acquired by the ASIFT algorithm is still far less than that of the proposed method.
The reason for this result is as follows: in UAV images with complex scenes, there is a sampling interval when the ASIFT method simulates the affine space. This makes the sampled space discontinuous, and some planar regions are not completely covered by the simulated affine space, so the ASIFT method obtains fewer features than the proposed method. The proposed method simulates each planar region separately, so it covers planar features within the image area better. These simulated planar regions guarantee that more point features will be matched.
(6) The experimental results on UAV images show that the proposed method obtains better matching results for images of undulating terrain taken from different perspectives.

CONCLUSIONS
In this paper a novel matching method for multi-view UAV images is proposed. The proposed method is composed of three steps: region detection and extraction; region transformation; and feature extraction and image matching. The robustness of local region detection determines the success of the whole matching process. In the proposed method, a simplified mapping from image viewpoint change to scale and rotation change is derived. Through this processing, the matching problem between images with viewpoint change can be converted into matching between images with rotation and scale change. A viewpoint-change image matching method based on local image region simulation is put forward, and point feature detection and matching are implemented in the simulated image regions. The proposed method simulates each planar region separately, so it covers planar features within the image area better, and these simulated regions guarantee that more point features will be matched. Meanwhile, in order to detect homologous points covering the whole image and realize better matching, we also detect homologous points outside the extracted regions and execute the matching procedure in these areas. Experimental results on UAV images show that the proposed method obtains better matching results than SIFT, ASIFT and ISIFT.

Fig. 3
Fig. 3 Transformation between a pair of corresponding local areas. (a) & (b) are ellipse fitting results of the regions detected from two viewpoints. (c) & (d) are normalized regions obtained by transforming the elliptical regions to circular areas.

Fig. 6
Fig. 6 Flowchart of the proposed matching method
Fig. 8 Matching results of Fig. 7