Abstract
Using visual odometry and inertial measurements, indoor and outdoor positioning systems can perform accurate self-localization in unknown, unstructured environments where absolute positioning systems (e.g. GNSS) are unavailable. However, the achievable accuracy is strongly affected by calibration residuals, the quality of the noise model, and similar factors. These unavoidable uncertainties of sensors and data processing must be taken into account and handled via error propagation, which allows them to be propagated through the entire system. The central filter of the system (e.g. a Kalman filter) can then exploit the enhanced statistical model and use the propagated errors to calculate an optimal result. In this paper, we focus on the uncertainty calculation for an elementary part of optical navigation, the template feature matcher. First, we propose a method to model the image noise. Then we use Taylor’s theorem to extend two very popular and efficient template feature matchers, the sum of absolute differences (SAD) and the normalized cross-correlation (NCC), to obtain sub-pixel matching results. Based on the proposed noise model and the extended matchers, we propagate the image noise to the uncertainties of the sub-pixel matching results. Although SAD and NCC are used here, the image noise model can easily be combined with other feature matchers. We evaluate our method on the Integrated Positioning System (IPS) developed by the German Aerospace Center. The experimental results show that our method improves the quality of the measured trajectory and increases the robustness of the system.
1 Introduction
Uncertainty occurs in almost every element of a computer vision system. For example, the measurements of sensors contain noise, their calibration is affected by errors, and so on. In order to achieve high accuracy, the uncertainty of each element should be taken into account. Only if the uncertainties are propagated correctly through the whole system can the central filter (e.g. a Kalman filter) make use of them to calculate the optimal result via a statistical model.
In this paper, we focus on the uncertainty of the image (i.e. image noise) and the uncertainty of sub-pixel template feature matching. We propose an image noise model suitable for real-time processing. This noise model can be combined with our proposed sub-pixel matching algorithm to calculate the uncertainty of template matching without significant computational overhead. Concrete uncertainty calculation methods are presented for the SAD and NCC matchers, but they can easily be ported to other template feature matching algorithms.
Feature extraction and feature matching are elementary parts of many computer vision applications, such as optical navigation systems (e.g. SLAM [4, 20, 24], IPS [10, 11]). In these applications, features are extracted from one image by a feature extractor (e.g. FAST, AGAST [23, 26, 32]), and a feature matcher (e.g. SAD, NCC, KLT [12, 21, 28]) matches the features to another image. Image noise affects the matching result and therefore influences the performance of the whole system. Much research has focused on the problem of feature uncertainties [15, 16, 27, 31].
In [31] the authors present a framework to calculate the uncertainty of scale-invariant features (SIFT) [19] and speeded-up robust features (SURF) [3]. In that paper, the uncertainty of a feature depends on its scale and neighborhood. The experiments show that the proposed method improves the performance of bundle adjustment. However, due to the complexity of calculating SIFT features, this method may not be usable for real-time applications, especially on mobile platforms.
In [16] the authors propose a method for incorporating the uncertainty of features into homography and fundamental matrix calculations. The paper shows that in most cases the results can be improved by considering the covariances of the features. However, for lack of a noise model, the method only obtains rough feature uncertainties.
In our method, we assume that the covariance of the feature extraction step is zero, because the template feature matcher produces pixel-by-pixel matching results. The uncertainties of feature extraction can therefore be omitted, and only the noise in both images impacts the feature matching step. Combined with our proposed image noise model, the template feature matcher provides matching results together with covariance matrices whose values are propagated from the image noise. This approach simplifies the calculation and enables real-time processing without losing accuracy. As shown in Sect. 4, the uncertainty can be used to identify and eliminate features with high uncertainties, which usually indicate mismatched features. The most important benefit is that the uncertainty of the matching can be included in further calculations, e.g. triangulation, ego-motion calculation of the system, etc. Propagating the uncertainties through the whole calculation chain to the central filter significantly increases the stability and the quality of the results.
This paper is organized as follows: In Sect. 2 an introduction to our image noise model is given. Such a model can be used to calculate the uncertainties of sub-pixel matching results as well as for further processing. In Sect. 3 a sub-pixel template matching algorithm and a method to propagate uncertainties from image noise to sub-pixel matching results are described. Experimental results are presented in Sect. 4, and Sect. 5 concludes the paper.
2 Image Noise Model
Even though it varies between cameras and scenes, image noise is always present in images taken with digital cameras. There are two major sources of noise. Firstly, fixed-pattern noise is caused by different light sensitivities (photo response non-uniformity - PRNU [2]) and signal offsets (dark signal non-uniformity - DSNU [8]) of the pixels of an image sensor. This noise does not change over short time spans and is usually corrected by the camera itself. Secondly, dynamic noise changes from image to image even without a change of the input signal. It is mainly caused by the read-out electronics (read-out noise [25]), but also by the stochastic nature of the incoming photons (photon noise). This is a simple model; more accurate ones can be applied if needed.
To control the impact of image noise on image processing, it can be treated as the uncertainty of an image. Such uncertainties can be handled via propagation of uncertainties, which allows them to be propagated through the entire computer vision system. In order to achieve this goal, the mathematical model of the image noise must be known. Many noise models have been proposed in the computer vision community, e.g. [5]. However, they either do not quantify the noise (e.g. salt-and-pepper noise [14]) or are built as generic probability models (e.g. Gaussian noise [9]) which do not take into account the variation between different camera systems.
In this paper, we propose a method to build an image noise model which is suitable for real-time processing. We assume that the parameters of the noise model differ for each camera, so that the model building can be done during the camera calibration step. During calibration, a batch of M frames (e.g. 100) of a fixed scene is taken within a short time (less than 1 min), yielding M almost identical images. The mean value and standard deviation of each pixel are calculated over the M frames. Hence:

\(\varvec{g}(i,j) = \begin{bmatrix} g(i,j)_1&g(i,j)_2&\cdots&g(i,j)_M \end{bmatrix}^\mathsf {T} \qquad (1)\)
where \(g(i,j)_m\) is the local gray value at coordinate (i, j) in frame m. The standard deviation of this input vector can easily be calculated. The relation between all mean values and standard deviations can be displayed in a graph, which is done in Fig. 1 for a set of sample images. The standard deviations can be seen as a function of the corresponding mean values. It is obvious that the standard deviation grows with the mean gray value. This reveals the relation between pixel noise and the gray value of the pixel. On the other hand, because of electronic noise, the standard deviation is greater than zero even for dark pixels. Based on this, we propose a noise model for each pixel:

\(\sigma (I) = \sqrt{N_E^2 + G \cdot I} \qquad (2)\)
\(N_E^2\) is the variance of the electronic noise of the camera in gray-scale values; the second part stems from shot noise (\(\text {shot noise} = \sqrt{signal}\) [13]), where I is the gray value of the pixel and G is a gain parameter. Next, a Gauss–Newton algorithm is used to estimate the model parameters from the mean values and standard deviations of the calibration images, fitting curve (2) to the data. Equation (3) shows the Gauss–Newton iteration:

\(\beta ^{(s+1)} = \beta ^{(s)} - \left( J_r^\mathsf {T} J_r \right) ^{-1} J_r^\mathsf {T}\, r(\beta ^{(s)}) \qquad (3)\)
The Gauss–Newton algorithm calculates the result iteratively. In Eq. (3), \(\beta \) is the vector of variables to be estimated (in our case \(N_E\) and G), and the superscript s indicates the value at the sth iteration. \(J_r\) is the Jacobian matrix of the residual function \(r(\beta )\), where

\(r(\beta ) = y - f(I, \beta ) \qquad (4)\)
In our case y is the calculated standard deviation of the gray values of the image set, and \(f(I,\beta )\) is the proposed noise model (Eq. (2)). Starting from an initial \(\beta ^{(0)}\), the Gauss–Newton algorithm converges to a stable \(\beta \) (Fig. 1) after several iterations. More details can be found in [22]. Knowing the parameters G and \(N_E\) completes the noise model. This noise information can be used in further processing, e.g. to model the uncertainty of template matching as introduced in Sect. 3.2.
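As an illustration, the calibration fit above can be sketched in a few lines of NumPy. The sketch assumes the noise model of Eq. (2) has the form \(\sigma (I) = \sqrt{N_E^2 + G \cdot I}\) and uses a plain, undamped Gauss–Newton iteration; the function and variable names are ours, not from the paper.

```python
import numpy as np

def fit_noise_model(means, stds, n_e0=1.0, g0=0.02, iters=100):
    """Fit sigma(I) = sqrt(N_E^2 + G*I) to (mean, std) samples with a
    plain Gauss-Newton iteration (Eq. (3)); beta = (N_E, G)."""
    beta = np.array([n_e0, g0], dtype=float)
    for _ in range(iters):
        n_e, g = beta
        model = np.sqrt(n_e ** 2 + g * means)      # f(I, beta)
        r = stds - model                           # residuals r(beta), Eq. (4)
        # Jacobian of r w.r.t. (N_E, G) is minus the model derivatives
        J = np.column_stack([-n_e / model, -means / (2.0 * model)])
        step = np.linalg.solve(J.T @ J, J.T @ r)
        beta -= step
        if np.linalg.norm(step) < 1e-12:
            break
    return beta
```

On noise-free synthetic data generated from known \(N_E\) and G, the iteration recovers the parameters after a handful of steps; on real calibration data a damped variant (e.g. Levenberg–Marquardt) may be more robust.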
3 Error Model for Template Sub-pixel Matching Algorithm
Once the image noise model is known, it can be incorporated into the data processing chain to obtain uncertainty information. In this section, we focus on the uncertainty of the feature matching result based on the proposed noise model. The calculated uncertainty of the matching result can easily be propagated to further calculations.
The SAD and NCC template feature matchers [1, 6] are extensively used in stereo feature matching, feature tracking, etc. Therefore these two feature matchers are used in this paper. Moreover, we extend them via Taylor’s theorem to obtain sub-pixel matching results. Sub-pixel matching is a well-researched topic in the computer vision community, and many sub-pixel matching algorithms have been proposed [17, 21, 30]. Descriptor-based feature extraction algorithms [3, 18, 19] can produce sub-pixel matching results as well. All of these methods achieve good sub-pixel matching results. However, they require iterative calculation, image pyramids, difference of Gaussians, or brute-force search, respectively, which are too “heavy” for platforms with low computational resources. Our method is derived from polynomial interpolation; it is fast, easy to implement, and achieves acceptable results, and the uncertainty of the matching result can be calculated in combination with our noise model.
3.1 Template Sub-pixel Matching Algorithm
The sub-pixel matching algorithm consists of three steps. First, the common SAD/NCC template feature matcher is used to match the features, as follows. For each extracted feature in the first image, a \(5\times 5\) pixel region around it is taken as a template. The template is then shifted over the second image pixel by pixel. For each position, the SAD/NCC value between the template and the covered area in the second image is calculated. The search area in the second image is limited by constraints (e.g. the epipolar line). The position in the second image with the lowest SAD value or the highest NCC value is the matched feature. This step outputs matching results with pixel-level coordinates.
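The pixel-level matching step above can be sketched as follows. This is a minimal version that uses a square search window rather than an epipolar constraint; the function and parameter names are ours.

```python
import numpy as np

def sad_match(img_left, img_right, feat, radius=10, half=2):
    """Pixel-level SAD matching: a (2*half+1)x(2*half+1) template around
    feat = (row, col) in img_left is shifted over a square search window
    in img_right; the position with the lowest SAD value wins."""
    r, c = feat
    tpl = img_left[r - half:r + half + 1, c - half:c + half + 1].astype(np.int64)
    best_sad, best_pos = None, None
    for rr in range(r - radius, r + radius + 1):
        for cc in range(c - radius, c + radius + 1):
            if rr - half < 0 or cc - half < 0:
                continue  # template would leave the image
            patch = img_right[rr - half:rr + half + 1,
                              cc - half:cc + half + 1].astype(np.int64)
            if patch.shape != tpl.shape:
                continue  # template would leave the image on the far side
            sad = int(np.abs(patch - tpl).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (rr, cc)
    return best_pos, best_sad
```

In practice the double loop would be replaced by a vectorized or epipolar-constrained search; the sketch only illustrates the SAD criterion itself.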
Second, in case the search area of the first step does not cover all \(3\times 3\) neighboring positions around the matched feature (e.g. in the epipolar line case), the SAD/NCC values for these positions are calculated with the same template. Then a \(3\times 3\) SAD/NCC matrix \(\varvec{A}\) can be picked up, as shown in Fig. 2.
In the third step, the values of matrix \(\varvec{A}\) are interpreted as a surface in 3D space with the central element located in a valley (SAD) or on a hill (NCC). A two-dimensional interpolation of the values yields a more precise position of the surface’s extremum: the sub-pixel position of the matched feature. The interpolation follows the second-order Taylor expansion

\(f(x + \delta x, y + \delta y) \approx f(x,y) + \frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial y}\,\delta y + \frac{1}{2}\frac{\partial ^2 f}{\partial x^2}\,\delta x^2 + \frac{\partial ^2 f}{\partial x \partial y}\,\delta x\,\delta y + \frac{1}{2}\frac{\partial ^2 f}{\partial y^2}\,\delta y^2 \qquad (5)\)

where f(x, y) represents the values of matrix \(\varvec{A}\) and x and y are the feature coordinates. If the feature coordinate is expressed in vector form \(\varvec{x} = \begin{bmatrix} x&y\end{bmatrix}^\mathsf {T}\), the expansion can be written as:

\(f(\varvec{x} + \delta \varvec{x}) \approx f(\varvec{x}) + \varvec{g}^\mathsf {T}\,\delta \varvec{x} + \frac{1}{2}\,\delta \varvec{x}^\mathsf {T}\,\varvec{H}\,\delta \varvec{x} \qquad (6)\)

where \(\varvec{g}\) is the gradient and \(\varvec{H}\) the Hessian of f. The local extremum is found by setting the derivative with respect to \(\delta \varvec{x}\) to zero (Eq. (7)):

\(\frac{\partial f}{\partial \delta \varvec{x}} = \varvec{g} + \varvec{H}\,\delta \varvec{x} = 0 \qquad (7)\)

which leads to:

\(\delta \hat{\varvec{x}} = -\varvec{H}^{-1}\,\varvec{g} \qquad (8)\)

Equation (8) outputs two real numbers, taken as offsets for the feature coordinates x and y. Adding these values to the pixel-level feature coordinate yields the sub-pixel coordinate of the feature.
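The gradient and Hessian needed for Eq. (8) can be estimated from the \(3\times 3\) matrix \(\varvec{A}\) by central differences. A minimal sketch (the naming is ours; x is the column axis, y the row axis):

```python
import numpy as np

def subpixel_offset(A):
    """Given the 3x3 SAD/NCC matrix A (A[1,1] is the pixel-level optimum),
    fit the second-order Taylor model of Eq. (6) via central differences
    and return the (dx, dy) sub-pixel offset of its extremum."""
    gx = (A[1, 2] - A[1, 0]) / 2.0
    gy = (A[2, 1] - A[0, 1]) / 2.0
    hxx = A[1, 0] - 2.0 * A[1, 1] + A[1, 2]
    hyy = A[0, 1] - 2.0 * A[1, 1] + A[2, 1]
    hxy = (A[2, 2] - A[2, 0] - A[0, 2] + A[0, 0]) / 4.0
    H = np.array([[hxx, hxy], [hxy, hyy]])
    g = np.array([gx, gy])
    return -np.linalg.solve(H, g)  # Eq. (8): -H^{-1} g
```

For a surface that is exactly quadratic around the pixel-level optimum, this recovers the extremum position exactly; offsets larger than about half a pixel indicate that the pixel-level matcher picked the wrong cell.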
3.2 Propagation of Uncertainty of the SAD Feature Matcher
In this subsection, the proposed image noise model is applied to the SAD sub-pixel matching algorithm. The aim is to obtain an uncertainty model of the matching procedure. The SAD equation is as follows:

\(\mathrm {SAD}(u,v) = \sum _{i}\sum _{j} \left| \varvec{S}(u + i, v + j) - \varvec{T}(i,j) \right| \qquad (9)\)
\(\varvec{S}(u + i, v + j)\) is the search area in the second image and \(\varvec{T}\) is the template from the first image. For each matched feature pair, the SAD uncertainty propagation consists of two parts. The first part propagates the image noise to the uncertainty of the \(3\times 3\) matrix \(\varvec{A}\). The second part takes the uncertainty of \(\varvec{A}\) and produces the uncertainty of the sub-pixel matching result.
Part 1: From the theory of linear propagation of uncertainties [7] it is known that, in order to propagate the uncertainty, a matrix \(\varvec{F}\) which linearizes the SAD calculation must be known:

\(\varvec{a} = \varvec{F}\,\varvec{v} \qquad (10)\)
Referring to Fig. 2, the \(9 \times 1\) vector \(\varvec{a}\) of Eq. (10) is the reformatted \(3 \times 3\) matrix \(\varvec{A}\). The vector \(\varvec{v}\) contains all pixel values from the template and the \(7 \times 7\) search area in the second image. The method for constructing \(\varvec{F}\) is described in the following.
The template from the first image is reformatted, from a \(5\times 5\) matrix \(\varvec{V}_f\) into a \(25\times 1\) vector \(\varvec{v}_f\). The corresponding \(7\times 7\) area \(\varvec{V}_s\) (see Fig. 2) in the second image is reformatted into a \(49\times 1\) vector \(\varvec{v}_s\).
By concatenating these two vectors, a \(74\times 1\) vector \(\varvec{v}= [\varvec{v}_s \ \ \varvec{v}_f]^\mathsf {T}\) (see Fig. 3) is defined and used for further calculations. Furthermore, a \(9\times 74\) matrix \(\varvec{F}\) (see Fig. 3) is built. Initially, \(\varvec{F}\) is set to be a zero matrix (i.e. all of its entries equal zero). Then, some entries of \(\varvec{F}\) are calculated as follows:
where \(\varvec{V}_f(i,j)\) denotes the entry located in the ith column and jth row of matrix \(\varvec{V}_f\).
The \(9\times 49\) matrix \(\varvec{F}_s\) is the left part of \(\varvec{F}\) and the \(9\times 25\) matrix \(\varvec{F}_f\) is the right part, as shown in Fig. 3. Parameters m and n are shift coordinates of the template \(\varvec{V}_f\) over the search area \(\varvec{V}_s\) in horizontal direction and vertical direction, respectively.
For example, the blue area in Fig. 2(b) indicates the shift coordinate (1, 1), and the red area indicates (2, 2). The sign function \(\mathrm {sgn}\) is defined as follows:

\(\mathrm {sgn}(x) = {\left\{ \begin{array}{ll} +1 &{} x > 0 \\ 0 &{} x = 0 \\ -1 &{} x < 0 \end{array}\right. } \qquad (12)\)
Since the template \(\varvec{V}_f\) only covers part of the search area \(\varvec{V}_s\) for each (m, n), some entries of \(\varvec{F}_s\) remain 0. Therefore, the values in \(\varvec{F}\) come from the set \(\{-1, 0, +1\}\). Finally, we obtain a dynamically changing matrix \(\varvec{F}\), as its values differ from feature to feature. These steps guarantee that the vector \(\varvec{a}\) in Eq. (10) is always identical to the result of the standard absolute-difference calculation.
Once the matrix \(\varvec{F}\) is known, the SAD algorithm becomes a linear calculation. From our proposed noise model, the noise vector \(\varvec{v}_{n}\) of all pixels in \(\varvec{v}\) can be calculated. Assuming that the noise of the individual pixels is uncorrelated, a \(74\times 74\) covariance matrix \(\varvec{\varSigma _v}\) is defined. Its diagonal elements are \(\varvec{v}_{n}^2\) and all other entries are 0. The covariance of \(\varvec{a}\) can then be calculated as:

\(\varvec{\varSigma _a} = \varvec{F}\,\varvec{\varSigma _v}\,\varvec{F}^\mathsf {T} \qquad (13)\)
Hence, \(\varvec{\varSigma _a}\) is the \(9\times 9\) covariance matrix of \(\varvec{a}\).
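Part 1 can be checked numerically. The sketch below builds \(\varvec{F}\) from the signs \(\mathrm {sgn}(S - T)\) for each of the nine shifts; by construction \(\varvec{F}\varvec{v}\) reproduces the direct SAD values, and \(\varvec{\varSigma _a} = \varvec{F}\varvec{\varSigma _v}\varvec{F}^\mathsf {T}\) follows as a plain matrix product. The row-major index layout is our reading of Fig. 3, and the names are ours.

```python
import numpy as np

def build_F(search7, template5):
    """Linearize SAD as a = F @ v (Eq. (10)): v stacks the 49 search-area
    pixels (row-major) followed by the 25 template pixels; each of the 9
    rows of F holds sgn(S - T) for one template shift (m, n)."""
    F = np.zeros((9, 74))
    for m in range(3):          # vertical shift
        for n in range(3):      # horizontal shift
            row = 3 * m + n
            s = np.sign(search7[m:m + 5, n:n + 5] - template5)
            for i in range(5):
                for j in range(5):
                    F[row, 7 * (m + i) + (n + j)] = s[i, j]  # F_s part
                    F[row, 49 + 5 * i + j] = -s[i, j]        # F_f part
    return F
```

Because \(|x| = \mathrm {sgn}(x)\,x\), each row of \(\varvec{F}\) applied to \(\varvec{v}\) yields exactly the SAD value of the corresponding shift, which makes the linearization exact at the current pixel values.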
Part 2: The task of the second part is to propagate the uncertainties from \(\varvec{\varSigma _a}\) to the final sub-pixel result. From Eqs. (6) and (8), the sub-pixel calculation step is a non-linear function, so the Jacobian matrix of Eq. (8) is calculated. First, Eq. (8) can be written as:

\(\delta \hat{\varvec{x}} = -\begin{bmatrix} h_{11}&h_{12} \\ h_{12}&h_{22} \end{bmatrix}^{-1} \begin{bmatrix} g_{1} \\ g_{2} \end{bmatrix} \qquad (14)\)

where the gradient and Hessian entries are obtained from central differences of the SAD/NCC values:

\(g_{1} = \tfrac{1}{2}(a_{23} - a_{21}), \quad g_{2} = \tfrac{1}{2}(a_{32} - a_{12}) \qquad (15)\)

\(h_{11} = a_{21} - 2a_{22} + a_{23}, \quad h_{22} = a_{12} - 2a_{22} + a_{32}, \quad h_{12} = \tfrac{1}{4}(a_{11} - a_{13} - a_{31} + a_{33}) \qquad (16)\)
\(a_{ij}\) is the element of matrix \(\varvec{A}\) located in row i and column j. The Jacobian matrix is calculated as:
\(\varvec{J}\) is a \(2\times 9\) matrix, so the last step of the calculation is:

\(\varvec{\varSigma }_{\delta \varvec{\hat{x}}} = \varvec{J}\,\varvec{\varSigma _a}\,\varvec{J}^\mathsf {T} \qquad (17)\)
\(\varvec{\varSigma }_{\delta \varvec{\hat{x}}}\) is a \(2\times 2\) covariance matrix, the diagonal elements are the variance values of the matched sub-pixel coordinate, and the off-diagonal entries are the covariance values of the sub-pixel coordinates. This matrix finally includes the uncertainty information propagated from the image noise to the SAD sub-pixel matching result and can be used for further processing.
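For part 2, a finite-difference Jacobian is a simple alternative to the analytic one and is convenient for cross-checking Eq. (17). In the sketch below, the helper replicates the central-difference sub-pixel step; all names are ours.

```python
import numpy as np

def subpixel(A):
    # second-order Taylor extremum of the 3x3 matrix A (cf. Eq. (8))
    g = np.array([(A[1, 2] - A[1, 0]) / 2.0, (A[2, 1] - A[0, 1]) / 2.0])
    hxy = (A[2, 2] - A[2, 0] - A[0, 2] + A[0, 0]) / 4.0
    H = np.array([[A[1, 0] - 2 * A[1, 1] + A[1, 2], hxy],
                  [hxy, A[0, 1] - 2 * A[1, 1] + A[2, 1]]])
    return -np.linalg.solve(H, g)

def propagate(A, sigma_a, eps=1e-6):
    """Propagate the 9x9 covariance of A to the 2x2 covariance of the
    sub-pixel offset via a finite-difference Jacobian J (2x9) and
    Sigma = J @ sigma_a @ J.T (cf. Eq. (17))."""
    base = subpixel(A)
    J = np.zeros((2, 9))
    for k in range(9):
        Ap = A.copy()
        Ap[k // 3, k % 3] += eps  # perturb one entry of A
        J[:, k] = (subpixel(Ap) - base) / eps
    return J @ sigma_a @ J.T
```

The analytic Jacobian is preferable in a real-time system, but the numeric version needs no symbolic derivation and agrees with it up to the finite-difference error.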
3.3 Propagation of Uncertainty of the NCC Feature Matcher
The normalized cross-correlation (NCC) template matching algorithm is very similar to the SAD algorithm; NCC uses Eq. (18) to calculate the normalized cross-correlation between the template and the search area:

\(\mathrm {NCC} = \frac{1}{n} \sum _{i}\sum _{j} \frac{\left( \varvec{S}(u + i, v + j) - \varvec{\bar{S}}\right) \left( \varvec{T}(i,j) - \varvec{\bar{T}}\right) }{\sigma _{\varvec{S}}\,\sigma _{\varvec{T}}} \qquad (18)\)
In this equation, \(\varvec{S}(u + i, v + j)\) is the search area in the second image and \(\varvec{T}\) is the template from the first image; n is the number of template pixels, and \(\varvec{\bar{S}}\), \(\varvec{\bar{T}}\) are the means of the search area and the template. The standard deviations of \(\varvec{S}\) and \(\varvec{T}\) are denoted \(\sigma _{\varvec{S}}\) and \(\sigma _{\varvec{T}}\). Because of the non-linear nature of NCC, the error propagation for the \(3 \times 3\) NCC matrix \(\varvec{A}\) must be done with a Jacobian matrix. As an example, with the same template size as in Fig. 3, the Jacobian matrix is calculated as follows:
Here f is the NCC function of Eq. (18), \( p_{ln} \) are the pixel values of the template from the first image and \(p_{rn}\) are the pixel values of the \(7 \times 7\) search area in the second image. \( f_1 \) maps the template and the search area to the first entry of the \( 3\times 3 \) NCC matrix, and so on. The \( 9\times 74 \) matrix \( \varvec{J}_{ncc} \) is used in the same way as \(\varvec{F}\) in Sect. 3.2. After the calculation of the covariance matrix \(\varvec{\varSigma _a}\), the remaining steps are identical to part 2 of Sect. 3.2. To avoid numerical errors, we recommend using symbolic computation (e.g. the MATLAB Symbolic Math Toolbox) to derive the final form of \( \varvec{J}_{ncc} \).
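As in the SAD case, \( \varvec{J}_{ncc} \) can also be approximated by finite differences, which is a convenient way to cross-check a symbolic derivation. A sketch with our naming; the NCC helper omits the explicit \(1/n\) factor of Eq. (18) but is algebraically equivalent when population standard deviations are used.

```python
import numpy as np

def ncc(patch, tpl):
    """Normalized cross-correlation of two equal-sized patches
    (equivalent to Eq. (18) with population standard deviations)."""
    p = patch - patch.mean()
    t = tpl - tpl.mean()
    return (p * t).sum() / np.sqrt((p * p).sum() * (t * t).sum())

def ncc_jacobian(search7, template5, eps=1e-6):
    """9x74 finite-difference Jacobian of the 3x3 NCC matrix w.r.t. the
    49 search pixels and the 25 template pixels (numeric stand-in for
    the symbolic J_ncc)."""
    def scores(v):
        s = v[:49].reshape(7, 7)
        t = v[49:].reshape(5, 5)
        return np.array([ncc(s[m:m + 5, n:n + 5], t)
                         for m in range(3) for n in range(3)])
    v0 = np.concatenate([search7.ravel(), template5.ravel()]).astype(float)
    base = scores(v0)
    J = np.zeros((9, 74))
    for k in range(74):
        v = v0.copy()
        v[k] += eps
        J[:, k] = (scores(v) - base) / eps
    return J
```

The finite-difference version is slower than a precomputed symbolic Jacobian but requires no algebra and suffices for offline verification.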
4 Experiment Results
To assess the quality of the proposed method, we designed three experiments. The first experiment quantitatively verifies the proposed algorithm, the second is a general feature matching test, and the third evaluates the proposed method within the optical navigation system IPS [11], developed at the German Aerospace Center (DLR).
The first test is designed to verify the uncertainty propagation. The testing method is similar to a Monte Carlo test, with the difference that we do not generate artificial noise and add it to the images. Instead, we take a set of images that contain real noise from the camera system. The details of the test are described below.
The first step is similar to the noise model calculation step: The stereo camera system takes 100 image pairs of a fixed scene within a short time. The image contents are almost identical but affected by camera noise. Next, a feature extractor detects features in the first left image, and a sub-pixel template matcher matches them to the first right image (without the uncertainty propagation step). This step is repeated for all image pairs, but the feature extraction step is skipped; instead, the feature coordinates from the first left image are reused. Since the features keep the same coordinates over all 100 frames, 100 different stereo matching results are obtained; the standard deviation of the resulting sub-pixel offsets is calculated and drawn as a curve in Fig. 4. These empirical results can then be compared with the propagated uncertainties calculated in the next step.
In the second step, only the first image pair is needed. We apply the sub-pixel template matcher with the uncertainty propagation step to the first image pair. This yields the uncertainties of the matched features as propagated from the image noise. Figure 4 indicates that the noise model and the uncertainty propagation algorithm accurately reflect the real uncertainties of the matching results.
The second experiment checks the performance of the proposed algorithm on a stereo feature matching problem. The sub-pixel matching algorithm operates on a stereo camera system whose noise model has already been calculated with our method. Figure 5 shows the images from the left and right cameras. We use the AGAST [23] feature extractor to detect features in the left image and a SAD template matcher to match them to the right image under the epipolar constraint. The green and orange crosses in the left image are the features successfully matched by a common SAD template matcher; the yellow and cyan crosses symbolize mismatches. The orange crosses are the features filtered out by our proposed sub-pixel matching algorithm because of their high uncertainty (more than 0.4 pixel). In fact, the orange features in Fig. 5 (near the cupboard and the white computer monitor) cannot be seen from the right camera’s perspective at all, yet the common template matcher wrongly matches them to the right image (not drawn). The remaining crosses in the left and right images are the features successfully matched after the sub-pixel matching step. This test shows that the propagated uncertainty makes it possible to filter out mismatched features, confirming the correctness of the proposed algorithm from another perspective. Filtering the mismatched features also improves the robustness of the system.
The last test is based on an optical navigation project; the test platform is IPS. IPS was developed for real-time vision-aided inertial navigation [10, 11], especially for environments where GNSS is unavailable. IPS is a Kalman-filter-based optical navigation system. In the previous version, the standard NCC and SAD feature matchers were used for stereo matching and tracking, respectively, and the matching results were integer pixel coordinates. Moreover, lacking a noise model, uncertainties of the matching results could not be obtained. However, the Kalman filter requires uncertainty information, so the previous version only used a rough uncertainty for the matching results (e.g. the quantization error \(\frac{1}{12}\) [29]). These problems are solved by our proposed method. This experiment compares the trajectory measured by the previous IPS version with that of IPS combined with our noise model and sub-pixel algorithm.
For test purposes, a dataset was recorded by walking with the IPS through a realistic scene along a path of about 410 m. Such a physical run is called a session; we recorded eight sessions in total. Because ground truth is lacking, the start and end positions of the loop are chosen to be exactly the same. As the system only considers the motion information extracted from two consecutive image pairs and does not recognize that it has visited a place before, its performance can be measured by the error between the known start position and the calculated end position. Since a RANSAC algorithm is used in the optical navigation, the calculated positions have a random component; therefore each session is processed (offline) 50 times, and the root mean square (RMS) of the trajectory errors is taken as the final result. More details about the test procedure can be found in [32].
The IPS achieves state-of-the-art results; the trajectory error is about \(0.1\%\) of the traveled distance, which is usually hard to improve upon. However, as Table 1 shows, the accuracy of the measurement is increased by about 12% by our method. The new algorithm also yields a better standard deviation, with an improvement of about 44%, which indicates improved robustness of the system.
5 Conclusion
In this paper, we propose a method to model image noise; this noise model can be obtained during the normal camera calibration step. Based on the noise model, uncertainty propagation for sub-pixel matching algorithms is described. The proposed image noise model and the method for obtaining the uncertainty of sub-pixel matching results can be widely used in computer vision applications. The performance of the proposed methods is evaluated with a full system test. The experimental results show that the noise model accurately reflects the uncertainty of the sub-pixel matching results. An additional test shows that the uncertainty calculation can even be used as a mismatch filter without any computational overhead. The last test concentrates on the performance of the new algorithm in combination with an optical navigation system. The results prove that the proposed method decreases the trajectory errors and the standard deviation of the errors simultaneously, showing that our method yields significantly better results with little effort. In future work, we will implement the uncertainty propagation method for other sub-pixel matching algorithms.
References
Alsaade, F.: Fast and accurate template matching algorithm based on image pyramid and sum of absolute difference similarity measure. Res. J. Inf. Technol. 4(4), 204–211 (2012)
Amerini, I., Caldelli, R., Cappellini, V., Picchioni, F., Piva, A.: Estimate of PRNU noise based on different noise models for source camera identification. IJDCF 2(2), 21–33 (2010)
Bay, H., Tuytelaars, T., Van Gool, L.: SURF: speeded up robust features. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 404–417. Springer, Heidelberg (2006). https://doi.org/10.1007/11744023_32
Bloesch, M., Omari, S., Hutter, M., Siegwart, R.: Robust visual inertial odometry using a direct EKF-based approach. In: Intelligent Robots and Systems (2015)
Boyat, A.K., Joshi, B.K.: A review paper: noise models in digital image processing. Sig. Image Process.: Int. J. 6(2), 63–75 (2015)
Brunelli, R.: Template Matching Techniques in Computer Vision: Theory and Practice. Wiley, Hoboken (2009)
Clifford, A.: Multivariate Error Analysis: A Handbook of Error Propagation and Calculation in Many-Parameter Systems. Wiley, Hoboken (1973)
Evtikhiev, N.N., Starikov, S.N., Cheryomkhin, P.A., Krasnov, V.V.: Measurement of noises and modulation transfer function of cameras used in optical-digital correlators. International Society for Optics and Photonics (2012)
Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice-Hall, Inc., Upper Saddle River (2006)
Grießbach, D.: Stereo-vision-aided inertial navigation. Ph.D. thesis, Freie Universität Berlin (2014)
Grießbach, D., Baumbach, D., Zuev, S.: Stereo-vision-aided inertial navigation for unknown indoor and outdoor environments. In: 2014 IPIN (2014)
Haralick, R., Shapiro, L.: Computer and Robot Vision, vol. 2. Addison-Wesley Publishing Company, Boston (1993)
Holst, G.C.: CCD Arrays, Cameras, and Displays, 2nd edn. Society of Photo Optical, Bellingham (1998)
Jayaraman: Digital Image Processing, 1st edn. Mc Graw Hill India, New Delhi (2009)
Kanatani, K.I.: Uncertainty modeling and model selection for geometric inference. IEEE Trans. Pattern Anal. Mach. Intell. 26(10), 1307–1319 (2004)
Kanazawa, Y., Kanatani, K.: Do we really have to consider covariance matrices for image features? Electron. Commun. Jpn. 86, 1–10 (2003)
Kim, K.B., Kim, J.S., Choi, J.S.: Fourier based image registration for sub-pixel using pyramid edge detection and line fitting. In: Intelligent Networks and Intelligent Systems. IEEE (2008)
Leutenegger, S., Chli, M., Siegwart, R.Y.: BRISK: binary robust invariant scalable keypoints. In: ICCV. IEEE (2011)
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
Lowry, S., Sunderhauf, N., Newman, P., Leonard, J.J., Cox, D., Corke, P., Milford, M.J.: Visual place recognition: a survey. IEEE Trans. Robot. 32(1), 1–19 (2016)
Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence, IJCAI 1981, vol. 2 (1981)
Madsen, K., Nielsen, H.B., Tingleff, O.: Methods for Non-linear Least Squares Problems (1999)
Mair, E., Hager, G.D., Burschka, D., Suppa, M., Hirzinger, G.: Adaptive and generic corner detection based on the accelerated segment test. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6312, pp. 183–196. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15552-9_14
Mourikis, A.I., Roumeliotis, S.I.: A multi-state constraint Kalman filter for vision-aided inertial navigation. In: Proceedings IEEE ICRA (2007)
Nakamura, J.: Image Sensors and Signal Processing for Digital Still Cameras. Optical Science and Engineering. CRC Press, Boca Raton (2016)
Rosten, E., Porter, R., Drummond, T.: Faster and better: a machine learning approach to corner detection. Pattern Anal. Mach. Intell. 32(1), 105–119 (2010)
Sheorey, S., Keshavamurthy, S., Yu, H., Nguyen, H., Taylor, C.N.: Uncertainty estimation for KLT tracking. In: Jawahar, C.V., Shan, S. (eds.) ACCV 2014. LNCS, vol. 9009, pp. 475–487. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16631-5_35
Shi, J., Tomasi, C.: Good features to track. In: CVPR (1994)
Stein, S., Jones, J.: Modern Communication Principles: With Application to Digital Signaling. McGraw-Hill, New York City (1967)
Thevenaz, P., Ruttimann, U.E., Unser, M.: A pyramid approach to subpixel registration based on intensity. IEEE Trans. Image Process. 7(1), 27–41 (1998)
Zeisl, B., Georgel, P.F., Schweiger, F., Steinbach, E.G., Navab, N., Munich, G.: Estimation of location uncertainty for scale invariant features points. In: BMVC (2009)
Zhang, H., Wohlfeil, J., Grießbach, D.: Extension and evaluation of the AGAST feature detector. In: XXIII ISPRS Congress Annals 2016, vol. 3, pp. 133–137 (2016)
© 2018 Springer International Publishing AG, part of Springer Nature
Zhang, H., Grießbach, D., Wohlfeil, J., Börner, A. (2018). Uncertainty Model for Template Feature Matching. In: Paul, M., Hitoshi, C., Huang, Q. (eds) Image and Video Technology. PSIVT 2017. Lecture Notes in Computer Science(), vol 10749. Springer, Cham. https://doi.org/10.1007/978-3-319-75786-5_33
Print ISBN: 978-3-319-75785-8
Online ISBN: 978-3-319-75786-5