Article

Fast Target Recognition Based on Improved ORB Feature

1 Institute of Information and Communication, Beijing Information Science & Technology University, Beijing 100101, China
2 Key Laboratory of Modern Measurement and Control Technology, Ministry of Education, Beijing Information Science & Technology University, Beijing 100101, China
3 Beijing Tellhow Intelligent Engineering Co., Ltd., Beijing 100176, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(2), 786; https://doi.org/10.3390/app12020786
Submission received: 17 November 2021 / Revised: 6 January 2022 / Accepted: 10 January 2022 / Published: 13 January 2022

Abstract
A novel fast target recognition algorithm is proposed for moving-target recognition in dynamic scenes. To address the poor matching performance of the traditional Oriented FAST and Rotated BRIEF (ORB) algorithm on underexposed or overexposed images caused by illumination, adaptive histogram equalization is combined with the ORB algorithm to obtain better feature point quality and matching efficiency. First, the template image and each frame of the video stream are converted to grayscale. Second, both the template image and the input image from the video stream are processed by adaptive histogram equalization. Third, the ORB feature point descriptors are compared using the Hamming distance. Finally, the K-nearest-neighbor (KNN) matching algorithm is used to match and screen feature points. Based on the number of well-matched feature point pairs, a reasonable threshold is established and the target is classified. The approach is compared and verified by experiments. Experimental results show that the algorithm not only retains the advantages of ORB itself but also significantly improves its performance under underexposure or overexposure. The matching is robust to illumination, the target can be accurately identified in real time, and targets can be accurately classified in small-sample scenarios, meeting actual production requirements.

1. Introduction

As an emerging discipline, machine vision technology has gradually been integrated into people's lives in recent years and has made great progress in both technique and application. Target recognition is a typical industrial application of machine vision technology. In the industrial field, robots equipped with machine vision have gradually begun to replace traditional robots. Accurately identifying the workpiece is the focus and difficulty of a vision robot [1,2,3], and it is also the basis of the robot's grasping operation. In 1999, David Lowe proposed the scale-invariant feature transform (SIFT) [4], a feature matching algorithm built on existing invariant-based feature detection methods. SIFT operates on the local feature points of objects: image scaling and rotation do not affect the detection results, and it is highly robust to noise and other disturbances. Building on SIFT, Bay et al. [5] proposed Speeded-Up Robust Features (SURF), a fast and robust feature extraction and registration algorithm, in 2006. Thanks to Haar features and the introduction of the integral image, SURF outperforms SIFT in speed.
Rosten et al. [6] proposed the FAST corner detector in 2006, which can quickly locate feature points. Calonder et al. [7] put forward the BRIEF descriptor to describe the image region around a feature point. Rublee et al. improved the FAST corner and BRIEF descriptor and proposed the ORB feature, which can effectively replace SIFT and SURF [8]. The ORB algorithm trades a modest loss in feature point accuracy and robustness for higher calculation speed and shorter calculation time, a good compromise between the quality and performance of different types of feature points [9].
Owing to the good performance of ORB features [10,11,12,13,14,15], many scholars at home and abroad have improved the ORB algorithm in different ways [16,17,18,19]. Hong et al. [20] combined ORB feature point matching with an eight-parameter rotation model, improving the detection speed of feature points, and Bing et al. [21] improved the rotation handling in ORB feature point matching, which enhanced matching accuracy. However, studies of the ORB algorithm under poor illumination conditions remain rare. To address this, in this paper the template image and the input image from the video stream are first converted to grayscale. Second, adaptive histogram equalization is applied to improve the quality of both the input image and the template image. Third, feature descriptors are extracted with the ORB algorithm. Finally, the KNN matching algorithm is applied: by comparing the distances of the best and second-best feature matches, a reasonable ratio is set to eliminate mismatches. Compared with the traditional ORB matching algorithm, the improved ORB matching algorithm significantly improves matching performance under poor lighting conditions. Finally, by comparing the number of correct matches between the input image features and all template image features against a reasonable threshold, target classification with small samples is effectively realized.

2. ORB Algorithm Principle (Oriented FAST and Rotated BRIEF)

Oriented FAST and Rotated BRIEF (ORB) builds on the well-known FAST feature detector and BRIEF feature descriptor.

2.1. Feature Point Detection

Image feature points can be understood simply as the more salient points in an image, such as contour points, bright spots in dark areas, and dark spots in bright areas. The ORB algorithm uses the FAST algorithm [22,23,24,25] to find feature points. The core idea of FAST is to find points that stand out: compare a point with its surrounding points, and if it differs from most of them, it can be selected as a feature point [26].
N = \sum_{x \in \mathrm{circle}(p)} \left[\, \left| I(x) - I(p) \right| > \varepsilon_d \,\right]
where I(x) is the grayscale of any point on the circumference, I(p) is the grayscale of the center, and \varepsilon_d is the threshold on the gray-value difference. If N exceeds a given threshold, generally three-quarters of the points on the circle, p is considered a feature point.
The specific calculation process of FAST is as follows. First, select a pixel P from Figure 1 and judge whether it can serve as a feature point; assume its gray value is M. Set a suitable threshold V (for example, 20% of M): two points are considered different when the absolute difference of their gray values exceeds V. Then, taking pixel P as the center, select 16 pixels on a circle of radius 3. P can be regarded as a corner if the gray levels of L consecutive points among these 16 pixels are all larger than M + V or all smaller than M − V. Here, L is set to 12: if at least 12 consecutive points exceed the threshold, P is considered a feature point; otherwise it is not. To obtain faster results, an additional acceleration step is adopted: test the four circle points at 90-degree intervals around the candidate; at least three of them must differ sufficiently from the candidate's gray value, otherwise the candidate is rejected without testing the remaining points. Figure 1 is a schematic diagram of FAST feature point extraction.
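The segment test described above can be sketched in a few lines of NumPy. This is a simplified illustration: the function name `fast_corner_test`, the 20% threshold default, and the hard-coded circle offsets are this sketch's own choices, and production detectors (e.g. OpenCV's FAST) additionally apply the 90-degree pre-test and non-maximum suppression.

```python
import numpy as np

def fast_corner_test(patch, threshold_ratio=0.2, run_length=12):
    """Simplified FAST segment test on a 7x7 patch centred on candidate P.

    Compares the 16 pixels on the circle of radius 3 with the centre grey
    value M; P is a corner if `run_length` consecutive circle pixels are
    all brighter than M + V or all darker than M - V.
    """
    # Offsets (dy, dx) of the 16 circle pixels at radius 3, clockwise.
    circle = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
              (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]
    m = float(patch[3, 3])            # grey value M of the centre pixel P
    v = threshold_ratio * m           # threshold V, e.g. 20% of M
    vals = np.array([patch[3 + dy, 3 + dx] for dy, dx in circle], dtype=float)
    brighter = vals > m + v
    darker = vals < m - v
    # Look for `run_length` consecutive hits on the (wrapped) circle.
    for flags in (brighter, darker):
        wrapped = np.concatenate([flags, flags])
        run = 0
        for f in wrapped:
            run = run + 1 if f else 0
            if run >= run_length:
                return True
    return False
```

A dark centre surrounded by a bright ring passes the test, while a flat patch does not.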
Because the FAST algorithm cannot provide direction information for feature points, the ORB algorithm builds a three-level Gaussian image pyramid to add scale invariance [27] and uses the gray centroid method [28] to endow feature points with rotation invariance. That is, a coordinate system is established with the feature point as the origin, the centroid position is computed over the neighborhood S, and a vector is constructed from the feature point to the centroid. The moments of neighborhood S are
m_{pq} = \sum_{x,y} x^p y^q I(x, y)
where I(x, y) is the gray value of the image and x, y \in [-r, r], with r the radius of the feature point neighborhood. The centroid position C of the neighborhood S is:
C = \left( m_{10}/m_{00},\; m_{01}/m_{00} \right)
The orientation of a FAST feature point is
\theta = \operatorname{arctan2}\left( m_{01}, m_{10} \right)
To preserve the rotation invariance of feature points, x and y must remain inside the circular region of radius r, that is, x, y \in [-r, r], where r is the neighborhood radius.
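A minimal sketch of the gray centroid orientation, computing m10 and m01 over the circular neighborhood exactly as in the moment formulas above (the function name `orientation` and the patch-based interface are illustrative, not from the paper):

```python
import numpy as np

def orientation(patch):
    """Grey-centroid orientation of a square patch centred on a feature point.

    Computes the moments m10 and m01 over the circular neighbourhood of
    radius r and returns theta = arctan2(m01, m10), i.e. the direction
    from the feature point to the intensity centroid.
    """
    r = patch.shape[0] // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]   # coordinates with P at the origin
    mask = xs ** 2 + ys ** 2 <= r ** 2      # keep x, y inside the radius-r circle
    m10 = float((xs * patch * mask).sum())
    m01 = float((ys * patch * mask).sum())
    return np.arctan2(m01, m10)
```

A patch whose bright mass lies to the right of the centre yields an angle near 0, and one whose bright mass lies below yields an angle near π/2.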

2.2. Calculate Feature Point Descriptors

ORB computes feature point descriptors with an improved BRIEF algorithm, fixing BRIEF's primary defect of lacking rotation invariance. The core idea is to select N point pairs in a specific pattern around a feature point p and combine the comparison results of these N point pairs into a descriptor.
The BRIEF descriptor is simple and fast; it is based on the idea that an image neighborhood can be represented by a relatively small number of intensity comparisons.
Define the binary test \tau on the S \times S image neighborhood p:
\tau(p; x, y) = \begin{cases} 1, & p(x) < p(y) \\ 0, & \text{otherwise} \end{cases}
Here, p(x) is the pixel intensity at point x = (u, v)^T in the smoothed image patch p. Choosing n_d (x, y) position pairs uniquely defines a set of binary tests; the BRIEF descriptor is then a binary string of n_d bits:
f_{n_d}(p) := \sum_{1 \le i \le n_d} 2^{i-1}\, \tau(p; x_i, y_i)
The length n_d can be 128, 256, 512, etc.; different values trade off speed, storage efficiency, and distinctiveness.
The binary test in BRIEF considers only single pixels, so it is sensitive to noise. To remedy this defect, each test point in ORB uses a 5 × 5 sub-window within a 31 × 31 pixel neighborhood, the sub-window positions obeying a Gaussian distribution, and the integral image is used to accelerate the computation.
BRIEF itself is undirected and has no rotation invariance. ORB's solution is to add a direction to BRIEF. For any feature set of n binary tests at positions (x_i, y_i), define the 2 \times n matrix
S = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \end{pmatrix}
Using the neighborhood orientation \theta and the corresponding rotation matrix R_\theta, build a rotated version of S, S_\theta = R_\theta S. The steered BRIEF descriptor is then
g_n(p, \theta) := f_{n_d}(p) \mid (x_i, y_i) \in S_\theta
After obtaining steered BRIEF, a greedy search [29] finds the 256 test pairs with the lowest correlation among all candidate pairs, yielding the final descriptor. Figure 2 is a schematic diagram of the descriptor calculation.
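The binary test and the Hamming-distance comparison used later for matching can be illustrated together in a toy sketch. The random sampling pattern here merely stands in for ORB's learned low-correlation pattern, and `brief_descriptor` and `hamming` are hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed sampling pattern of n_d point pairs inside a 31x31 window,
# shared by all descriptors (ORB learns this pattern; random is a stand-in).
N_D = 256
PAIRS = rng.integers(-15, 16, size=(N_D, 4))   # (x1, y1, x2, y2) offsets

def brief_descriptor(image, kp):
    """BRIEF-style binary descriptor: bit i is 1 iff I(p + a_i) < I(p + b_i)."""
    y, x = kp
    bits = np.empty(N_D, dtype=np.uint8)
    for i, (x1, y1, x2, y2) in enumerate(PAIRS):
        bits[i] = 1 if image[y + y1, x + x1] < image[y + y2, x + x2] else 0
    return bits

def hamming(d1, d2):
    """Hamming distance between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))
```

Identical patches give distance 0; inverting the image flips the intensity comparisons, so the distance becomes large.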

2.3. The Flowchart of Basic ORB Algorithm

According to the principle of the ORB algorithm, we get the flow chart shown in Figure 3 below:

3. Feature Extraction and Matching Based on an Improved ORB Algorithm

The traditional ORB algorithm has a poor matching effect on underexposed or overexposed images caused by illumination and is somewhat sensitive to image noise. Before matching, the template images and the images to be matched in the video stream are converted to grayscale to suppress noise. On this basis, the template image and the input image are processed by adaptive histogram equalization, which increases not only the number of feature points in both images but also the number of correctly matched feature points.

3.1. Histogram Equalization

When the gray-value distribution of an image is too concentrated, histogram equalization [30,31,32,33] spreads the gray-level probability distribution, making the image histogram as flat as possible. The transform is:
s = T ( r )
The transform function T(r) is related to the probability density function of the original image by:
s = T(r) = \int_0^r p_r(w)\, dw
The discrete form is:
s_k = T(r_k) = \sum_{j=0}^{k} \frac{n_j}{N} = \sum_{j=0}^{k} p_r(r_j), \quad 0 \le r_k \le 1,\; k = 0, 1, \ldots, L-1
Mapping every gray level through this transform yields the equalized image.
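The discrete transform above maps directly to code; a minimal NumPy version follows (the helper name `equalize_hist` is illustrative, and OpenCV users would typically call `cv2.equalizeHist` instead):

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization of an 8-bit grey image.

    Implements the discrete transform s_k = sum_{j<=k} n_j / N: each grey
    level r_k is mapped through the normalised cumulative histogram.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum() / gray.size             # s_k in [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)  # stretch back to [0, 255]
    return lut[gray]
```

For a low-contrast image whose gray values cluster in a narrow band, the output range is stretched toward the full [0, 255] scale.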

3.2. Adaptive Histogram Equalization

Due to shooting or environmental problems, an image may suffer from uneven brightness, low contrast, and high noise. Features matched on the original image concentrate in the high-contrast areas, while relatively few features are extracted elsewhere, so the obtained feature points cannot describe the whole image. The histogram distributions of the original images are shown in the upper-left and bottom-left pictures of Figure 4. Although ORB feature matching works well under good illumination, its effectiveness drops greatly under insufficient illumination or overexposure.
Adaptive histogram equalization (AHE) is a computer image processing technology used to improve image contrast. Different from the ordinary histogram equalization algorithm, the AHE algorithm changes the image contrast by calculating the local histogram of the image and then redistributing the brightness. Therefore, the algorithm can improve the local contrast of the image and obtain more image details.
After adaptive histogram equalization, details in the dark parts of the original image become clearer, and the candidate points with higher Harris response values change during feature point detection, so high-quality matching point pairs can also be obtained in other areas of the same image. The histogram distributions after adaptive histogram equalization are shown in the upper-right and bottom-right pictures of Figure 4.
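A bare-bones tiled AHE sketch makes the "local histogram" idea concrete. It splits the image into tiles and equalizes each local histogram independently; this deliberately omits the clip limit and bilinear interpolation between tiles that contrast-limited variants (e.g. OpenCV's `cv2.createCLAHE`) add to suppress noise and block artefacts. The name `adaptive_equalize` is this sketch's own:

```python
import numpy as np

def adaptive_equalize(gray, tiles=4):
    """Simplified adaptive histogram equalization (AHE).

    Each of the tiles x tiles regions is equalized against its own local
    histogram, so low-contrast regions are stretched based on local,
    rather than global, statistics.
    """
    out = np.empty_like(gray)
    h, w = gray.shape
    for ty in range(tiles):
        for tx in range(tiles):
            ys = slice(ty * h // tiles, (ty + 1) * h // tiles)
            xs = slice(tx * w // tiles, (tx + 1) * w // tiles)
            tile = gray[ys, xs]
            hist = np.bincount(tile.ravel(), minlength=256)
            cdf = hist.cumsum() / tile.size
            out[ys, xs] = np.round(255 * cdf).astype(np.uint8)[tile]
    return out
```

Because every tile is stretched against its own histogram, a narrow local gray band still expands toward the full range.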

3.3. KNN Matching Algorithm to Eliminate Mismatching

The core idea of the K-nearest-neighbor (KNN) algorithm [34,35,36,37,38,39,40,41] is to find, for each feature point, the K most similar feature points in the other feature space as candidate matches.
In this paper, KNN with K = 2 is adopted, and the ratio R between the distances to the nearest and second-nearest neighbors is computed:
R = \frac{D(V_p, V_q^m)}{D(V_p, V_q^n)}
where V_p is the feature vector of feature point p, V_q^m is the feature vector of its nearest neighbor q in the other image, V_q^n is the feature vector of its second-nearest neighbor in that image, and D is the distance between the vectors.
For each feature point, the best and second-best matches are obtained, with distances recorded as m and n, respectively. Each candidate match is screened by the ratio threshold (0.7 in this paper); matches whose ratio is too large are considered mismatches.
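The K = 2 ratio screening can be sketched as follows, assuming binary descriptors compared by Hamming distance (the function name `ratio_test_matches` is illustrative):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.7):
    """KNN (K=2) matching with ratio screening.

    For each descriptor in desc_a, find its two nearest neighbours in
    desc_b by Hamming distance and keep the match only if
    d(best) / d(second best) < ratio.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != d, axis=1)   # Hamming to all of b
        j1, j2 = np.argsort(dists)[:2]                  # nearest, 2nd nearest
        if dists[j2] > 0 and dists[j1] / dists[j2] < ratio:
            matches.append((i, int(j1)))
    return matches
```

An unambiguous descriptor (much closer to one candidate than to any other) survives the test, while one equidistant from two candidates is discarded as a likely mismatch.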
Based on the above, this paper combines image enhancement technology with the ORB algorithm, preprocessing the input images with adaptive histogram equalization. The experimental results show that the ORB algorithm combined with image enhancement improves feature extraction and matching. The specific process is shown in Figure 5.

4. Experimental Results and Analysis

Under natural conditions, illumination strongly affects collected images; different illumination conditions may cause underexposure or overexposure, degrading image quality and hindering subsequent matching. This paper redistributes image brightness with adaptive histogram equalization to reduce the influence of lighting on the input picture; performance is measured by the number of feature points, the number of matching points, and the running time.
To verify the feasibility of the improved ORB algorithm, this paper compares it with the traditional ORB algorithm in an experimental environment of PyCharm 2021 and OpenCV 4.5.2. The recognition target is a common book in the laboratory, and two groups of images are examined. The first group is an overexposed experimental scene with images of size 523 × 481 in PNG format (Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10). The second group is an underexposed experimental scene with images of size 526 × 489 in PNG format (Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15). The experimental results of the traditional ORB algorithm and the image-enhancement-based ORB algorithm are compared respectively.
A comparison of the data in Table 1 and Table 2 shows that the improved ORB algorithm not only retains the advantages of ORB itself but also improves the matching effect under insufficient or excessive exposure: under overexposure, the feature points increased by 78% and the matching points by 45%; under underexposure, the feature points increased by 23% and the matching points by 34%. These experiments and comparisons demonstrate the effectiveness of the algorithm.

5. Conclusions

Due to shooting or environmental problems, images may suffer from uneven brightness, low contrast, and noise. Features matched on the original image concentrate in the higher-contrast areas, while relatively few features are extracted elsewhere, so the resulting feature points cannot describe the entire image. Although matching works well under good illumination, its effectiveness drops greatly when the light is insufficient or excessive, and false matches may even occur. This paper proposes an improved ORB feature extraction algorithm that combines image enhancement techniques with the ORB algorithm. The adaptive histogram equalization step computes local histograms of the input image and redistributes brightness, improving the local contrast of the input image and recovering more image detail. The results show that the improved ORB algorithm retains the advantages of ORB itself and significantly improves the matching effect under underexposure or overexposure.

Author Contributions

Conceptualization, Y.X. and Q.W.; methodology, Y.X. and Q.W.; validation, Y.X.; formal analysis, Y.X. and Q.W.; investigation, Y.C. and X.Z.; resources, Y.X.; writing—original draft preparation, Q.W.; writing—review and editing, Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Beijing Natural Science Foundation (grant nos. 4192023 and 4202024).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, Z.H. Application of machine vision technology in the field of industrial control. China Comput. Commun. 2018, 17, 87–88. [Google Scholar]
  2. Li, Z. Application in machine vision technology and its automation in mechanical manufacturing. Sci. Technol. Innov. Inf. 2018, 25, 171–172. [Google Scholar]
  3. Wang, F. Development of machine vision technology and its industrial applications. Electron. Technol. Softw. Eng. 2018, 16, 246. [Google Scholar]
  4. Lowe, D.G. Distinctive image features from scale-invariant key points. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  5. Bay, H.; Tuytelaars, T.; Gool, L.V. Surf: Speeded up robust features. In Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; Volume Part I. [Google Scholar]
  6. Rosten, E.; Tom, D. Machine learning for high-speed corner detection. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  7. Calonder, M. Brief: Binary robust independent elementary features. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  8. Yang, B.K.; Cheng, S.Y.; Zheng, Y. Improved ORB feature matching algorithm. Transducer Microsyst. Technol. 2020, 39, 141–144. [Google Scholar] [CrossRef]
  9. Yang, H.F.; Li, H. Image feature points extraction and matching method based on improved ORB algorithm. J. Graph. 2020, 41, 548–555. [Google Scholar] [CrossRef]
  10. Yao, J.; Zhang, P.; Wang, Y.; Luo, Z.; Ren, X. An adaptive uniform distribution ORB based on improved quadtree. IEEE Access 2019, 7, 143471–143478. [Google Scholar] [CrossRef]
  11. Shao, C.; Zhang, C.; Fang, Z.; Yang, G. A deep learning-based semantic filter for ransac-based fundamental matrix calculation and the ORB-slam system. IEEE Access 2020, 8, 3212–3223. [Google Scholar] [CrossRef]
  12. Wang, X.; Zou, J.B.; Shi, D.S. An Improved ORB Image Feature Matching Algorithm Based on SURF. In Proceedings of the 2018 3rd International Conference on Robotics and Automation Engineering (ICRAE), Guangzhou, China, 17–19 November 2018; pp. 218–222. [Google Scholar] [CrossRef]
  13. Wang, Z.; Li, Z.; Cheng, L.; Yan, G. An improved ORB feature extraction and matching algorithm based on affine transformation. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 1511–1515. [Google Scholar] [CrossRef]
  14. Zhao, Y.; Xiong, Z.; Duan, S.; Zhou, S.; Cui, Y. Improved ORB based image registration acceleration algorithm in visual-inertial navigation system. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 5714–5718. [Google Scholar] [CrossRef]
  15. Sun, H.; Wang, P.; Zhang, D.; Ni, C.; Zhang, H. An improved ORB algorithm based on optimized feature point extraction. In Proceedings of the 2020 IEEE 3rd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), Shenyang, China, 20–22 November 2020; pp. 389–394. [Google Scholar]
  16. Zhang, L. Image matching algorithm based on ORB and k-means clustering. In Proceedings of the 2020 5th International Conference on Information Science, Computer Technology and Transportation (ISCTT), Shenyang, China, 13–15 November 2020; pp. 460–464. [Google Scholar] [CrossRef]
  17. Feng, Y.; Li, S. Research on an image mosaic algorithm based on improved ORB feature combined with surf. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 4809–4814. [Google Scholar] [CrossRef]
  18. Yao, H.F.; Guo, B.L. An ORB-based feature matching algorithm. Electron. Des. Eng. 2019, 27, 175–179. [Google Scholar] [CrossRef]
  19. Dai, X.M.; Lang, L.; Chen, M.Y. Research of image feature point matching based on improved ORB algorithm. J. Electron. Meas. Instrum. 2016, 30, 233–240. [Google Scholar] [CrossRef]
  20. Li, X.H.; Xie, C.M.; Jia, Y.H. Rapid moving object detection algorithm based on ORB features. J. Electron. Meas. Instrum. 2013, 27, 455–460. [Google Scholar] [CrossRef]
  21. Bai, X.B. Improved feature points matching algorithm based on speed-up robust feature and oriented fast and rotated brief. J. Comput. Appl. 2016, 36, 1923–1926. [Google Scholar] [CrossRef]
  22. Yan, P.; An, R. Improved fast corner detection algorithm based on fast. Infrared Laser Eng. 2009, 38, 1104–1108. [Google Scholar] [CrossRef]
  23. Zhou, L.L.; Jiang, F. Image matching algorithm based on fast and brief. Comput. Eng. Des. 2015, 5, 1269–1273. [Google Scholar] [CrossRef]
  24. Ding, Y.L.; Wang, J.D.; Qiu, Y.J. Fast feature detection algorithm based on self-adaptive threshold selection. Command. Control Simul. 2013, 35, 53–59. [Google Scholar] [CrossRef]
  25. Rosten, E.; Tom, D. Fusing points and lines for high performance tracking. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, Beijing, China, 17–21 October 2005; Volume 2. [Google Scholar]
  26. Chen, S.C.; Liu, J.H.; He, L.Y. Improved brisk algorithm for image splicing. Chin. J. Liq. Cryst. Disp. 2016, 31, 324–330. [Google Scholar] [CrossRef]
  27. Pu, X.C.; Tan, S.F.; Zhang, Y. Research on the navigation of mobile robots based on the improved fast algorithm. CAAI Trans. Intell. Syst. 2014, 9, 419–424. [Google Scholar]
  28. Fan, X.N.; Gu, Y.F.; Ni, J.J. Application of improved ORB algorithm in image matching. Comput. Mod. 2019, 282, 1–6. [Google Scholar] [CrossRef]
  29. Wang, S.; Wang, H.Y.L.; Wang, X.F. An improved mcmc particle filter based on greedy algorithm for video object tracking. In Proceedings of the 2011 IEEE 13th International Conference on Communication Technology, Jinan, China, 25–28 September 2011. [Google Scholar]
  30. Yelmanov, S.; Olena, H.; Yuriy, R. A new approach to the implementation of histogram equalization in image processing. In Proceedings of the 2019 3rd International Conference on Advanced Information and Communications Technologies (AICT), Lviv, Ukraine, 2–6 July 2019. [Google Scholar]
  31. Gangolli, S.H.; Arnold, J.L.F.; Reena, S. Image enhancement using various histogram equalization techniques. In Proceedings of the 2019 Global Conference for Advancement in Technology (GCAT), Bangaluru, India, 18–20 October 2019. [Google Scholar] [CrossRef]
  32. Tan, S.F.; Nor, A.M.I. Exposure based multi-histogram equalization contrast enhancement for non-uniform illumination images. IEEE Access 2019, 7, 70842–70861. [Google Scholar] [CrossRef]
  33. Dubey, V.; Rahul, K. Adaptive histogram equalization based approach for sar image enhancement: A comparative analysis. In Proceedings of the 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 6–8 May 2021. [Google Scholar]
  34. Wang, J.; Yu, M.; Ren, H.Z. An improved ORB algorithm for image stitching. Chin. J. Liq. Cryst. Disp. 2018, 33, 520–527. [Google Scholar] [CrossRef]
  35. Chen, L.; Li, M.; Su, W.; Wu, M.; Hirota, K.; Pedrycz, W. Adaptive feature selection-based AdaBoost-KNN with direct optimization for dynamic emotion recognition in human–robot interaction. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 5, 205–213. [Google Scholar] [CrossRef]
  36. Tu, B.; Wang, J.; Kang, X.; Zhang, G.; Ou, X.; Guo, L. KNN-Based representation of super pixels for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4032–4047. [Google Scholar] [CrossRef]
  37. Ab Wahab, M.N.; Nazir, A.; Ren, A.T.; Noor, M.H.; Akbar, M.F.; Mohamed, A.S. Efficientnet-lite and hybrid CNN-KNN implementation for facial expression recognition on raspberry pi. IEEE Access 2021, 9, 134065–134080. [Google Scholar] [CrossRef]
  38. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Wang, R. Efficient KNN classification with different numbers of nearest neighbors. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1774–1785. [Google Scholar] [CrossRef] [PubMed]
  39. Su, J.; Wang, M.; Wu, Z.; Chen, Q. Fast plant leaf recognition using improved multiscale triangle representation and KNN for optimization. IEEE Access 2020, 8, 208753–208766. [Google Scholar] [CrossRef]
  40. Liu, L.; Su, J.; Liu, X.; Chen, R.; Huang, K.; Deng, R.H.; Wang, X. Toward highly secure yet efficient KNN classification scheme on outsourced cloud data. IEEE Internet Things J. 2019, 6, 9841–9852. [Google Scholar] [CrossRef]
  41. Li, C.; Liu, M.; Cai, J.; Yu, Y.; Wang, H. Topic detection and tracking based on windowed dbscan and parallel KNN. IEEE Access 2021, 9, 3858–3870. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the extraction of fast feature points.
Figure 2. Schematic diagram of the descriptor calculation.
Figure 3. ORB feature extraction and the matching flowchart.
Figure 4. In the upper left is a histogram of overexposed images; in the upper right is a histogram of an overexposed image after adaptive histogram equalization; in the bottom left is a histogram of an underexposed image; in the bottom right is a histogram of an underexposed image after adaptive histogram equalization.
Figure 5. Traditional ORB feature matching (left) and improved ORB feature matching (right).
Figure 6. Raw images of the two targets under overexposure conditions.
Figure 7. Feature points extracted by the traditional ORB algorithm under overexposure conditions.
Figure 8. Feature matching image extracted by the traditional ORB algorithm under overexposure conditions.
Figure 9. Feature points extracted by the improved ORB algorithm under overexposure conditions.
Figure 10. The improved ORB algorithm is used to extract feature matching images under overexposure conditions.
Figure 11. Raw images of the two targets under underexposed conditions.
Figure 12. Feature points extracted by the traditional ORB algorithm under underexposed conditions.
Figure 13. Feature matching image extracted by the traditional ORB algorithm under underexposed conditions.
Figure 14. Feature points extracted by the improved ORB algorithm under underexposure conditions.
Figure 15. The improved ORB algorithm is used to extract feature matching images under underexposure conditions.
Table 1. Matching data of overexposed images.

Method                     | Feature Points of the Left Graph | Feature Points of the Right Graph | Matching Points | Running Time (ms)
Traditional ORB algorithm  | 3067                             | 3112                              | 734             | 304
SIFT algorithm             | 722                              | 650                               | 297             | 109
Improved ORB algorithm     | 5468                             | 5519                              | 1066            | 372
Table 2. Matching data of underexposed images.

Method                     | Feature Points of the Left Graph | Feature Points of the Right Graph | Matching Points | Running Time (ms)
Traditional ORB algorithm  | 2475                             | 2315                              | 210             | 273
SIFT algorithm             | 376                              | 346                               | 165             | 87
Improved ORB algorithm     | 3130                             | 2866                              | 282             | 295
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
