Application of Machine Vision Recognition System in Mobile Robot

To address the problem of autonomous recognition for a hexapod robot and advance its intelligent, human-friendly development, this work takes OpenMV as the main platform, a hexapod robot as the machine carrier, Python as the primary development language, and C as the auxiliary development language, combined with appropriate image processing techniques. A simple visual recognition system based on OpenMV is designed to realize practical visual recognition.


Introduction
With the growing popularity of robots, related technologies are developing rapidly; among them, visual information processing is one of the key technologies in mobile robot research. At present, visual information processing mainly covers compression and filtering of visual information, road detection and obstacle detection, recognition of specific traffic road signs, and 3D information perception and processing [1][2][3][4]. The acquisition of visual information is the basis of robot planning and navigation, and also determines whether the robot correctly identifies the current road environment. Effective visual information allows the robot to plan its route correctly and lays the foundation for normal walking [5]. Visual information filtering is a basic visual processing step that provides information feedback for other visual functions, enabling them to be realized [6].
Therefore, applying a visual recognition system to hexapod robot research provides a new way of thinking about realizing intelligent, human-friendly robots. Strengthening the application of visual recognition enables robots to make autonomous decisions under changing environmental conditions, so that they can better serve society and humanity and promote the progress of science and technology.

Basic theory of visual recognition
Simply put, visual recognition is the processing and analysis of images acquired by a camera. A camera is essentially a device that converts optical signals into electrical signals [7]. The principle of the RGB primary colors arises not from physics but from physiology: the human eye contains cone photoreceptors that are most sensitive to yellow-green, green, and blue-violet light.

A camera sensor is composed of many photosensitive points. For example, a sensor with 320×240 points produces an image in which each point is one pixel; all the pixels together form a 320×240 image, as shown in Figure 2. The frame rate (FPS) is the number of images processed per second; above about 20 frames per second, the human eye can hardly perceive any lag.

The Lab color space is a color model developed by the International Commission on Illumination (CIE). In Lab, L represents lightness, while A and B represent two color channels. L ranges from 0 to 100, and A and B range from -128 to +127. The positive end of A is red and its negative end is green; the positive end of B is yellow and its negative end is blue. The L component can therefore adjust brightness, and modifying the A and B components achieves accurate color balance.

Physically, colors are electromagnetic waves of different wavelengths, but color models such as RGB and Lab are used to describe the colors that the human eye can recognize. Figure 3 shows the spectrum of visible light. With this basic knowledge of images, the images can be identified and analyzed.
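The Lab ranges above can be made concrete with a small desktop sketch. The following plain-Python function tests whether a single (L, A, B) pixel falls inside a threshold box given in the OpenMV tuple convention (L_min, L_max, A_min, A_max, B_min, B_max); the sample threshold values are illustrative, not calibrated.

```python
# Illustrative, uncalibrated threshold for a reddish color
# in the (L_min, L_max, A_min, A_max, B_min, B_max) convention.
RED_THRESHOLD = (30, 70, 20, 127, 0, 60)

def in_threshold(pixel, threshold):
    """Return True if a (L, A, B) pixel lies inside the threshold box."""
    l, a, b = pixel
    l_min, l_max, a_min, a_max, b_min, b_max = threshold
    return (l_min <= l <= l_max and
            a_min <= a <= a_max and
            b_min <= b <= b_max)

print(in_threshold((50, 60, 30), RED_THRESHOLD))   # a reddish pixel
print(in_threshold((50, -40, 30), RED_THRESHOLD))  # a greenish pixel (A < 0)
```

Scanning every pixel of a frame with such a test is the essence of the color recognition described below.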

Color recognition
Color recognition is the most basic function of OpenMV [8], and its core is the selection of a color threshold. Different colors have different thresholds, and OpenMV recognizes a given color by testing pixels against its threshold. A color threshold first defines a target color as a range of values; image analysis then compares each acquired pixel with the preset threshold to determine whether it belongs to the set color [9]. The narrower the threshold range, the fewer colors fall within it. For example, to find the red threshold, the following operations can be carried out; the red threshold is shown in Figure 4.

Template matching
Template matching uses the NCC (normalized cross-correlation) algorithm, which can only match patterns whose size and angle are basically consistent with the template image. This is a significant limitation: if the target pattern in the field of view is slightly larger or smaller than the template image, matching may fail.
Template matching is applicable when the distance between the camera and the target object is fixed and no dynamic movement is required [8]. For example, it is suitable for detecting specific objects on an assembly line, but not for a vehicle tracking a moving volleyball, because the distance between the moving ball and the camera changes dynamically, so the apparent size of the ball will not match the template picture. For multi-angle and multi-size matching, multiple templates can be saved and multi-template matching used.
Comparing the similarity of two images with the NCC algorithm is a common image processing method. The NCC value ranges over [-1, 1]: if a region of the image exactly matches the template, its NCC value is 1, indicating high correlation; a value of -1 means no correlation at all. NCC is a matching algorithm based on a similarity measure and has been widely adopted in the industrial field.
The calculation formula of the NCC algorithm, in its standard form, is as follows:

NCC(u, v) = Σ_{x,y} [I(u+x, v+y) − Ī][T(x, y) − T̄] / √( Σ_{x,y} [I(u+x, v+y) − Ī]² · Σ_{x,y} [T(x, y) − T̄]² )    (1)

where I is the target image, T is the template image of size M × N, the sums run over x = 0…M−1 and y = 0…N−1, and Ī and T̄ are the mean intensities of the image window at (u, v) and of the template, respectively. By computing NCC between the template and each image window, we can determine whether and where the template matches.
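The formula can be checked with a small pure-Python sketch. The function below computes NCC between two equal-sized grayscale patches given as flat lists of intensities; identical patches score 1 and perfectly anti-correlated patches score -1, matching the value range described above.

```python
import math

def ncc(window, template):
    """Normalized cross-correlation between two equal-sized
    grayscale patches, given as flat lists of intensities."""
    n = len(window)
    mean_w = sum(window) / n
    mean_t = sum(template) / n
    num = sum((w - mean_w) * (t - mean_t) for w, t in zip(window, template))
    den = math.sqrt(sum((w - mean_w) ** 2 for w in window) *
                    sum((t - mean_t) ** 2 for t in template))
    return num / den

patch = [10, 20, 30, 40]
print(ncc(patch, patch))             # identical patches -> 1.0
print(ncc(patch, [40, 30, 20, 10]))  # reversed patch   -> -1.0
```

In practice the template is slid over the image and NCC is evaluated at each position; the position with the highest score is taken as the match.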

Feature point detection
In practical applications, recognizing a single image at a fixed distance is relatively simple, while dynamic recognition is more common. For this situation, the concept of feature point detection is introduced. Feature point detection is no longer limited to recognizing objects at a specific distance and size, which makes it possible to accurately identify dynamic objects: an algorithm compares the proportions of objects in the image to realize recognition. The FAST algorithm is widely used in feature point detection, and it is the algorithm adopted in this design.
FAST (Features from Accelerated Segment Test) can determine whether a point is a keypoint by comparing only a few pixels. The algorithm examines a circle of pixels around a candidate point and judges whether the point is critical from the pixel values on that circle: if there is a contiguous arc longer than 3/4 of the circumference on which every pixel is significantly brighter or significantly darker than the center of the circle, the point is considered a keypoint. Because the algorithm detects keypoints very quickly, it is suitable for applications where speed is a priority, such as real-time visual tracking and target recognition.
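The segment test described above can be sketched in a few lines. The function below takes the 16 intensities sampled on a Bresenham circle around a candidate pixel and reports whether a contiguous arc of at least n pixels is all brighter or all darker than the center by a threshold t; n = 12 corresponds to the 3/4-of-circumference criterion. It is a simplified illustration, not the optimized implementation used on the camera.

```python
def is_fast_corner(center, circle, t=20, n=12):
    """Simplified FAST segment test. `circle` holds the 16 pixel
    intensities sampled on a circle around the candidate point.
    Returns True if a contiguous arc of at least `n` pixels is all
    brighter than center + t or all darker than center - t."""
    brighter = [p > center + t for p in circle]
    darker = [p < center - t for p in circle]
    for flags in (brighter, darker):
        run = 0
        for f in flags + flags:  # sequence doubled to handle wrap-around
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

# A bright corner: 12 consecutive circle pixels much brighter than center.
print(is_fast_corner(100, [200] * 12 + [100] * 4))  # corner
print(is_fast_corner(100, [100] * 16))              # uniform patch
```

Because only a handful of comparisons are needed to reject most candidates, the test runs fast enough for real-time tracking.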

Visual recognition experiment
To test the visual recognition function, the integrity of the program is first checked for errors, and then the program is run. Taking a red object as an example, this paper first finds the red threshold value and fills it into the program. The object is then placed not far from the camera and the program is run; the computer terminal displays the camera image. Figure 5 shows the positions of the camera and the object from a third-person perspective. Figure 6 shows the recognition picture from the OpenMV camera, in which the detected red object is marked with a rectangular box, reflecting the camera's own viewpoint. Analysis of the experimental conditions in Fig. 5 and Fig. 6 shows that the hexapod robot designed in this paper has good accuracy in object recognition.
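The rectangular marking seen in Figure 6 can be illustrated with a desktop sketch: scan a small Lab image, collect the pixels inside a red threshold, and return the bounding rectangle that would be drawn around the detected object. The threshold values are illustrative, not the calibrated ones used in the experiment.

```python
RED_THRESHOLD = (30, 70, 20, 127, 0, 60)  # (L_min, L_max, A_min, A_max, B_min, B_max)

def find_blob_rect(img, threshold):
    """img is a 2D list of (L, A, B) pixels. Returns (x, y, w, h) of the
    axis-aligned box around matching pixels, or None if nothing matches."""
    l_lo, l_hi, a_lo, a_hi, b_lo, b_hi = threshold
    xs, ys = [], []
    for y, row in enumerate(img):
        for x, (l, a, b) in enumerate(row):
            if l_lo <= l <= l_hi and a_lo <= a <= a_hi and b_lo <= b <= b_hi:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

# 4x4 test image: a 2x2 "red" patch in the lower-right corner.
red, grey = (50, 60, 30), (50, 0, 0)
img = [[grey] * 4 for _ in range(4)]
for y in (2, 3):
    for x in (2, 3):
        img[y][x] = red
print(find_blob_rect(img, RED_THRESHOLD))  # (2, 2, 2, 2)
```

On the real camera, the blob's rectangle also supplies the pixel width used for ranging in the next experiment.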

Vision ranging experiment
To test the visual ranging function, the correctness of the program is also checked first. Taking the same red object as an example, the author first places the object at a known distance and runs the program. The computer then marks the red object and outputs the measured distance, as shown in Figure 7; the position of the physical setup is shown in Figure 8. The red object is then moved several times, and the measured data are compared with the actual distances.
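The paper does not detail its ranging method. One common monocular approach on a single camera, assuming the object's real size is fixed, treats distance as inversely proportional to the object's apparent pixel width; the calibration values below are assumptions for illustration.

```python
# Hedged sketch of monocular ranging by apparent size: distance is
# taken to be inversely proportional to the blob's pixel width.

def calibrate(known_distance_cm, measured_pixel_width):
    """Compute the calibration constant K = distance * pixel width."""
    return known_distance_cm * measured_pixel_width

def estimate_distance(k, pixel_width):
    """Estimate distance from the blob's current pixel width."""
    return k / pixel_width

# Assumed calibration: the red object appears 40 px wide at 30 cm.
K = calibrate(30.0, 40)
print(estimate_distance(K, 40))  # 30.0 cm (at the calibration point)
print(estimate_distance(K, 20))  # 60.0 cm (half the width -> twice as far)
```

Repeating the measurement at several known distances, as in the experiment, checks how well this inverse-proportional model holds.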

Target tracking experiment
Unlike the other functions, the target tracking experiment requires the STM32 microcontroller, and the functions described above serve as its basis. The results of image analysis are sent to the STM32 through a serial port, and the microcontroller then controls the servo board to achieve target tracking. Again using the red object, the code is run, the object is placed in front of the camera, and distance thresholds are set. When the measured distance is greater than the maximum threshold, the robot moves toward the object; when it is less than the minimum threshold, the robot moves backward; and when the set object cannot be found in the camera's field of view, the robot turns in place to search for the marked object. The specific effect is shown in Figures 9 and 10. The stability and accuracy of the detection function were tested several times. As can be seen from Fig. 9 and Fig. 10, with the help of the hardware board design and the software functions, the target tracking effect of the designed robot is very clear.
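The tracking decision described above reduces to a small state function. The sketch below maps a measured distance (or a lost target) to a motion command; the threshold values and command names are illustrative, and on the real robot the chosen command would be sent to the STM32 over the serial port.

```python
# Illustrative distance band for the tracking experiment (assumed values).
MAX_DIST_CM = 40.0
MIN_DIST_CM = 20.0

def tracking_command(distance_cm):
    """Map a measured distance (or None if the object is lost)
    to a motion command for the hexapod."""
    if distance_cm is None:
        return "turn_in_place"   # object lost: rotate to search for it
    if distance_cm > MAX_DIST_CM:
        return "forward"         # too far: approach the object
    if distance_cm < MIN_DIST_CM:
        return "backward"        # too close: back away
    return "stop"                # within the target band

print(tracking_command(55.0))  # forward
print(tracking_command(10.0))  # backward
print(tracking_command(None))  # turn_in_place
```

Running this decision on every frame, with the ranging output as input, yields the follow/retreat/search behavior observed in the experiment.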

Conclusion
This paper takes the hexapod robot as the research object, describes the development of a simple visual recognition system on the OpenMV platform, and deploys the system on the hexapod robot, realizing visual recognition and target tracking. This research and application of robot recognition is of practical significance for the real-world walking of hexapod robots. However, how to adapt target tracking to multiple environments and multiple viewing angles remains the goal of the next stage of research.