Application of Computer Vision Technology in Industrial Automation

With the continuous development of artificial intelligence and Internet of Things technology, the manufacturing industry is transforming and upgrading from automation to intelligent production, and governments continue to increase policy and financial support for intelligent manufacturing. This paper studies the application of computer vision technology in the field of industrial automation. The sparse representation algorithm is applied to category detection of workpieces with arbitrary orientation: a Gabor feature dictionary of the workpieces is established, a sparse reconstruction is solved for the feature vector of each detected workpiece image, and the category label of the workpiece is obtained from the result. For 3D point cloud reconstruction of the workpiece, feature point matching is used to realize the reconstruction. Finally, the workpiece detection method studied in this paper is implemented as a system and the performance of the algorithm is verified.


Introduction
Computer vision detection technology, based on computer vision theory, uses image processing to replace the human eye in identifying and detecting objects; it is a non-contact detection technology. The principle is to segment the target image, extract target features for analysis, and then compare the analysis results against standard requirements in multiple respects, so as to judge whether the measured target conforms to the standard. As industrial production has developed, manual manufacturing has gradually been phased out and the requirements on production precision and efficiency have grown ever higher; computer vision detection technology has therefore come into use on assembly lines for automatic inspection of product quality. The technology has many advantages, the most prominent being non-contact operation, high speed and high precision. It has been widely applied in automated production and has become one of the basic technologies of intelligent manufacturing [1]. The data processed by computer vision detection technology are the collected images. According to the preprocessing applied, images can be divided into three types: color images, grayscale images and binary images. A binary image has the simplest composition, containing only black and white; it retains less information than the other types but is well suited to product quality inspection in industrial production, offering high precision and high speed. A grayscale image retains an amount of information between that of a true-color image and a binary image; it is difficult to apply in scenes with complex surface features or poor illumination. Grayscale images are therefore seldom used to detect defects in electronic components and are better suited to detecting defects on object surfaces, such as cracks and breakage [2].
A color image can be used to analyze the color space components or shading of an object and to judge whether defects exist according to the object's color. With the continuous progress of computer technology, demand for computer vision detection technology will keep growing, so the technology has broad application prospects in the field of industrial intelligent manufacturing.
Russia, Germany, the United States and other developed countries began research on workpiece identification technology early. Since the beginning of the twentieth century, many mature techniques based on statistical matching theory have been developed. Industrial robots are the medium through which automatic workpiece recognition is applied: as industrial robot technology has matured, computer vision has been introduced into robots to give them "eyes" and improve their intelligence, and many research institutes at home and abroad have carried out extensive research in this field and achieved numerous results [3]. A research laboratory of a well-known Japanese robot manufacturer first used a camera to acquire images of a target object at multiple positions in space, obtained information about the object at those different positions, and then processed the collected information to achieve recognition of the target object.
This paper explores the problems of workpiece detection in industrial production. Organized by experimental module, it explores and implements category detection algorithms for parts in 2D and 3D scenes.

Basic Concepts of Computer Vision
In a modern society of rapid economic development, whether in the aerospace industry or in production-line manufacturing, high yield and high efficiency are important goals pursued by enterprises. Replacing classical mechanical operation with advanced technology, reducing manpower, and promoting precise machine work reflect the level of productivity society currently advocates. The development of computer vision technology, in particular, promotes the efficiency of industrial automation and the improvement of product quality. Computer vision itself is a huge system containing different organizational structures and spanning multiple intertwined disciplines; its design draws on knowledge from fields such as electronic science and neurobiology. Computer vision was born because human vision could no longer meet the needs of some industries, which require more accurate and faster system designs to complete the manufacture and inspection of products. The human eye is an important channel for perceiving and observing the world: perception means intuitively sensing the color, shape and brightness of a target object, while judgment means feeding the image signal seen by the eye back to the brain for reading and processing. In some large factories, a machine must work around the clock, because shutting down and restarting wastes considerable energy and money; human eyes, however, fatigue under long operation and cannot work like machines. The appeal of applying computer vision where the human eye can no longer cope is therefore growing ever stronger [4].
Computer vision can read and process information about physical objects through a computer system, and can monitor and evaluate them at the same time; it is widely used in the measurement of precision workpiece dimensions. From the fact that this technology serves industrial automation, one can see that it offers high efficiency, high speed, data storage, wide fields of application and other advantages. Computer vision aims to replace the human eye, so its structure is similar to that of human vision [5]. A computer vision system correspondingly includes an optical device, an image sensor, an image acquisition and processing system, computer processing and so on. The choice of light source in the optical system is the key factor in obtaining high-quality images, and selecting an appropriate light source is very important to the recognition accuracy of a computer vision system. At present, two kinds of light sources are used in computer vision technology, transmitted light and reflected light, both of which can maintain sufficient brightness during recognition. After the light source illuminates the measured object, the image sensor forms the image; clearly the camera plays a large role in image recognition, and with the continuous progress of CCD (charge-coupled device) technology, camera products have become more and more compact while their accuracy continues to improve. After imaging on the photosensitive surface of the CCD, the computer vision system must analyze and process the acquired image information in software, and the premise of software analysis is a sufficiently complete hardware infrastructure.
Computer vision transmits the information received by the image sensor to the image processing system, which performs data transmission and feature extraction on the image information, separates the target object from the background, clarifies the attributes of the detection target, and determines its size, shape or appearance. The measured values obtained by the processing system are compared with the standards set for the experiment to analyze the accuracy of the data, and subsequent processing and calculation are carried out to eliminate potential problems as early as possible [6].

Image Preprocessing
(1) Median Filtering Algorithm
Because an image is affected by the environment and by electrical-signal interference during acquisition and transmission, a median filtering algorithm is first applied to denoise the image so that it better suits detection. The principle of the median filtering algorithm is as follows: a neighborhood centered on a pixel, also known as the window (generally a square or circle), is defined and moved across the image; the gray values of the pixels inside the window are sorted, and the median is taken as the new gray value of the center pixel. Median filtering can remove coarse line noise, binary noise, impulse noise and particle noise in the image while attenuating random noise, and it does not blur boundaries, meeting the detection requirements [7]. The median is the middle value when the signal values are arranged in order of magnitude. The median filter of a one-dimensional signal is expressed by Equation (1):

y(i) = Med{ x(i−N), …, x(i), …, x(i+N) }    (1)

For a two-dimensional image signal, the median filter is defined as:

y(i, j) = Med{ x(i+r, j+s), (r, s) ∈ A }    (2)

where A is the filter window.
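The window-sorting procedure above can be sketched in a few lines of Python (a minimal illustration; names such as `median_filter_2d` are ours, and a production system would use an optimized routine such as OpenCV's `medianBlur`):

```python
from statistics import median

def median_filter_2d(img, k=3):
    """Apply a k x k median filter; border pixels keep their original value."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]
    for i in range(r, h - r):
        for j in range(r, w - r):
            # Collect the k*k window around (i, j) and take its median
            window = [img[i + di][j + dj]
                      for di in range(-r, r + 1)
                      for dj in range(-r, r + 1)]
            out[i][j] = int(median(window))
    return out

# A single impulse ("salt") noise spike in an otherwise flat image
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
denoised = median_filter_2d(noisy, k=3)  # the spike is replaced by 10
```

Note how the 255 spike is removed entirely, which a mean filter could only smear across its neighbours.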

Image Binarization
Image binarization refers to representing an image with two gray values (0 and 255) by determining an appropriate threshold [8]. The system adopts the Otsu algorithm to calculate the threshold automatically. The principle is to assume that the image consists of two parts, foreground and background, and then to use statistical methods to select the threshold that separates foreground from background to the greatest extent, i.e. the optimum under a certain criterion; in the Otsu algorithm this criterion is the maximum between-class variance, expressed as:

σB²(t) = ω0(t) · ω1(t) · [μ0(t) − μ1(t)]²

where, for a candidate threshold t, ω0 and ω1 are the proportions of background and foreground pixels and μ0 and μ1 are their mean gray levels; the optimal threshold is the t that maximizes σB²(t).
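The exhaustive search over candidate thresholds implied by this criterion can be sketched as follows (a toy illustration over an 8-level histogram; `otsu_threshold` is a hypothetical helper name, not code from the system described here):

```python
def otsu_threshold(hist):
    """Pick the threshold maximizing between-class variance.
    hist[g] = number of pixels with gray level g."""
    total = sum(hist)
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(hist[:t])          # background pixel count (levels < t)
        w1 = total - w0             # foreground pixel count (levels >= t)
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(g * hist[g] for g in range(t)) / w0
        mu1 = sum(g * hist[g] for g in range(t, len(hist))) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram over gray levels 0..7: dark cluster at 1, bright at 6
hist = [0, 10, 2, 0, 0, 2, 10, 0]
t = otsu_threshold(hist)  # lands between the two clusters
```

Using pixel counts rather than proportions for w0 and w1 only rescales the variance and does not change which threshold wins.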

Feature Extraction
(1) Gabor Characteristics
Gabor features are one way of describing image texture features. Their good scale and orientation selectivity allows them to filter out most of the influence of illumination changes [9]. At the same time, they are very sensitive to edge information in the image, which is an important reason Gabor features have become a common texture feature extraction method in the field of visual detection.
In essence, the Gabor transform is an image convolution that extracts features at different scales and orientations in the image frequency domain. At the same time, it highlights the key features of the image to a certain extent and can be considered to provide an image enhancement effect.
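As a sketch of what one filter in such a bank looks like, the real (cosine) part of a Gabor kernel can be generated directly from its standard definition: a Gaussian envelope modulating a sinusoid oriented at angle theta. Parameter values here are illustrative, not those used in the paper:

```python
import math

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel of given size (odd), scale sigma,
    orientation theta, wavelength lambd, aspect ratio gamma, phase psi."""
    r = size // 2
    kernel = []
    for y in range(-r, r + 1):
        row = []
        for x in range(-r, r + 1):
            # Rotate coordinates into the filter's orientation
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xp**2 + (gamma * yp)**2) / (2 * sigma**2))
            carrier = math.cos(2 * math.pi * xp / lambd + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# A small bank of 4 orientations at one scale, as used for texture features
bank = [gabor_kernel(7, sigma=2.0, theta=k * math.pi / 4, lambd=4.0)
        for k in range(4)]
```

Convolving the image with each kernel in the bank yields one response map per scale/orientation pair; OpenCV's `getGaborKernel` provides an equivalent, optimized construction.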

(2) Feature Extraction
For feature extraction from the workpiece image, the filtered and segmented workpiece image is convolved in the spatial domain with the established filter bank to realize its Gabor transform [10]. In this paper, the transformed workpiece image is divided into blocks and the energy of each block is computed, in order to reduce the feature dimension and the subsequent computational burden. In the experiments, the workpiece image is divided into 256 small blocks; the energy value of each block is calculated and replaces the original block data in the matrix, thereby reducing the dimension of the feature.
Finally, the resulting energy matrix is flattened into a 256×1 vector, which is stored in a matrix D as the feature vector.
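The block-energy reduction described above might be sketched as follows (sizes and the sum-of-squares energy definition are illustrative assumptions; the paper's 256 blocks correspond to a 16×16 grid):

```python
def block_energy_vector(img, blocks_per_side=16):
    """Split img into blocks_per_side^2 equal blocks and return the
    per-block energy (sum of squared responses) as a flat feature vector."""
    h, w = len(img), len(img[0])
    bh, bw = h // blocks_per_side, w // blocks_per_side
    feat = []
    for bi in range(blocks_per_side):
        for bj in range(blocks_per_side):
            # Energy of one block: sum of squared filter responses
            e = sum(img[bi * bh + i][bj * bw + j] ** 2
                    for i in range(bh) for j in range(bw))
            feat.append(e)
    return feat

# 32x32 toy "response" image -> 16x16 grid of 2x2 blocks -> 256-dim vector
img = [[1.0] * 32 for _ in range(32)]
feature = block_energy_vector(img)  # each 2x2 block of ones has energy 4.0
```

The same reduction applied to each Gabor response map keeps the feature dimension fixed at 256 regardless of image resolution.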

3D Point Cloud Reconstruction
At present, the application of robots in the production line is becoming more and more important. The reconstruction of the point cloud of the workpiece can provide more comprehensive spatial information of the workpiece, so as to enable the manipulator to operate more accurately.
Point cloud reconstruction based on binocular vision falls mainly into two categories: the feature-matching method used in this paper, and disparity-based point cloud reconstruction. To compare the two, this paper also explored 3D point cloud reconstruction based on binocular disparity. Because the workpieces are small and the production line offers little background reference, a well-organized disparity map could not be obtained. The gray value in a disparity map is related to the distance from the camera; compared with a standard disparity map, the gray value distribution in each part of the obtained map was very uneven. Therefore, for the workpieces in this paper, solving the 3D point cloud by the disparity method is not feasible.
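The relationship between disparity and camera distance noted above is the pinhole stereo model Z = f·B/d: depth is inversely proportional to disparity, so an uneven disparity map directly corrupts the recovered depths. A minimal sketch with purely illustrative parameters (not the paper's camera setup):

```python
def depth_from_disparity(d, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d, depth inversely
    proportional to disparity (d in pixels, baseline in meters)."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / d

# Illustrative numbers only: 800 px focal length, 6 cm baseline
z_near = depth_from_disparity(40, 800, 0.06)  # large disparity -> close point
z_far = depth_from_disparity(8, 800, 0.06)    # small disparity -> far point
```

Since Z ∝ 1/d, a small error in a small disparity value produces a large depth error, which is one reason textureless, small workpieces are hard to reconstruct this way.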

Workpiece Detection Algorithm
The category detection of the whole workpiece in this paper is an application of the sparse representation algorithm. The overall detection process is as follows. First, an image of the part is acquired by an externally triggered camera and transmitted to the image processing program on the host computer. The acquired image is then preprocessed: median filtering is applied first, followed by image segmentation of the filtered image, which extracts the target from the background and removes its influence on subsequent detection. Here the Otsu algorithm is used to extract the image foreground.
After the background is filtered out, a Gabor transform is applied to the image to extract Gabor features. The feature vectors are input into the part category detection algorithm based on sparse representation. Once the category has been determined, instructions are sent to the microcontroller of the lower computer, which controls the corresponding execution unit to perform the corresponding operation, achieving the sorting of the workpiece.
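As a rough illustration of the classification idea only: the full sparse representation method solves an ℓ1-regularized reconstruction over the Gabor dictionary and compares per-class residuals, whereas this greatly simplified sketch picks the single best-correlated dictionary atom (function and class names are hypothetical):

```python
import math

def classify_by_best_atom(dictionary, labels, x):
    """Simplified stand-in for sparse-representation classification:
    return the label of the dictionary atom with the highest normalized
    correlation to feature vector x."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    def norm(a):
        return math.sqrt(dot(a, a))
    best_i = max(range(len(dictionary)),
                 key=lambda i: abs(dot(dictionary[i], x)) /
                               (norm(dictionary[i]) * norm(x)))
    return labels[best_i]

# Two atoms per class in a toy 3-D feature space (real features are 256-D)
atoms = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.9, 0.2]]
labels = ["bolt", "bolt", "nut", "nut"]
label = classify_by_best_atom(atoms, labels, [0.95, 0.05, 0.0])
```

The sparse version improves on this single-atom rule by reconstructing x from a few atoms jointly and assigning the class whose atoms explain x with the smallest residual.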
When the whole algorithm was implemented on the system, many experiments were carried out on the sparse-representation-based workpiece category detection algorithm to verify its accuracy and real-time performance. Multiple groups of workpiece detections were performed: all workpiece types in the library were included, pieces were selected at random, each group was detected fifty times, and the conveyor belt speed was set to 40 cm per second.

3D Point Cloud Reconstruction
The point cloud is established as follows: a pair of images is acquired with the binocular camera unit and simple preprocessing is performed on them to filter out noise and extract the image foreground target.
Feature extraction is then carried out on the preprocessed images. The SURF algorithm is used to extract feature points from the two images, and the extracted feature points are matched. The spatial transformation matrix between the two views is estimated from the matched feature point pairs. The camera coordinate system corresponding to the first image is selected as the world coordinate system; then, according to the calculated spatial transformation matrix, the feature points are restored into this coordinate system to generate the 3D point cloud of the image.
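The descriptor-matching step can be sketched as a nearest-neighbour search with a ratio test, a common heuristic for rejecting ambiguous matches (the toy 2-D "descriptors" below stand in for the 64-dimensional descriptors SURF actually produces):

```python
import math

def match_descriptors(desc1, desc2, ratio=0.75):
    """Nearest-neighbour matching with a ratio test: accept a match only
    if the best candidate is clearly better than the second best."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    matches = []
    for i, d1 in enumerate(desc1):
        # Indices of desc2 sorted by distance to d1
        ds = sorted(range(len(desc2)), key=lambda j: dist(d1, desc2[j]))
        if len(ds) > 1 and dist(d1, desc2[ds[0]]) < ratio * dist(d1, desc2[ds[1]]):
            matches.append((i, ds[0]))
    return matches

# Toy 2-D "descriptors": each left point has one obvious counterpart
left = [[0.0, 0.0], [5.0, 5.0]]
right = [[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]]
pairs = match_descriptors(left, right)
```

The surviving pairs are what the transformation-matrix estimation consumes; in practice a robust estimator such as RANSAC is also applied to discard the remaining outlier matches.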

[Table 1. Category detection results; columns include serial number and conveyor belt speed]
Figure 1. Category detection results
As shown in Figure 1 and Table 1, the time data in the table is the total time from image acquisition to completion of detection and transmission of the instruction to the lower computer. The data show that the detection time of all three test groups is within 0.5 s, which meets the real-time requirement. The detection accuracies are 98%, 100% and 96% respectively, which demonstrates the accuracy of the algorithm. Point cloud reconstruction involves more computation and therefore takes more time than 2D detection, but it obtains more information about the workpiece.

Conclusion
With the continuous development of science and technology and the progress of society, industrial production is gradually becoming automated and intelligent; together with the continuous updating of artificial intelligence algorithms in recent years and the emergence of more intelligent products, visual detection technology is being applied more and more widely. Based on research into workpiece detection algorithms, this paper experimented with a workpiece category detection algorithm; compared with some traditional methods, the proposed algorithm is more robust and can meet the demands of production-line detection. Based on research into 3D point cloud reconstruction of the workpiece surface, point cloud reconstruction based on feature point matching was realized, obtaining spatial information about the workpiece that can provide decision-making information for more intelligent robotic manipulation of workpieces.