Visual high-precision detection method for tool damage based on visual feature migration and cutting edge reconstruction

Current tool damage detection systems struggle to automatically identify the location of tool damage and to accurately measure its extent from collected tool images. To address this problem, a high-precision visual detection method for tool damage based on visual feature migration and cutting edge reconstruction is proposed. The method divides tool damage into a wear area and a breakage area, and extracts the wear and breakage image information based on visual feature migration and cutting edge reconstruction, respectively. Image fusion then combines the wear and breakage features to obtain the complete damaged area. Finally, the damage location is identified, and the geometric characteristics of the damage are accurately measured from the identified damage information. An offline detection experiment platform was built to verify the effectiveness of the proposed method. Experiments show that the proposed method can automatically identify the tool damage location and accurately measure the amount of damage from the tool image, with good environmental adaptability and stability. Compared with existing methods such as the local variance method and the adaptive threshold method, the average accuracy of tool damage geometric feature measurement is improved by at least 20%, a clear advantage.


Introduction
As key components in the machining process, cutting tools are prone to damage under the combined action of cutting force and cutting heat [1]. Most operators judge by experience whether the tool is damaged and whether the degree of damage affects machining accuracy [2]. Such judgment easily causes a tool to be replaced before the end of its service life, wasting tools, or to be replaced only after excessive wear, resulting in scrapped products, machine failure, and even casualties [3]. To reduce manufacturing costs, reduce harm to the manufacturing environment, and ensure product quality and the normal, efficient operation of the manufacturing system, the tool damage state must be monitored efficiently and precisely [4,5].
At present, many scholars have studied tool condition monitoring. According to the monitoring means, methods are divided into direct measurement and indirect measurement. Indirect detection relies on signals related to tool damage, such as cutting force [6-8], acoustic emission [9,10], current [11,12], vibration [13], and temperature [14]. However, sensor-signal-based monitoring has shortcomings. First, extracting damage-related features from the signal is labor-intensive and requires comprehensive professional knowledge. Second, a prediction model based on multi-sensor signals is a mapping fixed to a specific machine type and set of processing parameters; when the NC program is modified or a different tool is used, the model no longer applies.
The direct detection method obtains tool damage information by machine vision. There are three kinds of vision-based tool damage monitoring: one based on the tool surface image, one based on the surface texture of the machined workpiece, and one based on chip shape. Dutta et al. used image texture analysis to monitor the condition of high-speed steel milling cutters and cemented carbide coated milling inserts [15]. Datta et al. analyzed the surface texture of the machined workpiece with the gray-level co-occurrence matrix to monitor tool condition [16]. Xu et al. used the surface texture features of a machined workpiece to identify and diagnose the tool wear state [17]. Zhang et al. proposed monitoring tool condition in ultra-precision milling by analyzing the morphology change of cutting chips [18]. C.L. Pu studied the effect of dynamic recrystallization at the tool-chip interface on tool wear in high-speed cutting of AISI 1045 steel [19]. However, monitoring tool damage based on workpiece texture and chip shape has only been proven feasible in theory [20]. Because cutting parameters influence the workpiece surface texture, and because chip generation is random and chip deformation complex, these methods are difficult to apply in visual monitoring [21]. The monitoring method based on the tool surface image can directly extract the tool damage area and calculate its geometric characteristics, and therefore has good application prospects; the key is segmenting the damage area from the tool image through image processing. Some scholars use Otsu, the mean iteration method, and morphological operations to binarize the tool image globally and segment the worn from the unworn area [22][23][24].
However, because of the complexity of the tool surface texture, the segmented damage area usually contains much background information; undamaged regions are easily misjudged as damaged, and detection accuracy is hard to guarantee. Peng et al. [25] used the region growing method to segment the image, but because the gray values of the damaged area fluctuate, this method cannot extract the damaged area accurately. Dai et al. [26] enhanced the features of the damage area by exponential image transformation and extracted it with a manually specified segmentation threshold; this is inefficient and requires many attempts to find the optimal threshold. Qin et al. [27] proposed a local variance algorithm for boundary extraction that can clearly segment the wear area from the image; however, the method is sensitive to regions with large gray fluctuation, and the extracted damage region usually contains irrelevant boundary information, resulting in large detection errors. Zhang et al. [28] found the wear boundary pixels by column search and reconstructed the upper wear boundary to obtain the maximum wear width; however, column search can only locate the column of maximum wear, and cannot extract the complete damage area for multi-index evaluation. Fernandez et al. [29] delimited the wear area and extracted it with morphological operations and the K-means algorithm; however, this requires a data set for learning and training, giving low detection efficiency and poor applicability. In addition, the above extraction methods consider only the wear area and ignore tool breakage, so their detection results contain errors.
Some scholars determine whether the tool is damaged by computing the difference between a new tool and the used tool with shape matching [30,31]. However, this requires collecting new-tool images under acquisition conditions identical to those of the damaged-tool images, placing high demands on the acquisition system; moreover, it can only detect tool breakage, not tool wear. María et al. [32] analyzed the texture characteristics of the damaged area, classified damaged tools as repairable or scrapped based on expert knowledge, established a tool damage image data set, and used an SVM to automatically classify the tool damage state, reaching an accuracy of 90.26%. However, this is a qualitative analysis that only identifies the damage state of the tool. Research on quantitative analysis of tool damage, achieving accurate measurement of the damage from tool images, is urgently needed.
Therefore, to address the difficulty that current tool damage detection systems have in automatically identifying the damage location and accurately measuring the amount of damage from collected tool images, this paper takes the tool surface image as the research object and, after a comprehensive analysis of its gray-value distribution, proposes a high-precision visual detection method for tool damage based on visual feature migration and cutting edge reconstruction. The tool damage area is divided into a wear area and a breakage area, which are extracted separately, and the geometric characteristics of the union of the two areas serve as the criterion for judging the degree of tool damage. Comparison with optical microscopy shows that the method has high accuracy.

Tool damage image collection system
Machine vision measurement is based on image gray information. To ensure that the tool damage characteristics identified from the image are reliable, the inspected tool end face must be completely parallel to the end face of the camera lens. This paper therefore uses the tool damage detection system shown in Fig. 1 to study automatic tool damage recognition. The system mainly comprises a CCD camera, a telecentric lens, a ring light, and a camera bracket. The vertical rod of the bracket is fixed perpendicular to the surface of the horizontal platform, the beam is mounted perpendicular to the rod, the CCD camera is fixed to the bracket through the circular hole in the beam, the telecentric lens is connected to the camera, and the ring light source is placed in front of the lens to provide good lighting conditions. During inspection, the tool is cleaned and stood vertically on the horizontal inspection table; the tool position and the distance between the lens front and the tool are then adjusted so that a clear image of the tool damage appears in the camera's field of view. Finally, the CCD camera captures the image of the tool's flank face and transmits it to the computer over Gigabit Ethernet for subsequent processing. The system parameters are listed in Table 1. Figure 2 shows the image of a single cutting edge of a milling cutter collected by the system. To make the image clear and its features distinct, the image is converted to grayscale; Fig. 3 shows the grayscale image. To reduce the interference of environmental noise, a 3 × 3 filter is applied to the grayscale tool image, minimizing the impact of noise while preserving image detail.
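The acquisition pipeline above ends with grayscale conversion and 3 × 3 filtering. The paper does not specify which 3 × 3 filter is used, so the following minimal NumPy sketch assumes a median filter (a common choice for suppressing sensor noise while keeping edge detail); it is an illustration, not the authors' implementation:

```python
import numpy as np

def to_gray(bgr: np.ndarray) -> np.ndarray:
    # Standard luminance weights; B, G, R channel order is assumed.
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

def median3x3(img: np.ndarray) -> np.ndarray:
    # 3x3 median filter with edge replication: a stand-in for the
    # unspecified "3x3 filtering algorithm" in the text.
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0).astype(np.uint8)
```

A single impulse (salt noise) pixel is removed by the median while flat regions are left untouched, which matches the stated goal of reducing noise while keeping image detail.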
The image after noise reduction is shown in Fig. 4. Analysis of the denoised and filtered tool image shows that the tool damage image is composed of four parts: background area, wear area, breakage area, and undamaged area, as shown in Fig. 5. The wear area is formed by the sharp friction between the tool and the workpiece. The breakage area is where micro chipping and material loss occur at the edge and tip of the tool; it merges with the background. To analyze the gray-value distribution of each region, an image coordinate system is established with the upper left corner of the filtered image as the origin, the X-axis pointing horizontally right, and the Y-axis pointing vertically down, as shown in Fig. 6. The coordinates of any pixel in the tool image can then be expressed as (x_i, y_i), and the corresponding gray value as f(x_i, y_i). In the image coordinate system, three vertical marking lines are drawn along the Y-axis, each passing through the background area, the wear area, and the undamaged area. Taking the position along the marking line as the abscissa and the gray value on the line as the ordinate, the gray distribution curves of the pixels on the three marking lines are drawn, as shown in Fig. 7. The curves show that the gray value of the tool damage image obeys the following distribution law:

Image analysis
(1) The gray values of the pixels in the wear area, the background area, and the undamaged area differ considerably: gray value of the background area < gray value of the undamaged area < gray value of the wear area. The breakage area merges with the background area, so its pixel gray values equal those of the background.
(2) Although the gray values of adjacent pixels within each region fluctuate slightly, the overall distribution is relatively stable, and the gray values of pixels near the edge of the wear area change abruptly.

Based on the above analysis, the tool damage image can be represented by Formula (1):

f(x_i, y_i) = g(x_i, y_i) ∈ [0, L_1] (background area); h(x_i, y_i) ∈ [L_2, L_3] (undamaged area); k(x_i, y_i) ∈ [L_4, 255] (wear area); p(x_i, y_i) (breakage area)    (1)

where (x_i, y_i) are the coordinates of any pixel in the tool image and f(x_i, y_i) is its gray value; g(x_i, y_i), h(x_i, y_i), k(x_i, y_i), and p(x_i, y_i) are the pixel gray values of the background, undamaged, wear, and breakage regions, respectively. Because the breakage area merges with the background area, p(x_i, y_i) ∈ [0, L_1].
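The piecewise gray-value model of Formula (1) can be illustrated with a short sketch. The thresholds L1, L2, L3, and L4 below are hypothetical values chosen for illustration; in the paper they follow from the measured gray-distribution curves:

```python
import numpy as np

# Illustrative thresholds L1 < L2 <= L3 < L4 (hypothetical values).
L1, L2, L3, L4 = 60, 90, 160, 200

def label_regions(gray: np.ndarray) -> np.ndarray:
    """Label pixels per Formula (1):
    0 = background/breakage (gray in [0, L1]),
    1 = undamaged           (gray in [L2, L3]),
    2 = wear                (gray in [L4, 255])."""
    labels = np.zeros(gray.shape, dtype=np.uint8)   # default: background
    labels[(gray >= L2) & (gray <= L3)] = 1
    labels[gray >= L4] = 2
    return labels
```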

Visual feature extraction and measurement of tool damage
Based on the analysis of the tool image, a visual detection method for tool damage based on visual feature migration and cutting edge reconstruction is proposed, as shown in Fig. 8. Firstly, the tool image is rotated and positioned; the tool damage area is then divided into a wear area and a breakage area, which are extracted separately based on the gray-value distribution of the image; finally, the geometric features of the union of the two regions serve as the criterion for judging the tool damage state.

Tool rotation positioning correction
For tool damage images collected on the machining site, the cutting edge of the tool has a certain angle with the horizontal direction of the image, as shown in Fig. 9. If the tool image is not corrected, subsequent damage measurement becomes more difficult, so the image must be rotated for positioning correction. Firstly, the inclination angle θ between the cutting edge and the horizontal direction of the image is determined: as shown in Fig. 10, the undamaged cutting edge of the tool is selected; after binarization and Canny edge detection of the selected area, the edge line is obtained by Hough line detection [33]. Hough detection returns many lines; Fig. 11d shows the Hough line detection image. To obtain the best rotary positioning, the multiple detected angles are averaged, and the inclination angle θ between the cutting edge and the horizontal direction is calculated according to Formula (2). Finally, the cutting edge is rotated to the horizontal through the angle θ, and the tool image is cropped to retain the damage area, as shown in Fig. 12, completing the positioning. Figure 11b and c show the binary image of the cutting edge and the Canny edge detection image, respectively.
θ = (1/C) Σ_{t=1}^{C} θ_t    (2)

where θ is the inclination angle of the cutting edge, θ_t is the t-th angle obtained by Hough line detection, and C is the number of detected angles. Figure 13 shows the tool damage image after rotation correction. In the following, the wear area and the breakage area are extracted based on the gray-value distribution of the image.
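A minimal sketch of the rotation correction, assuming the Hough angles have already been detected. The averaging implements Formula (2); the nearest-neighbour rotation is a simple stand-in for whatever interpolation the actual system uses:

```python
import numpy as np

def inclination_angle(hough_angles_deg):
    # Formula (2): average the C angles returned by Hough line detection.
    return sum(hough_angles_deg) / len(hough_angles_deg)

def rotate_level(img: np.ndarray, theta_deg: float) -> np.ndarray:
    # Nearest-neighbour rotation about the image centre (inverse mapping),
    # used to bring the detected cutting edge back to horizontal.
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(theta_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    yr = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    xi = np.clip(np.rint(xr).astype(int), 0, w - 1)
    yi = np.clip(np.rint(yr).astype(int), 0, h - 1)
    return img[yi, xi]
```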

Wear area extraction
The wear area is small, its features are indistinct, its gray values fluctuate strongly, and its texture is complex, making it difficult to extract. This paper proposes a new wear-area extraction method based on visual feature migration. Figure 14 shows the process, which comprises three steps: target fusion, target migration, and image difference.

Target fusion merges the wear area and the undamaged area of the tool into one target, leaving the background as a single region: the gray values of the pixels in the undamaged and wear areas are set to 255 (displayed as white) and those of the background area to 0 (displayed as black). Because the background and undamaged areas occupy similar proportions of the image while the wear area differs from both, the Otsu method, whose binarization principle is given in Formulas (4)-(11), is used for this step. Figure 15 shows the tool image after target fusion.

Target migration moves the wear area into the background area, merging the two into one region and leaving the undamaged area as a single target. From the analysis in Section 3.2, the gray values satisfy background < undamaged < wear, and the gap between the wear and background gray values is large, so the wear area cannot be migrated directly; preprocessing is needed to bring the wear-area pixels close to the background values. To this end, the background pixels of the tool image are set to 255 (white) and the pixels of the undamaged and wear areas to 0 (black), giving the tool image with a white background shown in Fig. 16. Then, the rotation-corrected tool image is multiplied by the weight coefficient ξ so that the gray values of the wear-area pixels approach 255 (white), as shown in Fig. 17. Pixel-level fusion of Figs. 16 and 17 then yields a preliminary image containing only the undamaged area, as shown in Fig. 18. After this preprocessing, the fused image is binarized again by Otsu: the undamaged area is set to 255 (white), and the wear and background areas to 0 (black), as shown in Fig. 19. The target migration is thus completed. To determine the weight coefficient ξ, the filtered tool image is scanned for its maximum pixel value, recorded as λ, and ξ is computed according to Formula (3):

ξ = 255 / λ    (3)
Image difference subtracts the tool image after target migration from the tool image after target fusion to extract the wear area. However, because the texture of the wear area is complex and its gray values fluctuate strongly, the extracted wear area contains cavities, and its narrow edges show fractures and discontinuities. To obtain a complete wear area, a morphological closing operation is applied to smooth the region, connect adjacent elements, fill cavities, and close narrow fractured edges. The wear area accurately extracted from the tool image is shown in Fig. 20. In Formula (3), λ is the maximum pixel gray value in the tool image and ξ is the weight coefficient. The Otsu binarization principle is as follows: if the number of pixels with gray value i in the image is n_i and the gray values range over [0, M-1], the total number of pixels in the image is given by Formula (4).
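The three migration steps can be sketched as follows. The two thresholds t_fuse and t_mid stand in for the Otsu thresholds computed in the paper, xi is the weight coefficient of Formula (3), and the morphological closing step is omitted:

```python
import numpy as np

def extract_wear(gray: np.ndarray, t_fuse: int, t_mid: int,
                 xi: float) -> np.ndarray:
    """Wear extraction by visual feature migration (sketch).

    t_fuse and t_mid stand in for the two Otsu thresholds used in the
    paper; xi is the weight coefficient of Formula (3). Returns the wear
    mask before morphological closing."""
    # Step 1, target fusion: tool body (undamaged + wear) vs background.
    fused = gray >= t_fuse
    # Step 2, target migration: weight the image so wear pixels approach
    # 255, paint the background white, then keep only the mid-gray
    # (undamaged) area.
    weighted = np.clip(gray.astype(float) * xi, 0, 255)
    weighted[~fused] = 255.0
    migrated = weighted < t_mid          # True only in the undamaged area
    # Step 3, image difference: tool body minus undamaged area = wear.
    wear = fused & ~migrated
    return np.where(wear, 255, 0).astype(np.uint8)
```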
N = Σ_{i=0}^{M-1} n_i    (4)

where N is the total number of pixels in the image, n_i is the number of pixels with gray value i, and M is the number of gray levels, M ∈ [0, 256]. The probability of each gray value in the image is given by Formula (5).
P_i = n_i / N    (5)

where P_i is the probability of gray value i in the image, n_i is the number of pixels with gray value i, and N is the total number of pixels. The image pixels are divided by a threshold T into two regions A_1 and A_2: A_1 consists of the pixels with gray values in [0, T-1], and A_2 of those with gray values in [T, M-1]. The probabilities of regions A_1 and A_2 are given by Formulas (6) and (7), respectively.
P_1 = Σ_{i=0}^{T-1} P_i    (6)

P_2 = Σ_{i=T}^{M-1} P_i = 1 - P_1    (7)

where P_1 and P_2 are the probabilities of regions A_1 and A_2, P_i is the probability of gray value i, T is the threshold, and M is the number of gray levels, M ∈ [0, 256]. The average gray values of regions A_1 and A_2 are given by Formulas (8) and (9), respectively.
μ_1 = (Σ_{i=0}^{T-1} i P_i) / P_1    (8)

μ_2 = (Σ_{i=T}^{M-1} i P_i) / P_2    (9)

where μ_1 and μ_2 are the average gray values of regions A_1 and A_2, and P_1 and P_2 are the probabilities of regions A_1 and A_2. The average gray value μ of the whole image is given by Formula (10).
μ = P_1 μ_1 + P_2 μ_2    (10)

where μ is the average gray value of the whole image, μ_1 and μ_2 are the average gray values of regions A_1 and A_2, and P_1 and P_2 are their probabilities.
The total variance σ² of the two regions is given by Formula (11):

σ² = P_1 (μ_1 − μ)² + P_2 (μ_2 − μ)²    (11)

where σ² is the total variance of the two regions, μ_1 and μ_2 are the average gray values of regions A_1 and A_2, μ is the average gray value of the whole image, and P_1 and P_2 are the probabilities of regions A_1 and A_2. Let T take each value in [0, M-1]; the T that maximizes σ² is the optimal segmentation threshold.
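Formulas (4)-(11) together define the standard Otsu threshold search, which can be implemented directly from the histogram:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Threshold T maximizing the between-class variance of Formula (11),
    following Formulas (4)-(10)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    N = hist.sum()                                   # Formula (4)
    P = hist / N                                     # Formula (5)
    i = np.arange(256)
    best_t, best_var = 0, -1.0
    for T in range(1, 256):
        P1, P2 = P[:T].sum(), P[T:].sum()            # Formulas (6), (7)
        if P1 == 0 or P2 == 0:
            continue
        mu1 = (i[:T] * P[:T]).sum() / P1             # Formula (8)
        mu2 = (i[T:] * P[T:]).sum() / P2             # Formula (9)
        mu = P1 * mu1 + P2 * mu2                     # Formula (10)
        var = P1 * (mu1 - mu) ** 2 + P2 * (mu2 - mu) ** 2   # Formula (11)
        if var > best_var:
            best_t, best_var = T, var
    return best_t
```

On a bimodal image the returned T falls between the two modes, separating the dark background from the bright tool body.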

Breakage area extraction
It is difficult to extract the breakage area from the tool image directly because the breakage area merges with the background. This paper therefore proposes a breakage-area extraction method based on cutting edge reconstruction. Its flow chart is shown in Fig. 21; the method comprises three steps: cutting edge data acquisition, tool reconstruction, and image difference.
Cutting edge data acquisition refers to binarizing the rotation-corrected tool image with the Otsu principle of Formulas (4)-(11), setting the pixel gray value of the undamaged and wear areas of the tool to 255 and that of the background area to 0. Canny edge detection is then applied to the binary image to extract the contour of the damaged tool. Next, the undamaged upper and lower cutting edges in the tool image are selected, as shown in Fig. 24. The two edges are fitted as straight lines, and the tool tip is located at their intersection according to Formulas (16) and (17). Then, based on the fitted cutting edges, the pixels of the tool area in the image are set to 255 and displayed as white. The tool reconstruction from the damaged tool image is thus completed, and the complete tool image after cutting edge reconstruction is obtained, as shown in Fig. 28.
Image difference is to subtract the damaged binary tool image from the complete tool image after cutting edge reconstruction. Figure 29 shows the tool breakage area extracted by the method of cutting edge reconstruction.
The straight-line fitting of the cutting edge from the collected undamaged edge pixel coordinates is given by Formulas (12)-(15):

α = Σ_q (x_q − x̄)(y_q − ȳ) / Σ_q (x_q − x̄)²    (12)
β = ȳ − α x̄    (13)
x̄ = (1/Q) Σ_q x_q    (14)
ȳ = (1/Q) Σ_q y_q    (15)

where Q is the number of collected edge pixels.
where α is the slope of the fitted cutting edge straight line, β is its intercept, (x_q, y_q) are the pixel coordinates of the cutting edge, x̄ is the average abscissa of the undamaged cutting edge pixels, and ȳ is the average ordinate. The coordinates of the tool tip are given by Formulas (16) and (17).
x_0 = (β_2 − β_1) / (α_1 − α_2)    (16)

y_0 = α_1 x_0 + β_1    (17)

where β_1 = ȳ_1 − α_1 x̄_1 and β_2 = ȳ_2 − α_2 x̄_2, x̄_1 and x̄_2 are the average abscissa values of the undamaged upper and lower cutting edge pixels, ȳ_1 and ȳ_2 are the corresponding average ordinate values, α_1 and α_2 are the slopes of the fitted upper and lower cutting edges, and (x_0, y_0) are the coordinates of the tool tip. After the wear area and the breakage area have been extracted, the two areas are added to obtain the complete tool damage area, as shown in Fig. 30.
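The least-squares edge fitting and the tool tip location at the intersection of the two fitted edges (Formulas (16) and (17)) can be sketched as follows; variable names are illustrative:

```python
import numpy as np

def fit_edge(xs, ys):
    """Least-squares line y = a*x + b through undamaged edge pixels."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    xm, ym = xs.mean(), ys.mean()
    a = ((xs - xm) * (ys - ym)).sum() / ((xs - xm) ** 2).sum()
    b = ym - a * xm
    return a, b

def tool_tip(a1, b1, a2, b2):
    """Intersection of the two fitted edges: the reconstructed tool tip."""
    x0 = (b2 - b1) / (a1 - a2)
    return x0, a1 * x0 + b1
```

With both edges fitted, the reconstructed tool silhouette can be filled in and differenced against the damaged binary image, as described above, to recover the breakage area.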


Damage geometry measurement
During machining, damage occurs on the rake face, flank face, tip, and boundary of the tool, as shown in Fig. 31. Because flank damage affects machining accuracy and stability more significantly than rake face, tip, or boundary damage, tool damage assessment indicators are usually established from the flank damage in actual production. Figure 32 is a schematic diagram of the flank damage of the cutter: the maximum damage width is denoted VB_max, the average damage width VB, the damage length NB, and the damage area A_VB. The ISO 3685 standard stipulates that the average flank damage width VB can be used as an evaluation index of the tool damage state [34]. The tool damage state can therefore be judged by extracting the flank damage zone and calculating its average damage width VB. In addition, to assess the damage status more realistically and completely, this paper also uses the maximum damage width VB_max, the damage length NB, and the damage area A_VB as assessment indicators.

Pixel equivalent calibration
Image-based measurement yields tool damage in pixels, which must be converted into actual size to obtain the real damage values. In this paper, the micro calibration board shown in Fig. 33 is used as the reference standard for pixel-equivalent calibration, according to Formulas (18) and (19).
K_1 = l_N / N_1    (18)

K_2 = a_N / N_2    (19)

where K_1 is the length pixel equivalent, K_2 is the area pixel equivalent, l_N is the actual length of the micro calibration board, a_N is its actual area, N_1 is the number of pixels representing the board's length in the image, and N_2 is the number of pixels representing its area.
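Formulas (18) and (19) reduce to two divisions; a small helper makes the units explicit (mm and mm² are assumed here for the calibration board dimensions):

```python
def pixel_equivalents(l_N: float, a_N: float, N1: int, N2: int):
    """Formulas (18) and (19): length and area pixel equivalents from the
    micro calibration board."""
    K1 = l_N / N1   # length pixel equivalent, e.g. mm per pixel
    K2 = a_N / N2   # area pixel equivalent, e.g. mm^2 per pixel
    return K1, K2
```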

Tool damage calculation
As shown in Fig. 34, the minimum external rectangle of the extracted tool damage area is constructed. Because the tool image has been rotation-corrected, the horizontal side of the rectangle gives the tool damage length L and the vertical side the maximum damage width W. L and W are calculated according to Formulas (20) and (21), and the damage area S and average damage width W̄ according to Formulas (22) and (23).
L = K_1 · N_L    (20)

W = K_1 · N_W    (21)

S = K_2 · N_S    (22)

W̄ = S / L    (23)

where K_1 is the length pixel equivalent, K_2 is the area pixel equivalent, L is the actual length of the tool damage zone, N_L is the number of pixels along the horizontal side of the minimum external rectangle in the image, W is the actual maximum width of the damage zone, N_W is the number of pixels along the vertical side of the minimum external rectangle, W̄ is the average width of the damage zone, S is the actual area of the damage zone, and N_S is the total number of pixels in the extracted damage zone.
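The damage indicators can be computed directly from the extracted binary damage mask. The sketch below assumes the mask is rotation-corrected and that K2 = K1², so that S/L carries length units; it is an illustration of Formulas (20)-(23), not the authors' code:

```python
import numpy as np

def damage_metrics(mask: np.ndarray, K1: float, K2: float):
    """Damage length L, maximum width W, average width, and area S from a
    binary damage mask of a rotation-corrected tool image."""
    ys, xs = np.nonzero(mask)
    N_L = xs.max() - xs.min() + 1   # horizontal side of bounding rectangle
    N_W = ys.max() - ys.min() + 1   # vertical side of bounding rectangle
    N_S = len(xs)                   # pixel count of the damage zone
    L = K1 * N_L                    # Formula (20)
    W = K1 * N_W                    # Formula (21)
    S = K2 * N_S                    # Formula (22)
    W_avg = S / L                   # Formula (23): average width
    return L, W, W_avg, S
```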

Experimental design
To verify the effectiveness and superiority of the proposed tool damage detection method based on visual feature migration and cutting edge reconstruction, 12 end mills with different degrees of damage were collected from an industrial site, and the true damage of each tool was measured with a microscope, as shown in Fig. 35. The results detected by the proposed method were compared with the microscope measurements. Using the same experimental image samples, the proposed method was also compared with the following methods: (1) the Otsu method [22]; (2) the mean iteration method [23]; (3) the adaptive threshold method [35]; (4) the Canny edge detection method [36]; (5) the region growing method [25]; and (6) the local gray-level variance threshold algorithm [27]. It can be seen from Figs. 36, 37, 38, 39, and 40 and Tables 2, 3, 4, and 5 that the proposed method can automatically and effectively identify the complete tool damage area; all 12 tools were detected with outstanding results. The method thus solves the difficulty that current tool damage monitoring systems have in automatically identifying the damage location and accurately measuring the amount of damage from the tool image. Figure 41 compares the effects of the different visual inspection methods, and Table 6 compares the quantitative results of the 12 tools for each method. From Fig. 41 and Table 6, the Otsu method [22], the mean iteration method [23], and the region growing method [25] failed to identify the damage information of the 12 tools and performed worst. The adaptive threshold method [35], the Canny edge detection method [36], and the local variance method [27] achieved an average damage-detection accuracy of 60-80% on the 12 tools.
The proposed visual detection method achieved the highest accuracy: compared with the local variance method and the adaptive threshold method, the average accuracy of tool damage geometric feature measurement is increased by at least 20%, a clear advantage. Figure 42 shows tool damage images collected under different light intensities, and Fig. 43 shows the damage characteristics identified by the proposed method under those conditions. Under different lighting, the method accurately identifies the damaged area of the tool and maintains good results, indicating that the proposed method has good environmental adaptability and stability.

Conclusion
Tool damage detection is of great significance for ensuring machining quality, improving production efficiency, and reducing harm to the manufacturing environment. Aiming at the difficulty that current tool damage detection systems have in automatically identifying the damage location and accurately measuring the amount of damage from collected tool images, a high-precision visual detection method based on visual feature migration and cutting edge reconstruction is proposed. The method divides tool damage into a wear area and a breakage area; wear and breakage image information is extracted by visual feature migration and cutting edge reconstruction, respectively, and the complete damage area is then obtained by image fusion. Finally, the damage location is identified and the geometric characteristics of the damage are accurately measured. The average accuracy of the tool damage length, maximum damage width, average damage width, and damage area reaches 97.93%, 97.53%, 97.10%, and 96.47%, respectively, better than existing visual inspection methods. The results are of great significance for ensuring product quality and the normal, efficient operation of the manufacturing system, and can provide effective technical support for future tool wear condition monitoring. The main findings can be summarized as follows: (1) The gray-value distribution law of the tool damage image is revealed: the gray values of the wear, background, and undamaged areas differ greatly, with gray value of the background area < gray value of the undamaged area < gray value of the wear area.
The gray values of adjacent pixels within each region fluctuate slightly, but the overall distribution is relatively stable, and the gray values near the edge of the wear area change abruptly. The wear zone has two transition edges: at the upper transition edge the gray value increases from top to bottom, and at the lower transition edge it decreases from top to bottom. (2) A tool damage detection method based on image region division is proposed. Considering the characteristics of tool damage, the damage area is divided into a wear area and a breakage area, each extracted from the tool image as a separate target. The geometric features of the union of the two regions serve as the criterion for the degree of tool damage, which overcomes the shortcoming of current visual inspection methods that consider only tool wear and ignore breakage, and effectively improves detection accuracy. (3) A tool wear feature extraction method based on visual feature migration is proposed. The wear area is migrated into the undamaged area and the background area in turn, placing it in different target environments, and is then segmented and extracted by image difference. This solves the difficulty of extracting the wear area caused by its complex texture, large gray-level fluctuation, small size, and indistinct features, and the method can be extended to other small-defect detection and small-target extraction tasks. (4) A tool breakage feature extraction method based on cutting edge reconstruction is proposed. The coordinates of the undamaged upper and lower cutting edge pixels are collected, the two edge boundaries are fitted, and the breakage area of the tool is reconstructed.
The binary damaged-tool image is then subtracted from the reconstructed tool image by image difference, and the tool breakage area is extracted. This solves the current difficulty of extracting tool breakage information from the image: no new-tool image needs to be collected, the tedious template-matching detection process is avoided, and high-precision, efficient visual detection of tool breakage is achieved.