Abstract

Image processing and feature extraction based on visual perception of the weld pool are essential components of intelligent quality monitoring for hull-structure gas metal arc welding (GMAW). Unstable characteristics of hull-structure GMAW, such as heavy spatter, dense smoke, and strong arc light, blur the acquired images and make contour extraction of the weld pool difficult. The present study addresses these practical issues from two perspectives, i.e., a spectrum-visual-sensing acquisition system and an image-processing and feature extraction algorithm. First, by analyzing the distribution of the arc light energy and determining the optical parameters for cut-off composite dimming and near-infrared narrowband filtering, spectral sensing is employed to establish arc light radiation models and to detect the intensity of the continuous and characteristic spectral lines. An appropriate high-speed charge-coupled device (CCD) camera is then selected to build a visual acquisition system that reduces the interference of the arc light on image acquisition of the weld pool. Next, an image-processing fusion model based on spatial-information fuzzy C-means (FCM) clustering and the Sobel edge detection operator is implemented to investigate the geometric aspects of the weld pool image; clear segmentation of the region of interest, edge detection, and accurate extraction of the target contour are obtained. Subsequently, Hough transform analysis is adopted to establish the geometric feature extraction model of the weld pool, with corner detection, coordinate conversion, and camera calibration as the core technologies, and the left and right views of the image contour are calibrated to achieve lossless conversion between corner pixels and physical coordinates. Finally, three other image-processing methods are designed to compare the edge contour segmentation effect with that of the fusion model, thereby verifying the extraction accuracy of the geometric features of the weld pool. The visual-sensing system is shown to substantially diminish the interference of the arc light and smoke during image acquisition of the weld pool. The edge fusion results show that, for GMAW, the FCM-Sobel fusion method achieves superior extraction accuracy of geometric features, producing edges that are smooth, continuous, free of breakpoints, and low in noise, thereby fulfilling the engineering requirements.

1. Introduction

Welding man-hours account for approximately 30%–40% of the overall hull-structure man-hours, while welding cost accounts for roughly 30%–50% of the entire hull-structure cost [1]. Superior welding technology is therefore crucial to the shipbuilding process. At present, manual and semiautomatic welding processes are still the mainstream in the shipbuilding and repair industry of China. Welding quality is inconsistent and depends primarily on the welder's experience with on-site structures. Some welding environments are relatively harsh, which increases the labour intensity of welders, decreases production efficiency, and weakens the market competitiveness of the products [2], making it imperative to promote highly flexible automatic welding production systems and advanced intelligent welding robot technology [3]. With the development of large-scale and complex hull structures, relatively hazardous hull repair structures and other harsh welding environments must also be taken into account. If only constant automatic welding process parameters are used, it is difficult to ensure stable welding quality on on-site structures owing to random factors such as workpiece size, assembly clearance, and operator proficiency. Welding process monitoring is therefore considered the most important technical means of ensuring shipbuilding quality.

Research into the acquisition of process-sensing information is an essential aspect of intelligent monitoring of hull-structure welding. Identifying the penetration state and weld formation quality is the objective of welding process information sensing. However, welding is a complicated operation that readily produces strong arc light, smoke, high heat, and other severe interference, which makes it challenging to reliably gather data on the welding process. Consequently, it has always been one of the bottlenecks in the research of welding dynamic process quality monitoring. Owing to recent advancements in machine vision technology, visual sensing has been utilized to simulate the visual behaviour of welders, yielding intuitive and rich data. Because of the advantages it provides in real-time monitoring of the weld pool and in viewing the position of the welding wire, the acquisition and processing of welding visual image information has become one of the fundamental methods for accomplishing welding quality monitoring. This has inspired many researchers to study the acquisition of welding visual image information, such as the acquisition of the carbon dioxide (CO2) short-circuit transfer weld pool image [4, 5], acquisition of the process information for neodymium-doped yttrium aluminium garnet (Nd:YAG) solid-state laser deep-penetration welding of stainless steel materials [6], monitoring of weld pool instability and forming defects that are prone to occur during aluminium alloy tungsten inert gas (TIG) welding [7], and collection of dynamic position information of the weld pool for the GMAW process [8]. A dual-camera monitoring system for dynamic changes of the metal active gas (MAG) arc weld pool has also been constructed for GMAW welding of a single groove with gap fluctuations [9]. It is apparent that there have been advancements in machine vision-based weld pool detection over the past few years, but progress in China on image-processing knowledge models for this type of detection on ship structures has been slow, notably in the area of GMAW weld pool image data for the hull.

GMAW, one of the most prominent welding methods in use presently, usually uses CO2 as the shielding gas [10]. This method has the advantages of high production efficiency, good welding quality, low cost, and strong practicability, which jointly make it widely used in industry. However, owing to the influence of the thermophysical properties of CO2 gas, the welding process is highly prone to heavy spatter, smoke, intense arc light, and other undesirable features, posing significant challenges to visual information gathering of the weld pool. The shape, characteristics, and dynamic changes of the weld pool are important factors that determine the weld formation quality. Several papers describe different image-processing techniques for weld pools; for example, Xiao and Bing [11] applied classic methods such as Sobel, Canny, and threshold segmentation to CO2 weld image processing. Xu et al. [12] investigated the robot GMAW process and integrated multiple classical image-processing algorithms to extract weld visual image edges through restoration, filtering, edge detection, edge scanning, and curve fitting. Zhao et al. [13] proposed an edge extraction algorithm for the weld pool based on a supervised descent method to obtain high-quality welding during laser welding. Despite recent developments in image processing, it is still difficult to generalize filters or algorithms from classic image processing. In particular, for the complexity, ambiguity, and edge uncertainty of images from actual welding processes, which defy standard mathematical descriptions, there is no universal optimization algorithm.

The fuzzy theory has witnessed tremendous development in recent years, and it is currently extensively employed in a variety of contexts, including pattern recognition, weld image processing, and uncertainty analysis. This technology has achieved good application results in the automatic compilation of the assembly process of ship segmented modules [14]. Chen et al. [15] introduced the fuzzy concept into edge detection to solve the problem of defect edge detection in gray welding X-ray images. Following an analysis of the statistical properties of the gradient values of the gray image, the problem of defect edge detection in welding X-ray gray images is characterized by a membership function. This method serves as the groundwork for identifying the defect's nature and category. However, despite recent developments in identifying and categorizing weld defects, no consideration is given to the cost and time consumption involved. Another application is the improvement of the fuzzy edge detection algorithm (Pal algorithm for short). Sun [16] proposed an improved fast fuzzy edge detection algorithm to address the shortcomings of the Pal algorithm; it extracts image edges quickly and performs well, but the edge continuity is relatively poor. Zhang [17, 18] used a fusion of wavelet, morphology, and fuzzy algorithms to suppress various kinds of noise interference in MAG/TIG arc-welding visual images and then extracted the edge information of the weld pool. Although these methods have advantages in processing time, contour extraction still yields discontinuous edges for weld pool images from complex, nonlinear, and fuzzy welding processes. X-ray imaging technology gives excellent segmentation results, but it is expensive and requires additional processing time after welding is completed. The visual weld pool image features needed for dynamic welding processes are often imprecise because of the prevalence of spatter, smoke, and intense arc light during the welding of hull structures, which in turn impedes the accuracy of the information and diminishes the efficiency of welding quality monitoring. Recent research on the GMAW welding process has already proposed the application of the newest machine-learning techniques, such as support vector machines (SVM) [19], convolutional neural networks (CNN) [20], deep-learning techniques [21], and fuzzy-clustering analysis [16]. However, as shown in [21], there are still few investigations grounded in this field that exploit the full potential of these techniques.

Based on the previous analysis, the purpose of this paper is to explore the use of passive visual image sensing of weld pools for the control of welding operations, specifically in the GMAW welding process. The research focuses on combining the FCM-clustering analysis algorithm with the requirements of the actual welding process. This paper applies the FCM-clustering analysis algorithm with spatial neighborhood information to the processing of GMAW visual weld pool images to overcome the shortcomings of previous approaches and implements a weld pool image contour extraction method based on the spatial-information fusion Sobel operator. To eliminate the reliance on threshold selection and the vulnerability to arc light and noise that arise when traditional differential operators are used for image segmentation, the theoretical knowledge of the algorithm is modeled through machine learning, so as to improve the processing accuracy of visual-sensing images and then extract the necessary feature parameters of the weld pool morphology. This forges a foundation for further analysis of the dynamic behaviour of the weld pool during hull GMAW welding.

2. System Design of Image Acquisition for Ship GMAW Weld Pool

The achievement of dynamic information collection and analysis for welding process quality monitoring is the most crucial stage toward the actualization of intelligent ship-welding technology. To fulfil the aim of real-time online closed-loop management of welding quality, this article intends to correlate, quantify, and model the visual information characteristics with the weld pool state. However, appropriate equipment is still needed to collect visual weld pool data that can characterize the welding quality attributes. At present, many researchers have studied the acquisition of dynamic information in the welding process, including multisource information-monitoring systems with the visual image, arc spectrum, and weld pool oscillation as the core. Currently, there are primarily two approaches for collecting weld pool image information in vision-based monitoring of the welding dynamic process, i.e., the passive light source vision method and the active light source vision method, which vary depending on the light sources available in the experimental environment [22]. Passive vision has always been one of the research hotspots in the field of welding robot sensing technology. For instance, Xu et al. [23] proposed an enhanced Canny method that acquires the visual image of the weld pool using a passive visual-tracking technique. Passive visual sensing requires no additional light sources and provides numerous benefits and ample information. The CCD camera captures the arc radiation in the welding process or the image of the welding area under natural light conditions, which limits the use of vulnerable auxiliary equipment, reduces costs, and improves work efficiency. The detected signal, however, is susceptible to interference owing to the hostile environment of the ship-welding work site, where there are several energy-consuming devices. This study employs passive vision and the same image-sensing technique used in robot welding to demonstrate its efficacy on the massive and intricate hull structure.

2.1. Intelligent Experimental System for Multisensor Quality Monitoring

This paper takes CO2 gas-shielded GMAW, the most common welding method on ships, as an example; it offers the advantages of simple equipment, mature technology, and visual information that mimics the welder's eyes during the dynamic welding process. In order to further clarify the weld pool's visual and other features, a multi-information fusion monitoring system is created and applied to the dynamic GMAW welding process for hulls; it is mainly composed of a KUKA KR-5 six-axis welding robot body, a Pulse MIG-500RP welding machine, a welding control cabinet (teaching pendant), a wire feeder, a welding workbench (two-axis positioner as external axes), an industrial computer, an optical and sound spectrum acquisition system, and a visual sensor acquisition system. The structural block diagram of the multisensor intelligent quality inspection experiment system for the ship GMAW welding process is shown in Figure 1. During data acquisition and welding state monitoring, the computer captures real-time visual information of the welding process through a CCD camera and transmits it to the image acquisition card, which converts it into digital information and passes it to the computer, where it is displayed on the self-developed Galaxy-Mainland visual weld pool monitoring software. Weld pool image information is simultaneously saved to the computer's local storage for offline analysis.

2.2. Visual-Sensing System

The visual sensor acquisition system is the key part of the whole system platform. It is utilized to acquire visual information throughout the welding process and has a direct influence on the subsequent image-processing effect. In the experiment presented in this work, the vision-sensing system consists of an industrial camera with an external trigger function, a composite dimming and narrowband filter system, and an external trigger system.

The vision sensor and the water-cooled welding gun employed in the experiment must be mounted so that they work together and move with the welding robot, keeping their relative positions fixed during welding. During the welding process, the vision sensor is placed within a few inches of the weld pool, allowing it to capture images that closely reflect the true status of the weld pool. The visual sensor should also be small and lightweight to avoid overloading the welding robot [24]. A professional composite narrowband light-filtering system is added to filter out the arc light, which mitigates its impact on CCD image acquisition. Antispatter components are also added to prevent spatter from damaging critical components of the CCD. Figure 2 is a schematic depicting the internal structure of the visual-sensing system.

An image acquisition card, an industrial camera based on a CCD chip, and a lens jointly constitute the general components of a CO2 weld pool detection system using passive vision. The purpose of the image acquisition card is to convert the image signal captured by the CCD camera into a digital signal that can be displayed, stored, and transmitted by a computer. The industrial camera used in this study is the MER-231-41GC GigE industrial digital camera from the Beijing Daheng Company, a dependable and reasonably priced product featuring high resolution and definition as well as small size and durability. During welding, the CCD is mounted on the welding gun at an angle of roughly 45° to the surface of the test panel. Its frame rate and exposure time are set to 14 frames/s and 40 ms, respectively, and the relative position between the CCD and the welding gun remains fixed. The camera lens is a 2/3-inch megapixel, 8 mm focal length, manual-aperture compact industrial lens of the M0814-MP2 model. The main performance parameters of the CCD camera and lens are shown in Table 1.

2.3. Composite Filter System

The basic idea of the multiangle visual sensor imaging is to illuminate the weld pool area within the spectral window of the image-taking position by using the arc and the continuous spectrum near the weld pool [25]. An appropriate proportion of the light intensity of each radiation source in the weld zone is detected through the composite filtering technology, so that weld pool information, such as the geometric shape and size of the weld pool, the deviation between the weld pool center and the weld, and the arc position, can be observed directly. However, noise such as welding spatter and arc light may be incorporated into the image acquired and transmitted by the CCD camera. Because of this noise, direct extraction of the image's geometric characteristics would result in large discrepancies from the real values. Weld pool images, which are crucial to the viability of multisource sensing vision detection, require high-quality CCD images prior to image feature extraction as well as a reasonable and effective preprocessing process, which plays an essential role in minimizing interference caused by the arc light and improving the signal-to-noise ratio of these images [26].

It has been discovered that a broad variety of narrowband filters can be used at the front end of industrial CCD cameras for filtering and dimming to overcome this issue. The determination of the filter for the GMAW process of low-carbon steel is a significant challenge in welding vision sensing. In this field, Wang et al. [27] studied the visual inspection of an argon-rich shielded weld pool using a 1064 nm near-infrared narrowband filter. However, there is a downside: this band is virtually the maximum wavelength that a CCD can detect, so the camera sensitivity is limited and the image quality degrades. Wang et al. [28] measured the spectral characteristics of the GMAW arc in the range of 200–1100 nm and set the exposure time of the CCD camera to capture images in a wave band with few characteristic arc lines and a weak continuous spectrum; their simulation results reveal that the imaging quality of the weld pool is related to the exposure duration and aperture size of the camera. Gu and Li [29] investigated the characteristics of the GMAW arc spectrum under different parameters. In view of this, Liang et al. [30] made a qualitative analysis of the factors affecting weld pool imaging, such as weld pool radiation, arc radiation, and camera sensitivity, and provided a favorable theoretical underpinning for designing a reasonable filter system to extract weld pool information. Accordingly, Sun [31] analyzed the influence of the arc light on image quality in the CO2 welding process; based on the arc light reflected by the weld pool and the thermal radiation intensity of the weld pool itself, a 580 nm filter and a 0.1% attenuation sheet were selected to suppress the arc light. Zhao et al. [32] chose a combination of an 850 nm high-pass filter and a 660 nm band-pass filter to capture the weld pool image of a stainless steel GMAW plate according to the self-radiation spectrum of the weld pool and the quantum efficiency of the camera. In summary, the imaging effect of the weld pool is influenced by many factors, such as the thermal radiation of the weld pool itself, the reflected arc light, and the photosensitive characteristics of the camera. The complexity of welding and the variability of the arc discharge emission spectrum make it necessary to take appropriate light-filtering measures in the experiment. To this end, an arc spectrum acquisition system is constructed to obtain the relationship between the spectral line characteristics of the arc and the photosensitive characteristics of the CCD during the welding process, which provides technical support for the composite light-filtering technology.

In this experiment, the arc spectrum acquisition system mainly consists of a Sony CCD detector, FX2000 fiber spectrometer, and Morpho analysis software, with specific parameters shown in Table 2. It also combines the analysis of the camera spectral sensitivity and objective lens spectral transparency of the visual-sensing system to provide the optimal wavelength band for the composite filter system, as shown in Figure 3.

The arc spectrum acquisition system integrates a high-quality Richardson blazed grating with high optical resolution, a low stray-light coefficient, and a wide spectral range, as well as an ultraviolet-sensitive CCD. Real-time acquisition, processing, and accurate analysis of the GMAW arc light spectrum can be realized by installing it on the welding torch. The collected spectral information is shown in Figure 3(a). As can be seen from the acquired data, the arc radiation spectrum consists of continuous spectra and discrete characteristic spectral lines, with multiple spectral peaks appearing in the 400–620 nm band. In the red band (620–700 nm), the intensity of the continuous arc spectrum weakens and the characteristic spectral lines decrease, so a central wavelength of 660 nm is chosen as the optimal filtering wavelength. It should also be noted that spectral sensitivity and objective-lens spectral transparency matter considerably in the visual-sensing system, as they are essential physical properties of camera imaging. Sensitivity represents the ability of the CCD to realize the conversion between sensor response values and color stimulus values, while lens transparency refers to the ability of light to pass through the lens, i.e., the percentage of transmitted light flux relative to the incident light flux. Because the weld pool oscillates during GMAW welding, a long camera exposure time and aperture adjustment may cause image blurring, making it necessary to identify the spectral band that maximizes the spectral sensitivity (as shown in Figure 3(b)) and lens transparency of the camera while minimizing the interference of the arc, smoke, etc. In this case, the camera objective lens has not been specially treated, and the transmittance in the different visible-light bands is essentially the same, all at 85%. Considering that the weld pool image spectrum is close to the red band, the red curve in Figure 3(b) shows that the camera response is relatively high at a wavelength of 660 nm. On the basis of the above analysis, the selected optimal filtering wavelength of the composite filter system maximizes the efficacy of the spectral sensitivity and lens transparency for spectral and visual sensing, as shown in Figure 3(c).

Notably, Fan [25] has given relevant explanations on the selection of the optimal parameters of the filter system and pointed out that precise narrowband composite filtering must be based on a sound spectral analysis theory. Because there is no consistent radiation spectral curve for each element in the actual GMAW process, the central wavelength of the band in this interval is chosen primarily on the basis of a number of experiments, the principle of selection being to determine the optimal center wavelength. However, the influence of the spatter and arc light on the CCD image quality cannot be fully eliminated even by employing the combined visual-sensing mode of cut-off compound dimming and near-infrared narrowband filtering. Consequently, further work must be done to develop and enhance the image preprocessing algorithm for the arc-welding environment, which lays the groundwork for rapidly and precisely extracting the geometric properties of the weld pool.

2.4. Experiment Conditions

Herein, for the purposes of feature processing and geometric feature extraction, 99.9% CO2 serves as the shielding gas while a welding robot performs GMAW-CO2 gas-shielded flat butt welding of the hull structure. To ensure the stability of the welding process during the test welds, the current polarity is direct current (DC) reverse connection, and a comprehensive database of optimal welding processes can be generated by evaluating the quality of the various welds. See Table 3 for the details.

3. Knowledge Modelling for Weld Pool Image Processing

3.1. FCM-Sobel Fusion Model

Image processing based on the welding vision dynamic process relies heavily on the identification and extraction of image edges, both of which are crucial to comprehend the whole image. Edge detection technology is one of the most extensively utilized approaches in the areas of computer vision and image processing. Used extensively in processes like contour extraction, feature identification, and texture analysis, it is a crucial method of extracting features from images. For many complicated situations in image processing and application research, edge detection is an essential first step, and the quality of edge detection is largely determined by the employed algorithm.

The Sobel operator is one of the most commonly employed operators in edge detection methods [33]. Owing to its high precision in locating edges and its effectiveness for images with gradual grayscale changes or considerable noise, it has been widely employed in image edge detection in various fields. Its main idea is to perform gradient detection on each pixel in the horizontal and vertical directions of the image so as to approximate the gradient of the image brightness function.

Figure 4 depicts the horizontal and vertical gradient templates used to calculate the derivatives in the $x$ and $y$ directions of an image $f(x, y)$ of given resolution.

Its mathematical expression can be written as the following equations:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * f(x, y), \quad (1)$$

$$G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * f(x, y), \quad (2)$$

where $G_x$ and $G_y$ are the horizontal and vertical gradient approximations and $*$ denotes two-dimensional convolution.

The gradient amplitude reflects the maximum rate of intensity change at the pixel $(x, y)$, whilst its direction corresponds to the direction of the greatest intensity increase. The magnitude and direction of the gradient can be expressed by Equations (3) and (4), respectively:

$$G = \sqrt{G_x^2 + G_y^2}, \quad (3)$$

$$\theta = \arctan\!\left(\frac{G_y}{G_x}\right). \quad (4)$$

Given that both noise and edges tend to produce rapid fluctuations in the gray level, it is important to smooth the image before attempting edge extraction. If $f(x, y)$ is taken as the pixel at the center of a $3 \times 3$ area centered on coordinate $(x, y)$, the center point of the area can be assigned the average of these 9 pixels. The image-smoothing calculation can then be expressed by the following equation:

$$\bar{f}(x, y) = \frac{1}{9} \sum_{i=-1}^{1} \sum_{j=-1}^{1} f(x + i, y + j). \quad (5)$$

Equations (1) and (2) can be used to determine the gray-level gradients in the horizontal and vertical directions, respectively, while Equation (3) can be employed to obtain the amplitude of the pixel gradient. Finally, a reasonable threshold $T$ is selected and the obtained gradient amplitude is compared with $T$: if $G \ge T$, the gray values on the two sides of the pixel differ significantly and the pixel is regarded as part of the image's edge; otherwise, the pixel is not an edge point [33]. Evidently, the prerequisite is to determine an appropriate value of $T$ for the Sobel operator in edge detection. Large-scale simulations or repeated experiments, as recommended by experimenters, are crucial for achieving optimal results during the design stage of a test scheme. However, the fuzzy, edge-uncertain visual images of the welding process cannot be handled by traditional mathematics alone. The emergence of the fuzzy theory provides theoretical support to solve this problem [16].
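As an illustration of Equations (1)–(5), the following minimal Python sketch (using NumPy and SciPy) computes the smoothed gradients and thresholds the magnitude; the threshold value and test image are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of Sobel edge detection following Equations (1)-(5):
# 3x3 mean smoothing, horizontal/vertical gradients, magnitude, and a
# fixed threshold T. Threshold and test image are illustrative only.
import numpy as np
from scipy import ndimage

KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)    # horizontal template, Eq. (1)
KY = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]], dtype=float)  # vertical template, Eq. (2)

def sobel_edges(gray: np.ndarray, threshold: float) -> np.ndarray:
    """Return a binary edge map of a grayscale image (float array)."""
    smoothed = ndimage.uniform_filter(gray, size=3)       # Eq. (5), 3x3 mean
    gx = ndimage.convolve(smoothed, KX, mode="nearest")   # Eq. (1)
    gy = ndimage.convolve(smoothed, KY, mode="nearest")   # Eq. (2)
    magnitude = np.hypot(gx, gy)                          # Eq. (3)
    return magnitude >= threshold                         # edge if G >= T

# Example usage with a synthetic image; a real weld pool frame from the
# CCD acquisition would be used instead.
if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 255.0                             # bright square
    edges = sobel_edges(img, threshold=200.0)
    print("edge pixels:", int(edges.sum()))
```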

Among the fuzzy-clustering segmentation algorithms, the FCM-clustering image segmentation algorithm is the most extensively used, the most mature, and the best known [34]. FCM is a fuzzy classification algorithm that uses membership degrees to determine the degree to which each data point belongs to a certain cluster. In 1973, Bezdek [35] proposed this algorithm as an improvement of the early hard C-means (HCM) clustering method. The early HCM algorithm performs a hard division of the data, while FCM is a flexible fuzzy division method; it is currently an important method for analyzing and modeling data with the fuzzy theory. Its description of sample-category uncertainty allows a more accurate reflection of reality. The approach has both theoretical and practical significance and has been effectively applied in many domains, such as the analysis of massive amounts of data, data mining, image segmentation, and pattern recognition [36].

Assuming that the sample data set $X = \{x_1, x_2, \ldots, x_N\}$, where $N$ is the number of samples, is divided into $C$ clusters, the cluster center of each group can then be calculated. Based on its membership degree, FCM assigns each sample point a value between 0 and 1 indicating its degree of belonging to each group. Equation (6) expresses the normalization constraint of the FCM algorithm and states that the memberships of each sample over all clusters sum to 1:

$$\sum_{j=1}^{C} u_{ij} = 1, \quad i = 1, 2, \ldots, N. \quad (6)$$

Then, the FCM objective function defined by the membership function can be written as the following equation:

$$J = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} d_{ij}^{2}, \quad (7)$$

where $u_{ij}$ denotes the membership degree of sample $x_i$ in cluster $j$; $m$, the weighting index ($m > 1$); $d_{ij} = \|x_i - c_j\|$, the Euclidean distance between the cluster center and the data point; $c_j$, the cluster center; $j = 1, 2, \ldots, C$; and $C$, the number of cluster centers. The Lagrange multiplier approach is used to minimize the objective function of FCM, enabling it to converge on the best solution allowed by its constraints. Herein, the Lagrange multiplier $\lambda_i$ is introduced, and Equation (8) is then obtained:

$$L = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} d_{ij}^{2} + \sum_{i=1}^{N} \lambda_i \left( \sum_{j=1}^{C} u_{ij} - 1 \right). \quad (8)$$

Besides, the unconstrained minimization problem shown by Equation (9) is similar to the one shown by Equation (8).

Setting the derivatives of Equation (9) to 0 yields the relationships given in the following equations.

Combining Equations (10) and (11), the membership value containing the parameter $\lambda_i$ can be obtained via calculation, which can be expressed as the following equation:

From the constraint condition of Equation (6), the relation containing $\lambda_i$ can be obtained as the following equation:

Substituting Equation (13) into Equation (12) yields Equation (14), which specifies the condition under which the membership function can be updated after the elimination of $\lambda_i$:

$$u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \dfrac{d_{ij}}{d_{ik}} \right)^{2/(m-1)}}. \quad (14)$$

The first term in Equation (8) represents a vectorized expression over the sample data set $X$, which can be expanded as shown in Equation (15).

It can be observed from Equation (15) that only one term contains the cluster center $c_j$; setting the partial derivative of the objective function with respect to $c_j$ to 0 and solving for the cluster center $c_j$ gives the following equation:

The known conditions of Equation (6) can be used to derive the relationship given by the following equation:

By solving Equation (17), the necessary condition for updating the cluster center can be obtained as shown in the following equation:

$$c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} x_i}{\sum_{i=1}^{N} u_{ij}^{m}}. \quad (18)$$

When the objective function reaches its minimum value, the membership matrix is optimally adjusted, and the class label of each pixel can be obtained from the membership matrix at this time; the image segmentation result then follows. For this reason, the segmentation is performed by combining the class label and membership of each pixel neighborhood in space, giving the spatial function shown by the following equation:

where the spatial function represents the spatial neighborhood information of the pixel in the image. In this experimental testing process, the neighborhood size giving the best result is selected, and the spatial function is used as the initial value to iterate and obtain a new membership degree. Then, the membership update condition is modified as shown by the following equation:

According to Equations (18) and (20), the FCM algorithm continually optimizes and updates the calculation of the cluster centers and membership degrees, and two requirements should be satisfied for it to terminate. This means that the value of the membership function approaches 1, allowing the outcome to come arbitrarily close to the value of the objective function when the experimental conditions are fulfilled, and the update is halted when the termination requirement is met. Equation (21) represents the termination condition:

$$\max_{i,j} \left| u_{ij}^{(k+1)} - u_{ij}^{(k)} \right| \le \varepsilon, \quad (21)$$

where $k$ refers to the number of iterations and $\varepsilon$ is an extremely small constant representing the iteration error threshold; that is, when the maximum change of the membership degrees and cluster centers between the current and the preceding iteration does not exceed the iteration error threshold, the algorithm reaches a better state and eventually converges to a local minimum of the objective function. Otherwise, $k = k + 1$, and the membership degrees and cluster centers continue to be updated. In image segmentation, the FCM membership matrix represents the degree to which each pixel of the gray-level image belongs to each gray-level cluster center. Determining the gray-level center with the highest degree of membership for a pixel enables straightforward segmentation of the pixels into regions with the same gray value.

Based on the previous analysis, in the visual image processing of the welding process, the two approaches are merged on the basis of the spatial-information FCM-clustering algorithm and the Sobel operator. Assume that the gray image matrix is converted into an N-dimensional vector and used as the data input; the steps of the image-processing algorithm that fuses FCM with the Sobel operator are given below (see Algorithm 1).

Function FCM-Sobel (X)
Input: data set X, number of clusters C, weighting index m, error threshold ε
Output: a membership matrix U, image edge E
Step 1: given a set of points X = {x_1, x_2, ..., x_N}, set the iteration count k = 0;
   Randomly initialize the cluster centers c_j, j = 1, 2, ..., C.
Step 2: apply Equation (9) to calculate the FCM objective function J.
Step 3: according to J, use Equation (14) to calculate the membership matrix U.
Step 4: fix U.
   Utilize Equation (18) to revise the cluster centers c_j.
Step 5: amend c_j.
   Use Equation (20) to revise the membership function u_ij.
Step 6: compare U^(k) and U^(k+1).
   if max|U^(k+1) − U^(k)| ≤ ε, then
    U and c_j are obtained, and the iteration is over;
    Continue to Step 7;
   else do
      k = k + 1, return to Step 3.
   end.
Step 7: according to the principle of maximum membership,
   pixel x_i is classified into the class j that satisfies
   u_ij = max{u_i1, u_i2, ..., u_iC}.
Step 8: image segmentation is over.
   After the classes are divided, label the grayscale of pixels in the same class and in different classes respectively to form the segmented image.
Step 9: according to the segmented image, use Equation (5) to smooth the image.
Step 10: apply Equations (1), (2), and (3) to calculate the gradient value G.
Step 11: judge the image edge.
   If G ≥ T (or |G_x| ≥ T or |G_y| ≥ T) is met, then
     the pixel is an edge point;
   else do
    repeat Step 10 for the next pixel.
   end.
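For concreteness, the following Python sketch outlines Algorithm 1 on a grayscale image. It is a minimal illustration, not the authors' implementation: the 3×3 neighborhood averaging stands in for the spatial modification of Equation (20), and the cluster number, fuzzifier m, and Sobel threshold T are assumptions.

```python
# Minimal sketch of the FCM-Sobel fusion (Algorithm 1). The spatial step
# averages each pixel's membership over a 3x3 neighborhood before
# renormalizing; cluster number, fuzzifier m, and threshold T are
# illustrative assumptions.
import numpy as np
from scipy import ndimage

def fcm_sobel(gray, n_clusters=3, m=2.0, eps=1e-4, max_iter=100, T=150.0):
    x = gray.astype(float).ravel()                      # image as N-dim vector
    rng = np.random.default_rng(0)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)                   # Eq. (6): rows sum to 1
    for _ in range(max_iter):
        um = u ** m
        c = um.T @ x / um.sum(axis=0)                   # Eq. (18): cluster centers
        d = np.abs(x[:, None] - c[None, :]) + 1e-12     # distances to centers
        u_new = 1.0 / (d ** (2.0 / (m - 1)))
        u_new /= u_new.sum(axis=1, keepdims=True)       # Eq. (14): memberships
        # Spatial modification (in the spirit of Eq. (20)): 3x3 averaging.
        u_img = u_new.reshape(gray.shape + (n_clusters,))
        for j in range(n_clusters):
            u_img[..., j] = ndimage.uniform_filter(u_img[..., j], size=3)
        u_new = u_img.reshape(-1, n_clusters)
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(u_new - u)) <= eps:            # Eq. (21): termination
            u = u_new
            break
        u = u_new
    labels = u.argmax(axis=1).reshape(gray.shape)       # maximum membership
    seg = c[labels]                                     # segmented image (Step 8)
    smoothed = ndimage.uniform_filter(seg, size=3)      # Eq. (5) smoothing
    gx = ndimage.sobel(smoothed, axis=1)                # Eq. (1)
    gy = ndimage.sobel(smoothed, axis=0)                # Eq. (2)
    edges = np.hypot(gx, gy) >= T                       # Eqs. (3), Step 11
    return labels, edges
```

In practice, the routine would be applied to the median-filtered grayscale frame; the returned label map corresponds to Step 8 and the Boolean edge map to Step 11.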
3.2. Method Analysis of Image Information Extraction

Weld quality information, including weld appearance and internal and external flaws, is reflected directly in the shape and geometric properties of the weld pool, since these have the greatest impact on weld dimensions such as width, length, and depth. A precise and accurate picture of the weld pool can be obtained from a visual-sensing intelligent monitoring system of the welding process, and this image can be further utilized as the foundation for a knowledge model. Several papers have proposed different edge detection methods for the weld pool image. For instance, Matsunawa et al. [37] proposed a new MAG weld pool edge extraction algorithm based on the component tree model, Gallo et al. [38] proposed an image-processing method based on the grayscale characteristics of the weld seam, and Cui et al. [39] proposed an image-processing method based on a composite edge detection operator. Even though these methods benefit image processing to a certain extent, many of them use traditional operators without considering the actual conditions generated by the welding process and the uncertainty of the image itself, and they lack resistance to welding process noise.

Based on an investigation of prevailing image-processing methods and the characteristics of CO2 gas-shielded arc welding, this paper introduces the FCM-clustering method to enhance and segment the image so as to emphasize the image contour. The fuzziness and ambiguity in the weld pool image created during welding are explicitly examined. Additionally, considering the advantages of the Sobel edge detection operator in processing images with gradual grayscale changes and considerable noise, a weld pool contour detection method is developed to extract geometric features from the weld pool image; it combines pixel spatial neighborhood information and the FCM-clustering algorithm with the Sobel operator. The specific process is shown in Figure 5.

Based on the previous analysis, the specific process of this method can be divided into several steps. Firstly, the source image information in the welding process is obtained through the CCD acquisition system. Secondly, the image is converted to grayscale on a per-pixel basis; taking the maximum and minimum of the RGB values highlights the brighter and darker areas of the image, respectively. Then, because the black-and-white bright-and-dark point noise generated during camera imaging and image transmission appears at random positions, the image is filtered before the image features are extracted. Median filtering is the commonly used filtering method selected for this experiment. To clearly display the filtering effect, some white "salt" and black "pepper" pixels are randomly sprinkled over the image; a window is then moved along the image, and the median of all pixels within the window replaces the gray value of the pixel at the window center, which completes the filtering. Furthermore, pixels in the image's background and foreground are grouped using the pixel spatial neighborhood information FCM clustering as part of the segmentation procedure. Following this, a window is positioned over the segmented image and its local average is computed, so that the brightness transitions smoothly, sharp gradients are mitigated, and the overall image quality is enhanced; in effect, the area around the weld pool's edge is softened for emphasis. Finally, the Sobel operator is used to calculate the horizontal and vertical gradient approximations of the smoothed image to obtain the edge points. To extract the geometric characteristics of the weld pool, the bounding rectangle of the edge is acquired using the Hough linear transform and then computed in conjunction with the image calibration results.
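The following outline sketches this processing chain with OpenCV-style calls. It is a schematic under assumed parameter values (filter sizes, cluster number, threshold), and fcm_segment is a placeholder standing in for the spatial-information FCM step of Algorithm 1, not a library function.

```python
# Schematic processing chain: grayscale -> median filter -> FCM segmentation
# -> local smoothing -> Sobel gradients -> edge map. Parameter values are
# illustrative; fcm_segment is a placeholder for the Algorithm 1 routine.
import cv2
import numpy as np

def weld_pool_pipeline(frame_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)     # grayscale conversion
    denoised = cv2.medianBlur(gray, 5)                     # median filtering
    segmented = fcm_segment(denoised, n_clusters=3)        # spatial-FCM segmentation
    smoothed = cv2.blur(segmented, (3, 3))                 # 3x3 mean smoothing, Eq. (5)
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)    # horizontal gradient
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)    # vertical gradient
    magnitude = cv2.magnitude(gx, gy)
    edges = (magnitude >= 150.0).astype(np.uint8) * 255    # assumed threshold T
    return edges

def fcm_segment(gray: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Placeholder: map each pixel to a cluster center gray level (Algorithm 1)."""
    # Crude gray-level quantization so the sketch runs end to end.
    levels = np.linspace(float(gray.min()), float(gray.max()), n_clusters)
    idx = np.abs(gray.astype(float)[..., None] - levels).argmin(axis=-1)
    return levels[idx].astype(np.float64)
```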

The geometric feature information of the weld pool mainly includes the maximum weld width, length, circumference, area, and drag angle. By virtue of these characteristics, appropriate technical parameters for monitoring and regulating the dynamic process of ship GMAW welding in real time can be obtained. Furthermore, to evaluate the stability and reliability of the feature extraction method for analyzing the weld pool morphology during dynamic welding, a comparative experiment is designed against two classical edge detection methods, namely the Sobel operator and the Otsu algorithm, and the classic k-means clustering algorithm. In addition, a critical evaluation of the empirical findings obtained by combining the Sobel operator with the FCM-clustering algorithm is presented. The next section focuses on this analysis.

4. Image Segmentation and Edge Detection of Weld Pool

4.1. Image Preprocessing

In the GMAW welding process of the hull structure, the acquired weld pool images are subject to various kinds of interference, such as arc light, spatter, and smoke, to different extents. Consequently, discrete and isolated pixels appear in the image, and this interference is essentially random, unknown noise. Median-filtering technology from morphology is commonly used to remove it. However, such noise is weakened after grayscale preprocessing, and weak noise significantly reduces the effectiveness of filtering and denoising. It can cause sudden increases or decreases in otherwise uniform or continuously changing gray values, forming false or blurred edges and contours and even interfering with the subsequent analysis and processing of the image information. The collected weld pool images are therefore analyzed objectively, and noise is added artificially by treating the random noise as equivalent salt-and-pepper noise; that is, the image's signal-to-noise ratio is evaluated, and pixels with values of 0 or 255 are generated at randomly distributed positions. In this way, isolated small noise spots can be filtered out while the original information is retained after denoising. Moreover, the edge information of the image is preserved intact by the filtering, which effectively solves the problem of incomplete reduction of small-scale noise and provides technical support for the subsequent extraction of weld pool contours. The processing results are shown in Figure 6.
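A small demonstration of this step is sketched below: salt-and-pepper noise at an assumed density is injected into a grayscale image and suppressed with a 3×3 median filter. The noise density, window size, and stand-in image are illustrative, not the experimental settings.

```python
# Inject salt-and-pepper noise (pixels forced to 0 or 255 at random positions)
# and suppress it with a median filter, mirroring the preprocessing in Figure 6.
import numpy as np
from scipy import ndimage

def add_salt_pepper(gray: np.ndarray, density: float = 0.02,
                    seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    noisy = gray.copy()
    mask = rng.random(gray.shape)
    noisy[mask < density / 2] = 0          # "pepper" pixels
    noisy[mask > 1 - density / 2] = 255    # "salt" pixels
    return noisy

if __name__ == "__main__":
    img = np.full((64, 64), 128, dtype=np.uint8)      # stand-in weld pool frame
    noisy = add_salt_pepper(img, density=0.05)
    denoised = ndimage.median_filter(noisy, size=3)   # 3x3 median window
    print("noisy mean error:", np.abs(noisy.astype(int) - 128).mean())
    print("denoised mean error:", np.abs(denoised.astype(int) - 128).mean())
```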

4.2. Image Segmentation Using Traditional Algorithms

It is commonly known that the purpose of segmentation is to separate the target region from the surrounding background and to highlight the edge information of the target region, making subsequent image edge extraction simpler. Nevertheless, existing image segmentation research is subject to limitations that prevent it from satisfying the needs of the present study; one of these is that the selection of image thresholds still depends on the experience of researchers. It can be seen from Figure 7(a) that the key to image segmentation with the traditional Sobel operator is to determine the threshold value T. If an appropriate threshold can be determined, the image can be accurately separated and the edges highlighted. In this example, the lack of practical experience in threshold selection results in poor separation of the image foreground and background, significant error, and a lack of continuity in the extracted edges.

With the advancement of image-processing research, the Otsu algorithm is typically regarded as the best algorithm for threshold selection in image segmentation. Being simple, stable, and computationally efficient, it has been extensively employed in practice. Its underlying principle is a global binarization based on the distribution of the gray-level characteristics of the image. Suppose that the proportion of foreground pixels in the whole image is $\omega_0$ with average gray level $\mu_0$, and the corresponding background quantities are $\omega_1$ and $\mu_1$; the between-class variance is then recorded as $\sigma_B^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2$. Detecting the threshold that maximizes the between-class variance allows pixel-level image segmentation, from which the weld pool edge is extracted. However, when the size ratio between the foreground and the background of the target image is very different and the between-class variance function has two or more peaks, it becomes difficult to find the threshold that maximizes the between-class variance; serious interference from background noise then results in an unsatisfactory image-processing effect, as shown in Figure 7(b). As demonstrated in Figure 7, although this approach provides a processing result better than that of the traditional Sobel operator, it is still challenging to fulfil the needs of weld pool image processing in this study without resorting to empirical knowledge.
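As a reference point for the comparison in Figure 7(b), a compact Otsu threshold search over the gray-level histogram might look as follows; this is an illustrative sketch (in practice, OpenCV's cv2.threshold with the cv2.THRESH_OTSU flag yields the same threshold).

```python
# Exhaustive Otsu threshold: pick t maximizing the between-class variance
# sigma_B^2 = w0 * w1 * (mu0 - mu1)^2 computed from the gray-level histogram.
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0          # foreground mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1     # background mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```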

4.3. Image Segmentation Using Clustering Methods

Following the analysis of the processing results of the traditional algorithms, this paper draws on experiential knowledge of the relevant image processing, examines the problem of threshold selection and the uncertainty of the image itself, and therefore introduces the classical k-means clustering method to segment the edges of the weld pool image in this example. In this approach, the grayscale or RGB values of each pixel in the image are transformed into sample data to generate a sample feature vector space. k points are then chosen as the cluster centers, and the distances between the samples and the centers are computed. The image region segmentation is achieved iteratively by transforming the image segmentation problem into data set clustering. Figure 8(a) shows that the outcome of the weld pool image segmentation is superior to that of the conventional techniques employed in the previous subsection, even though the gray value of the image has been altered by the presence of the background. Besides, since the number of clusters is limited by the variable k, target-background separation remains challenging for images with high levels of noise or fuzziness. Furthermore, limited by the small number of clusters dictated by the k value, the processing result fails to fulfil the experimental conditions.
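A minimal k-means gray-level segmentation of the kind compared in Figure 8(a) can be sketched as follows; the cluster count k = 3 and the iteration limit are assumptions, not the experimental settings.

```python
# Lloyd-style k-means on pixel gray values: assign each pixel to the nearest
# center, update centers as cluster means, and repaint pixels with their
# center value to obtain the segmented image.
import numpy as np

def kmeans_segment(gray: np.ndarray, k: int = 3, n_iter: int = 20,
                   seed: int = 0) -> np.ndarray:
    x = gray.astype(float).ravel()
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)          # initial centers
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()          # update center j
    return centers[labels].reshape(gray.shape)              # segmented image
```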

Similarly, the FCM algorithm is a soft fuzzy-clustering image segmentation algorithm that extends the k-means clustering method. Because it organically integrates the fuzzy theory with image segmentation and is better suited to representing the ambiguous information contained within an image, it has attracted the interest of scholars. Implementing an image segmentation method based on spatial-information FCM clustering improves the segmentation effect and stability of current FCM-based techniques for noisy images. As a starting point, the algorithm divides the pixels of the image to be segmented into classes based on their gray values. The membership degrees of randomly picked pixels in the initial clusters are first computed, and each pixel is then placed in the group with the highest degree of membership. With the cluster centers of the various classes calculated at this point, the membership matrix is updated according to the new cluster centers. The classes are then redivided according to the membership matrix, the iteration is conducted until the objective function converges, and the image segmentation result is finally obtained. This paper rewrites the objective function to take image pixel information into account and assesses the efficiency of the segmentation with the pixel spatial information included. Besides, it combines the neighborhood points of each pixel with the Sobel convolution template to calculate the gradient approximation, locate the image edge points, and then realize the extraction of the weld pool edge. This method, as depicted in Figure 8(b), performs better in extracting the edges of weld pools than the other algorithms. To facilitate the analysis of whether the edge extraction is optimal, the extracted edge will be fused with the original gray image of the weld pool in the following subsection; whether it is conducive to the subsequent extraction of the weld pool geometric features will then be further analyzed.

4.4. Feature Fusion of Image Edges

Based on the edge fusion of the traditional Sobel differential operator [40] in the experiment, the processing result is shown in Figure 9(a), where it can be observed that, without smoothing, the edge extracted from the segmented weld pool image remains discontinuous. In addition, the selection of the threshold value is crucial to the success of this technique, and the inclusion of arc smoke in the fused contour affects the subsequent computation of characteristic parameters such as the weld width and length. Figure 9(b) shows the fusion result of the weld pool edge based on the Otsu algorithm. This approach outperforms the Sobel operator in edge detection for the segmented target-area image. Externally, however, the interference of the arc light, smoke, and other elements during image acquisition makes this technique unfavorable for image processing; internally, the algorithm relies heavily on prior knowledge to determine the threshold that maximizes the between-class variance. This approach is cumbersome and time-consuming and lacks certain essential practical information, which contributes to substandard edge extraction outcomes. Figures 9(c) and 9(d) present the fusion results of the weld pool edge by the k-means clustering and the FCM-clustering fusion Sobel algorithm, respectively. As the edge fusion results show, the typical k-means clustering approach for detecting the weld pool edge is subpar, given its sensitivity to noise and anomalous points and its reliance on prior knowledge to determine the number of clusters k. FCM enhances the k-means algorithm as a flexible fuzzy-clustering division approach, with optimization achieved through the membership function. After optimization, it combines well with the Sobel operator to handle higher levels of background noise; specifically, it extracts the edge of the weld pool without requiring a Sobel threshold to be picked. Figure 9 demonstrates the superiority of this method over the competing algorithms in terms of the consistency and precision of edge detection and fusion, as well as its remarkable resistance to noise. The geometric features of the weld pool are extracted completely and smoothly, as required for the experiment, laying a solid groundwork for future studies on the practical use of this method.

4.5. Validation of Contour Extraction

The weld pool edge plays a significant role in the whole image, as it determines the general shape of the weld pool. After segmentation of the weld pool image, the region of interest obtained from the binary image is relatively clear, but a small amount of noise remains. To collect the valuable edge information of the weld pool and obtain correct geometric feature information from the weld pool image, the edge of the weld pool image must be further identified and processed, providing the groundwork for subsequent work. At present, the typical methods for edge detection of weld pool images include the Sobel, Roberts, and Canny detection operators [41]. When these operators are used for edge detection, a suitable threshold value is typically provided by the operator as a segmentation condition; however, different thresholds need to be selected for different images. Determining the best threshold condition is therefore one of the obstacles in image-processing research; fortunately, the emergence of fuzzy-clustering analysis provides scholars with an opportunity to solve this problem. Since the edge of an image is commonly considered fuzzy during processing, this paper focuses on the uncertainty, fuzziness, and other characteristics of the image. Drawing on fuzzy theoretical knowledge and empirical knowledge, a visual edge detection approach for weld pools is implemented based on the image pixel spatial-information FCM clustering fused with the Sobel operator. Then, using the point-line duality of the Hough transform, a circumscribed rectangle containing the contour information is generated. Using the previously calibrated coordinate data, the concluding step is to calculate the geometric feature parameters. Please refer to Figure 10 for a comprehensive view of how the edge of a weld pool is extracted.
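One way to realize this step, sketched below with OpenCV, is to detect line segments on the binary edge map with the probabilistic Hough transform and then take the minimum-area circumscribed rectangle of the edge points. The function choices and parameter values are assumptions for illustration, not the paper's exact procedure.

```python
# From a binary edge map (uint8, 0/255): Hough line segments as orientation
# cues, then the minimum-area circumscribed rectangle of all edge points,
# whose side lengths (in pixels) approximate the pool length and maximum width.
import cv2
import numpy as np

def bounding_rectangle(edges: np.ndarray):
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=10, maxLineGap=5)  # optional line cues
    ys, xs = np.nonzero(edges)
    points = np.column_stack((xs, ys)).astype(np.float32)
    rect = cv2.minAreaRect(points)          # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)           # 4 corner points of the rectangle
    return rect, corners, lines
```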

In order to offer further validation of the methods utilized in this article, image edge-processing results of weld pools produced at varying currents, voltages, and welding speeds are shown in Figure 11. The Sobel operator and the Otsu algorithm are used to extract the edge characteristics of the weld pool area, and their approximate edges can be extracted efficiently when the weld pool area in the acquired image radiates independently so that its brightness differs from the background. Unfortunately, welding smoke still exerts an impact and enlarges the closed edge region relative to the actual weld pool area; moreover, the edges are not smooth enough, and a few noise points remain. Taking image segmentation as the turning point of image edge feature extraction, this paper applies the k-means clustering technique to segment the weld pool area effectively and then acquire the image edge features in line with the characteristics of the picture; the result is noticeably superior to the previous two approaches. Though effective, k-means is prone to oversegmentation because it relies too heavily on prior experience. By exploiting the merits of the spatial-information FCM-clustering algorithm and the Sobel operator, the proposed method effectively extracts the edge contour and fuses it into the original gray image for comparison, even though the welding process parameters and the levels of arc light, spatter, and other interference are inconsistent across the images. Figure 11 (iv) demonstrates that its processing effect is considerable and its extraction accuracy is strong, satisfying the experimental requirements.

In summary, due to the interference of the arc light, smoke, and spatter during the GMAW welding process of the ship hull structure, the collected weld pool images are affected by random noise to varying degrees. Considering the fuzzy and uncertain random factors of the research object, an image-processing fusion method is proposed based on FCM, with fuzzy clustering as the core, and the Sobel operator. The fuzzy membership function is used to characterize the pixel classification of the image segmentation, and clustering analysis is applied to the membership matrix for convergence iteration and objective function optimization. This pixel classification information establishes a balanced segmentation function to optimize the identification of the Sobel operator parameters, which is further combined with convolution operations to compute the gradient changes and obtain edge pixel coordinates, thus achieving effective segmentation and accurate edge extraction of weld pool images. Compared with traditional methods, this approach can effectively suppress the pseudoedge phenomenon caused by the unclear edge detection that results from the Otsu algorithm's overreliance on prior knowledge, overcome the defects of the Sobel operator being susceptible to external noise interference and to edge positioning errors due to threshold changes, and avoid the oversegmentation caused by the limited accuracy of the k-means method when the empirical value of the cluster number k is unknown. The simulation results of edge segmentation and extraction show that the proposed fusion method successfully provides complete, smooth, and continuous image edge information that is more consistent with the original image and has stronger noise resistance.

5. Geometric Feature Extraction

Extracting the geometric information of the weld pool edge is the ultimate objective of weld pool image processing. After image capture and processing, the intricate edge structure of the weld pool remains discernible. However, calculating the geometric size characteristics of the weld pool is necessary both for future research into the connection between the weld pool shape and the welding process parameters and for real-time monitoring of the weld pool status. The preceding sequence of image-processing stages yields reliable weld pool edge information. This experiment aims to establish a correlation between the size of the weld pool and the pixels that form its edge. To do this, the geometric properties of the front weld pool, such as its width and length, are extracted after screening the smallest circumscribed rectangle of the maximum closed area (i.e., the weld pool edge) using the Hough transform analysis technique.
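
As a simplified stand-in for the Hough-transform screening (assuming an OpenCV-based implementation rather than the authors' own code), the minimum circumscribed rectangle of the largest closed edge region can be obtained along the following lines:

```python
# Simplified sketch: minimum circumscribed rectangle of the largest closed edge
# region, used here as a stand-in for the Hough-transform screening in the text.
import cv2

def pool_bounding_box(edge_map):
    """edge_map: binary image of the weld pool edge (uint8, 0/255)."""
    contours, _ = cv2.findContours(edge_map, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)          # maximum closed area
    rect = cv2.minAreaRect(largest)                       # ((cx, cy), (w, h), angle)
    w_px, h_px = rect[1]
    return rect, max(w_px, h_px), min(w_px, h_px)         # rect, length_px, width_px
```

The rotated-rectangle dimensions then provide the pixel length and width that feed the calibration step described below.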

5.1. Geometric Calibration Modelling

The geometric shape and size of the weld pool reflect the quality of weld formation. Figure 12 describes the geometric parameters [42, 43] of the front of the weld pool in detail. After image processing, the sum of all the pixels enclosed by the whole edge of the weld pool, from its head to its tail, gives the weld pool area. The number of pixels constituting the contour line of the weld pool edge equals the weld pool perimeter. The maximum width of the weld pool is obtained by counting the pixels contained in the widest section of the weld pool perpendicular to the welding direction in the transverse image, and it reflects the width of the weld. The front weld pool length is obtained by counting the pixels contained in the longest section of the weld pool parallel to the welding direction. Given that the lens is positioned perpendicular to the weld seam during longitudinal shooting, the length value extracted from the longitudinal image is close to the true value. In horizontal image processing, the back drag angle of the weld pool is the angle formed between the furthest point at the tail of the weld pool and the maximum horizontal distance; the mathematical expression is shown in Equation (22). In addition to reflecting the slenderness of the weld pool, it also reflects variations in the edge curvature.
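
Under the assumption that the image x-direction is perpendicular to the welding direction and the y-direction is parallel to it, the pixel-level counterparts of these parameters can be computed roughly as follows; the back drag angle of Equation (22) is not reproduced in this sketch:

```python
# Illustrative sketch (assumed conventions): pixel-level geometric features of the
# weld pool, taking x as perpendicular to the welding direction and y as parallel.
import numpy as np

def pixel_features(pool_mask, edge_map):
    """pool_mask: binary weld pool region; edge_map: binary edge contour."""
    area_px = int(np.count_nonzero(pool_mask))            # pixels inside the edge
    perimeter_px = int(np.count_nonzero(edge_map))        # pixels on the contour line
    ys, xs = np.nonzero(pool_mask)
    width_px = int(xs.max() - xs.min() + 1)               # max extent across the weld
    length_px = int(ys.max() - ys.min() + 1)              # max extent along the weld
    return area_px, perimeter_px, width_px, length_px
```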

The resulting image provides an intuitive view of the location and shape of the weld pool. However, the shape of the weld pool in the image undergoes geometric deformation and size change because of the angle at which the CCD camera captures the image. Calibration of the weld pool image is therefore required to transform the data expressed in pixels into physical dimensions. Zhang [44] introduced a versatile technique that incorporates a radial lens distortion model within a camera calibration approach; it is simple to apply and offers a convincing theoretical underpinning for precise camera calibration. Wu [45] photographed the calibration object with a camera and used a ruler directly for calibration; the scale factor between the actual size of the scale object and the distance between image pixels is determined without measuring the length of the calibration object separately. This technique provides inspiration for the calibration in the present experiment. Based on it, a system that exploits passive vision technology to continuously monitor and collect images of the weld pool is presented. Because of the angle formed between the laboratory CCD camera and the weldment during the experiment, the horizontal and vertical pixel scales differ. Therefore, the calibration coefficients of the x-axis and y-axis of the visual image coordinate system need to be determined; since this experiment only requires the size of the weld pool rather than its absolute coordinates, the camera can be calibrated with a basic linear model. In actual imaging, the lens of the CCD camera is not a wide-angle lens, so the shooting range is constrained and the viewing angle is restricted. The CCD camera is kept in the same relative position as during normal operation, and the calibration panel is captured using the same parameters.
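
Under such a linear model, and with notation introduced here purely for illustration, the mapping from image pixel coordinates $(u,v)$ to physical coordinates $(x,y)$ on the workpiece plane can be assumed to reduce to two independent scale factors:
\[
x=K_x\,u,\qquad y=K_y\,v,
\]
where $K_x$ and $K_y$ (mm/pixel) are the per-axis calibration coefficients determined in the next subsection.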

5.2. Geometric-Pixel Conversion Calculation

Typically, as part of the calibration process for CCD cameras, an object of known size and shape is mounted in front of the camera, the CCD captures an image of the object, and the necessary parameters are calculated from the corresponding equations. The calibration board used in this experiment is a black-and-white planar array panel in which each row and each column consist of square blocks, as shown in Figure 13. The scale coefficient between the coordinate system of the actual workpiece and the image coordinate system is the parameter that requires calibration, and it is applied both horizontally (perpendicular to the welding direction) and vertically (along the welding direction). The x-axis is set perpendicular to the welding direction and the y-axis parallel to the welding direction [46], with a calibration coefficient defined for each axis. The calibration panel of the black-and-white lattice, whose side length is known, is placed on the working platform, and the calibrated target image is then captured by the CCD camera. The numbers of image pixels occupied by a square of known size are then counted in the two image directions. On the target image, the information is searched line by line from top to bottom and from left to right; the first corner point of a block is recorded, and the image is then searched line by line from bottom to top to obtain the corresponding lower corner point. The coordinates of several checkerboard intersections are shown in Table S1 (Supplementary Materials). In a comparable way, the information is searched column by column from left to right and from right to left to obtain the leftmost and rightmost corner points of the calibration block.
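
One way of automating this corner search, sketched here with OpenCV's chessboard detector and a placeholder pattern size rather than the experiment's actual board, is:

```python
# Sketch: automating the corner search with OpenCV's chessboard detector to estimate
# the pixel spacing of the calibration squares in both image directions.
# The pattern size (7, 7) is a placeholder, not taken from the experiment.
import cv2
import numpy as np

def corner_spacing(calib_gray, pattern_size=(7, 7)):
    """Return the mean pixel spacing between adjacent corners along x and y."""
    found, corners = cv2.findChessboardCorners(calib_gray, pattern_size)
    if not found:
        raise RuntimeError("checkerboard pattern not detected")
    grid = corners.reshape(pattern_size[1], pattern_size[0], 2)   # rows x cols x (x, y)
    n_x = abs(np.mean(np.diff(grid[..., 0], axis=1)))             # pixels per square, x
    n_y = abs(np.mean(np.diff(grid[..., 1], axis=0)))             # pixels per square, y
    return n_x, n_y
```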

The actual side length of the squares on the calibration object is known, from which the calibrated scale coefficients are obtained as shown in Equation (23); the unit is mm/pixel.
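
In the notation assumed above, with $d$ the known side length of a square and $N_x$, $N_y$ the numbers of pixels it occupies in the two image directions, Equation (23) presumably takes the form
\[
K_x=\frac{d}{N_x},\qquad K_y=\frac{d}{N_y}\quad(\text{mm/pixel}).
\]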

Then, in accordance with the same calibration concept, the corresponding scale coefficients for the perimeter and area of the weld pool can be calibrated in the same way, as shown in the following equation.
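
One plausible form of these coefficients, consistent with the per-axis factors above but not taken from the original equation, is
\[
K_C\approx\frac{K_x+K_y}{2}\ \ (\text{mm/pixel}),\qquad K_S=K_x\,K_y\ \ (\text{mm}^2/\text{pixel}).
\]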

The experimental outcomes demonstrate the effectiveness and simplicity of this approach. Table 4 displays the results of this calibration.

5.3. Comparison of Extraction Accuracy

Once the calibration coefficients in the horizontal and vertical directions have been obtained, the actual distance between two points along the image directions can be determined using the conversion equation. The maximum width and length of the weld pool used in the calculation are those defined above. The calibration conversion is shown in Equation (25), with the unit in mm, where the pixel distances of the maximum width and maximum length of the weld pool edge contour are converted into the corresponding calculated physical dimensions. To confirm the viability of the vision-sensing system and the image-processing algorithm, the weld width is acquired in a welding test in which the welding speed is varied, and the calculated weld width is obtained by applying the calibration scale-factor conversion. Meanwhile, to evaluate and compare the performance of the model, the ground-truth value of the weld size must also be measured.
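
With the notation assumed earlier, and writing $N_W$ and $N_L$ for the pixel distances of the maximum width and length of the edge contour, Equation (25) can be taken as
\[
W=K_x\,N_W,\qquad L=K_y\,N_L\quad(\text{mm}).
\]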

The instrument used to measure the actual weld seam size is a research-grade metallurgical microscope, as shown in Figure 14(a), which adopts an infinity optical system and can measure the internal structures of metals and alloys and identify defects. It also offers a long working distance, high color reproduction, and reliable imaging quality. Equipped with the corresponding measurement and analysis software, a motorized stage, and a focus drive, it meets the measurement needs of various types of metals in different forms. In this case, the formed weld seam of a 6 mm low-carbon steel plate for the hull structure is used as the measurement object, and the welding process parameters include the welding current, the arc voltage, and the welding speed (settings including 26 and 29 cm/min). The actual position of the weld pool at a specific moment can be determined from the conversion relationship between the length of the formed weld seam and the welding speed. The actual position of the weld pool corresponding to the test image is determined by independently marking the baseline, and the weld seam contour line is observed and extracted with the edge detection function. The actual weld width of the weld pool at the corresponding position is obtained automatically by adding auxiliary lines, as shown in Figures 14(b) and 14(c).

The forming features of the weld, such as the front weld width, reinforcement, and weld depth, are an important premise for ensuring welding quality. However, in practical engineering applications, the complexity of the ship structure and its various processes, together with environmental constraints, make it difficult to observe the three-dimensional geometric characteristics of the weld pool directly in real time. Therefore, the performance evaluation of the proposed technique is based mainly on a comparison of weld width determination accuracy.

In this case, three sets of welding conditions representing different process parameters, namely, the welding current, the arc voltage, and the welding speed (settings including 26 and 29 cm/min), are selected. The visual information collected under these conditions serves as the test data set for the image-processing model. To reduce the random impact of test-image selection on the sample size, weld pool images at three typical moments during the welding steady state are selected as the characteristic test data for each condition. Four processing methods are then used to calculate the weld width values, the corresponding errors, and the accuracy evaluation indexes for the three test images at different moments under each condition, and their mathematical expectations are calculated. The specific calculated values are shown in Table 5.

The fusion-width values in Table 5 are larger than the measured weld widths, which is attributed to the low contrast between solid and molten low-carbon steel and to the noticeable arc interference. The difference between the calculated value and the actually measured value is called the absolute error, and the ratio of the absolute error to the actual value is called the relative error, as shown in the following equation.
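
In the notation assumed here, with $W_{\mathrm{c}}$ the calculated width and $W_{\mathrm{m}}$ the measured width, the error definitions read
\[
E_{\mathrm{a}}=\lvert W_{\mathrm{c}}-W_{\mathrm{m}}\rvert,\qquad E_{\mathrm{r}}=\frac{E_{\mathrm{a}}}{W_{\mathrm{m}}}\times100\%.
\]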

After calculation, the absolute error between the width calculated by the proposed method and the actual value is smaller than that of the other three methods. As illustrated in Figure 15, the error of this method remains within 0.08 mm over the range of welding speeds. Compared with the conventional Sobel, Otsu, and K-means algorithms, the extraction accuracy of the implemented method is improved by 77%, 73%, and 40%, respectively, satisfying the practical requirements of welding. At the same time, this result demonstrates that the visual-sensing system and the image-processing algorithm developed for the weld pool are effective and trustworthy.

Given the considerable discrepancy between the widths predicted by the other three methods and the real value, it is not worthwhile to use them to calculate the additional geometric parameters of the weld pool, so this paper does not pursue them further. The foregoing analysis shows that the proposed approach approximates the actual weld width more closely, supporting its suitability for further experimentation. Other geometric characteristics of the weld pool, such as its area, length, circumference, and trailing angle, can also be determined with this technique. As shown in Table 6, this provides valuable benchmark data for online, real-time monitoring of the dynamic condition of the welding process.

After the preceding analyses, image segmentation and edge detection are the technical prerequisites and accuracy guarantees for geometric feature extraction of the GMAW weld pool in the ship hull structure. On the premise of obtaining effective edge contour information, the present research uses the Hough transform and a predefined neighborhood window function to traverse and examine the set of edge line segments, discriminates the edge set in the rectangular buffer optimization area using the angle resolution parameters generated by the corner detection method, and obtains the optimal contour pixel information. According to the shooting angle of the visual sensor and the weld seam of the plate, a planar array calibration plate is selected to establish a linear geometric-pixel transform model, which solves the directional scale ratio between the collected image pixels and the actual workpiece coordinates. In addition, a function describing the change of weld pool size features is set up based on the imaging principle of CCD cameras, and left- and right-view image calibrations are used to achieve lossless conversion between corner pixel coordinates and physical coordinates. Comparison with traditional methods over different experiments shows that the proposed fusion method achieves the highest accuracy in extracting the weld pool width, with an average error within 0.08 mm, meeting engineering requirements. Similarly, other geometric features of the weld pool, such as the area and trailing angle, are calculated to provide research ideas for the development of intelligent monitoring technology for hull structure GMAW in the future.

6. Conclusions

The arc light, spatter, smoke, and other phenomena developed in the highly nonlinear process of hull structure GMAW are the primary focus of this paper, which explores the impediments to image acquisition and feature extraction of the weld pool. To optimize and develop the visual sensing of welding dynamics for acquiring high-resolution images, an image-processing technique based on FCM-Sobel fusion is created, which accomplishes edge retrieval and accurate extraction of the weld pool. In addition, to compare the algorithm with conventional techniques, various experimental schemes are formulated using the corner detection algorithm in conjunction with the Hough transform for extracting geometric characteristics from the segmented images. The specific conclusions are as follows:

(a) Taking hull structure GMAW as the research object, a cut-off composite dimming and near-infrared narrowband filtering system is designed and developed by analyzing the characteristic spectral lines of the arc light, and a dedicated CCD camera is selected to build an experimental platform for visual-sensing detection. With this platform, the acquired weld pool images have higher sharpness and suffer less interference from the arc light and smoke.

(b) The clustering advantage of the FCM-Sobel fusion method can effectively reduce noise interference, achieve accurate segmentation of the weld pool image, and calculate the pixel gradient values to improve the accuracy of edge detection. The simulation results demonstrate that, under different welding process conditions and in comparison with established techniques including K-means, Otsu, and Sobel, the FCM-Sobel fusion model generates smoother and more continuous image-processing information, with no breakpoints, less noise, and more accurate edge extraction.

(c) The corner detection technique is used in conjunction with the Hough transform to map the image space into the parameter space and to provide a lossless transformation between pixel coordinates and distance coordinates in the parameter space. The simulation results of geometric feature extraction reveal that the FCM-Sobel fusion technique has the highest extraction accuracy of the weld pool width under varying welding conditions; the absolute error is kept under 0.08 mm and fulfils the engineering requirement. Compared with conventional approaches such as K-means, Otsu, and Sobel, the accuracy of the fusion model is increased by 40%, 73%, and 77%, respectively. Superior visual information feedback for the dynamic monitoring of welding quality is also provided through the calculation of the significant characteristic parameters of the weld pool area, perimeter, length, and back drag angle.

Data Availability

The data used to support the findings of this study are available from the corresponding authors upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

Acknowledgments

All authors are very grateful to the Intelligentized Robotic Welding Technology Laboratory of the Shanghai Jiao Tong University and the Guangxi Engineering Technology Research Centre of Ship Digital Design and Advanced Manufacturing for their experimental guidance and intelligence support during this research. We are sincerely thankful for the generous financial funding of the National Natural Science Foundation of China (nos. 51969001 and 52261044), the Guangxi Innovation-Driven Development Special Fund Project of China (no. GuikeAA18118002-3), the Guangxi Natural Science Foundation of China (nos. 2021GXNSFBA075023 and 2018JJA170110), the Guangxi Science and Technology Plan Project of China (no. GuikeAD18281007), and the Innovation Project of Guangxi Graduate Education (nos. YCBZ2019050 and 2020YCXB01).

Supplementary Materials

Supplementary 1. Table S1: the calibration/image coordinates of the corner points in the checkerboard.

Supplementary 2. Figures S1–S12: the captured data set of test images under different process conditions.