Integrated Computer Vision and Type-2 Fuzzy CMAC Model for Classifying Pilling of Knitted Fabric

Abstract: Human visual inspection for classifying the pilling of knitted fabric not only consumes human resources but also poses an occupational hazard because of the long-term strain on the inspectors' eyes, reducing the efficiency of the entire operation. To overcome this, an integrated computer vision and type-2 fuzzy cerebellar model articulation controller (T2FCMAC) model was devised for classifying the pilling of knitted fabric. First, the fast Fourier transform was used in image preprocessing to strengthen the pilling characteristics in the fabric image. The background and the pilling of the knitted fabric were then segmented through binary and morphological operations, and the characteristics of the pilling were extracted using image topography. A novel T2FCMAC based on a hybrid of group strategy and artificial bee colony (HGSABC) was proposed to evaluate the pilling grade of knitted fabric. The proposed T2FCMAC classifier embeds a type-2 fuzzy system within a traditional cerebellar model articulation controller (CMAC). The proposed HGSABC learning algorithm adjusts the parameters of the T2FCMAC classifier and prevents it from falling into a local optimum, and a group search strategy is used to balance the search capabilities and improve the performance of the artificial bee colony algorithm. The experimental results under fixed and varying illumination indicated that the proposed method exhibited a superior average accuracy (97.3% and 94.6%, respectively) to other methods.


Introduction
The traditional method for pilling grade detection of knitted fabric is based on the number of wear-resistant elements and the determination of grades through eye estimation. This manual detection method may cause errors in fabric grading for several reasons, including fatigue or inexperience of the tester. Moreover, visual inspection of the fabric grade is considerably subjective and unreliable. Although experienced inspectors are less likely to make detection errors, gaining such experience requires high training costs and excessive time.
Previously, the pilling grade of knitted fabric was assessed through manual observation, which causes numerous misjudgments and reduces efficiency. To overcome the aforementioned problems, several researchers [1][2][3][4][5][6][7] have used image processing methods to improve recognition rates and reduce labor costs. Deng et al. [1] used a multiscale two-dimensional dual-tree complex wavelet transform to extract the pilling information from knitted fabric; the Levenberg-Marquardt backpropagation learning rule was used as a classifier to detect the pilling grade. Saharkhiz et al. [2] adopted a two-dimensional fast Fourier transform (FFT) to process the image and used a low-pass filter to distort the fabric surface texture. The three parameters of pilling, volume, and …
The main contributions of this study are summarized as follows:
1. A novel T2FCMAC classifier is proposed that embeds a type-2 fuzzy system within a traditional CMAC.
2. An efficient hybrid of group strategy and artificial bee colony (HGSABC) learning algorithm is proposed. The HGSABC adjusts the parameters of the T2FCMAC classifier and prevents it from falling into a local optimum; a group search strategy is used to balance the search capabilities and improve the performance of the artificial bee colony algorithm.
3. Experiments under both fixed and varying illumination were conducted. The experimental results indicate that the proposed method exhibits a superior average accuracy rate to other methods.

Image Preprocessing and Feature Extraction
To establish a pilling grade database of knitted fabric, preparation for this study involved the classification of fabric samples. In accordance with the Société Générale de Surveillance (SGS) international standard test, the fabric was tested using the Martindale wear tester. The condition of the pilling surface was inspected after the cloth was rolled continuously in the tester, and an eye test was then used to assess the pilling grade of the knitted fabric. In the SGS international standard test, the pilling grade of knitted fabric is divided into five grades: Grades 1, 2, 3, 4, and 5 represent very serious, serious, moderate, slight, and no pilling, respectively. Table 1 presents the five pilling grades of knitted fabric.
In this study, a computer vision method was devised to detect the pilling grade of the fabric. The method comprises three primary steps. First, image preprocessing is performed on the input fabric image to highlight the characteristics of the fabric pilling. Second, image topology is used to collect the characteristics of fabric pilling in the images. Finally, the collected features are used as training data for the pilling grades of knitted fabric and applied to the proposed T2FCMAC for training; the trained T2FCMAC classifier is then used to classify the pilling grade of knitted fabric. Figure 1 displays the flowchart of the proposed pilling grade classification of knitted fabric.

Image Preprocessing
Image preprocessing is essential to reduce the complexity caused by the direct use of an original image for image processing. First, the original image was converted into a gray image; that is, three channels were converted into a single channel using the following formula.
f_Gray(x, y) = I_R(x, y) · 0.299 + I_G(x, y) · 0.587 + I_B(x, y) · 0.114 (1)

where I_R(x, y), I_G(x, y), and I_B(x, y) are the three-channel pixel values of each pixel of the original input image, and (x, y) represents the pixel coordinates in the two-dimensional image. The FFT was then used to enhance the fabric pilling characteristics in the image. Figure 2 exhibits the flowchart of the FFT processing: the image is transformed from the spatial domain to the frequency domain, and after filtering and an inverse FFT, the image is transformed back from the frequency domain to the spatial domain. This procedure strengthens the fabric pilling characteristics and balances the image brightness.
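As a minimal sketch, the channel-weighted conversion of Equation (1) can be written in a few lines of NumPy; the 1 × 1 white-pixel image here is only an illustrative input, not a fabric sample:

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 RGB image to grayscale with the
    luminance weights of Equation (1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return r * 0.299 + g * 0.587 + b * 0.114

# A 1 x 1 "image" holding a single pure-white pixel maps to full intensity.
white = np.array([[[255.0, 255.0, 255.0]]])
print(to_gray(white))
```

Because the three weights sum to 1, a pure-white pixel keeps its intensity of 255 (up to floating-point rounding).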

An N-point one-dimensional Fourier transform generally requires n^2 multiplication and addition operations, whereas the FFT requires only n log2 n multiplications for the same operation; the use of the FFT therefore reduces the operation time. For a two-dimensional image, the conversion formula is as follows:

F(u, v) = (1/MN) ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} f(x, y) e^{−j2π(ux/M + vy/N)}

where M × N is the size of the 2D image, and (u, v) are the frequency domain coordinates with u = 1, 2, …, M − 1 and v = 1, 2, …, N − 1.
To reduce the misjudgment caused by the background texture of the fabric and enhance the intensity of the image, a low-pass filter was used to retain the low frequencies of the fabric pilling and attenuate the high frequencies of the background texture. The filter formula is as follows:

G(u, v) = F(u, v) H(u, v)

where F(u, v) and H(u, v) are the spectrum and the filter, respectively. Gaussian filtering, H(u, v) = e^{−D²(u,v)/(2D₀²)}, where D(u, v) is the distance from the center of the spectrum and D₀ is the cutoff frequency, was used in this study.
Finally, the filtered spectrum was converted back into a spatial image through the inverse fast Fourier transform (IFFT). The formula is as follows:

f(x, y) = ∑_{u=0}^{M−1} ∑_{v=0}^{N−1} G(u, v) e^{j2π(ux/M + vy/N)}

where G(u, v) is the filtered spectrum and (x, y) are the spatial domain coordinates. Figure 3 shows the image obtained through the FFT processing.
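The FFT → Gaussian low-pass → inverse-FFT pipeline described above can be sketched as follows; the cutoff d0 = 30 and the 64 × 64 random test image are illustrative assumptions, not values used in this study:

```python
import numpy as np

def gaussian_lowpass(shape, d0=30.0):
    """Gaussian low-pass filter H(u, v) centered in the shifted spectrum."""
    m, n = shape
    u = np.arange(m) - m / 2
    v = np.arange(n) - n / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2      # squared distance from center
    return np.exp(-d2 / (2.0 * d0 ** 2))

def fft_lowpass(gray, d0=30.0):
    """Spatial -> frequency domain, attenuate high frequencies, -> spatial."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    filtered = spectrum * gaussian_lowpass(gray.shape, d0)
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

img = np.random.rand(64, 64)
smooth = fft_lowpass(img)
print(smooth.shape)  # (64, 64)
```

Attenuating the high frequencies suppresses the fine background texture while the larger, low-frequency pilling regions pass through.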
Binary image processing was used in this study to effectively segment the fabric pilling from the background in the images. The binarization formula is as follows:

g(x, y) = 1 if f(x, y) ≥ t; g(x, y) = 0 otherwise

where t is the threshold value used to separate the pixel values of the fabric pilling from those of the background. Figure 4 shows the binarized image.
After binarization through the aforementioned method, noise was observed: white pixels appeared in the image that did not correspond to fabric pilling. Therefore, a morphological operation was used to eliminate the noise while preserving the fabric pilling areas. There are four basic morphological operations: dilation, erosion, closing, and opening. Opening was used in this study: erosion is first performed on the image, and dilation is subsequently performed, eliminating minor noise in the image. The operation formula for this method is as follows:

A ∘ B = (A ⊖ B) ⊕ B

where A is the binary image, B is the structuring element, and ⊖ and ⊕ denote erosion and dilation, respectively.
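A minimal NumPy sketch of the binarization and opening steps is given below; the threshold t = 128, the 3 × 3 structuring element, and the toy 7 × 7 image are illustrative assumptions:

```python
import numpy as np

def binarize(gray, t):
    """Threshold the gray image: pixels brighter than t become 1 (pilling)."""
    return (gray > t).astype(np.uint8)

def shifts(b):
    """The nine 3 x 3 neighbourhood shifts of a zero-padded binary image."""
    p = np.pad(b, 1)
    h, w = b.shape
    return np.stack([p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)])

def erode(b):
    return shifts(b).min(axis=0)    # 1 only where the full 3 x 3 window is 1

def dilate(b):
    return shifts(b).max(axis=0)    # 1 where any pixel of the window is 1

def opening(b):
    return dilate(erode(b))         # erosion followed by dilation

# A 7 x 7 test image: one isolated bright noise pixel and one 3 x 3 pilling blob.
img = np.zeros((7, 7))
img[0, 0] = 200
img[3:6, 3:6] = 200
opened = opening(binarize(img, t=128))
print(opened.sum())  # 9: the blob is preserved, the noise pixel is removed
```

The isolated pixel has no fully bright 3 × 3 neighbourhood, so erosion removes it, while the subsequent dilation restores the larger blob.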

Feature Extraction
Image preprocessing is primarily performed to strengthen the fabric pilling characteristics and reduce the interference when determining pilling. The features of fabric pilling were extracted from the preprocessed images. The feature extraction was divided into two steps: image topology to detect the fabric pilling areas and extraction of the characteristics of fabric pilling.
The common features used for detecting the pilling grade of the knitted fabric in the images were the number and size of the fabric pills. Image topology includes the inspection of these features [20] using connected component labeling [21] to mark the pilling areas. Connected component labeling uses 4- or 8-connected neighbourhoods on the preprocessed binary images, as shown in Figure 5a,b. First, if the pixel value of the self-pixel (X) is 1, then the self-pixel and its neighboring pixels belong to the same block and are marked with the same label; otherwise, they are marked with different labels. Finally, the pilling grade of the knitted fabric is distinguished using this labeled information.
The same label represents the same object block. Figure 6 shows an example of 8-connected area labeling. As Figure 6b shows, the 8-connected structure considers not only the four pixels on the top, bottom, left, and right of the self-pixel (X) but also the four pixels on its top-left, top-right, bottom-left, and bottom-right. In this study, the shape of the fabric pilling was assumed to be variable; to make the labeling robust to noise, the 8-connected method was used to obtain the fabric pilling areas.

After the connected area labeling, the number of fabric pills in the image and the area of each pill can be obtained. The number of fabric pills equals the number of object areas marked by the connected-area labeling; as shown in Figure 6b, there are three fabric pills. The area of the fabric pilling is equivalent to the sum of pixels valued 1 in the binarized image; therefore, the total area of the fabric pilling in Figure 6b is 65 pixels.
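The counting of pills and pilling area described above can be reproduced with SciPy's connected component labeling; the small binary image below is illustrative and is not the example of Figure 6:

```python
import numpy as np
from scipy import ndimage

# Toy binary image: each 8-connected blob of 1s is one fabric pill.
binary = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
])
eight_connected = np.ones((3, 3), dtype=int)   # 8-connectivity structure
labels, num_pills = ndimage.label(binary, structure=eight_connected)

# Area of each pill = number of 1-pixels carrying that label.
areas = ndimage.sum(binary, labels, index=range(1, num_pills + 1))
print(num_pills)           # 3 pills
print(int(binary.sum()))   # total pilling area: 8 pixels
```

The pill count and per-pill areas produced here are exactly the two features fed to the classifier in the next section.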

T2FCMAC for Pilling Classification of Knitted Fabric
This section describes the use of the T2FCMAC as a classifier. In the T2FCMAC classifier, inputs and outputs are the fabric pilling characteristics and the fabric grade, respectively. The proposed hybrid of group strategy and artificial bee colony was used to adjust the T2FCMAC classifier parameters. The following two subsections detail the proposed classifier and learning algorithm.

T2FCMAC Classifier
The T2FCMAC classifier is introduced in this subsection. The proposed classifier embedded a type-2 fuzzy system within a traditional CMAC [12]. Moreover, a linear combination function of inputs was used as a consequent part of the IF-THEN rule.

Rule_j: IF X_1 is Ã_1j and X_2 is Ã_2j and … X_i is Ã_ij and … and X_n is Ã_nj, THEN y_j = a_0j + ∑_{i=1}^{n} a_ij X_i

where X_i and y are the input and output variables, respectively, Ã_ij is the linguistic variable of the interval type-2 fuzzy set, n is the input dimension, and a_0j + ∑_{i=1}^{n} a_ij X_i represents the linear combination function of the inputs in the consequent layer. The structure of the T2FCMAC classifier is shown in Figure 7.


Layer 1 (Input Layer):
The input data are imported into the next layer from this layer, and this includes only the transmission of information without any computation.

Input^(1)_i = X_i, i = 1, 2, …, n (8)

where n represents the input dimension.

Layer 2 (Fuzzification Layer): Fuzzification transforms the input variables into linguistic variables. Each interval type-2 fuzzy set Ã_ij is defined as shown in Figure 8. The interval type-2 fuzzy set comprises two uncertain means [m_1, m_2] and a fixed variance (σ). The Gaussian membership function of the interval type-2 fuzzy set Ã_ij is calculated as follows:

µ_ij(X_i) = exp(−(X_i − m_ij)² / (2σ_ij²)), m_ij ∈ [m_1ij, m_2ij] (9)

where m_ij ∈ [m_1ij, m_2ij] and σ_ij represent the mean and the variance of the jth Gaussian membership function for the ith input, respectively. Parallel movement of the mean over [m_1ij, m_2ij] generates a footprint of uncertainty, bounded by the upper (µ̄_ij) and lower (µ_ij) membership degrees:

µ̄_ij(X_i) = N(m_1ij, σ_ij; X_i) if X_i < m_1ij; 1 if m_1ij ≤ X_i ≤ m_2ij; N(m_2ij, σ_ij; X_i) if X_i > m_2ij

and

µ_ij(X_i) = N(m_2ij, σ_ij; X_i) if X_i ≤ (m_1ij + m_2ij)/2; N(m_1ij, σ_ij; X_i) otherwise

where N(m, σ; X) = exp(−(X − m)²/(2σ²)). Thus, the output of layer 2 is an interval [µ_ij, µ̄_ij]. Figure 8 depicts the interval type-2 fuzzy set.
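A sketch of the upper and lower membership computation for a Gaussian set with an uncertain mean is shown below; the test values (x = 0.5, m1 = 0.4, m2 = 0.6, σ = 0.2) are illustrative assumptions:

```python
import numpy as np

def gauss(x, m, s):
    """Gaussian membership N(m, s; x)."""
    return np.exp(-((x - m) ** 2) / (2 * s ** 2))

def interval_membership(x, m1, m2, s):
    """Upper and lower membership degrees of an interval type-2 Gaussian
    set with uncertain mean [m1, m2] and fixed variance s."""
    if x < m1:
        upper = gauss(x, m1, s)
    elif x > m2:
        upper = gauss(x, m2, s)
    else:
        upper = 1.0               # plateau between the two means
    # The lower bound uses whichever mean is farther from x.
    lower = gauss(x, m2, s) if x <= (m1 + m2) / 2 else gauss(x, m1, s)
    return lower, upper

lo, up = interval_membership(0.5, m1=0.4, m2=0.6, s=0.2)
print(lo <= up)  # the lower degree never exceeds the upper degree
```

The gap between `lo` and `up` is exactly the footprint of uncertainty at that input value.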

Layer 3 (Spatial Firing Layer):
Each node represents the membership value set of the output of layer 2 as a fuzzy hypercube. An algebraic product operation in each node produces the spatial firing strength F^(3)_j, which is an interval [f_j, f̄_j] computed as follows:

f_j = ∏_{i=1}^{n} µ_ij(X_i), f̄_j = ∏_{i=1}^{n} µ̄_ij(X_i)

where f_j and f̄_j are the lower and upper bounds of the firing strength of the jth rule.
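The algebraic product of layer 3 reduces to an elementwise product over the inputs' membership bounds; the membership values below are illustrative:

```python
import numpy as np

# Lower and upper membership degrees of n = 3 inputs for one rule j.
lower = np.array([0.6, 0.8, 0.9])
upper = np.array([0.9, 1.0, 0.95])

# Algebraic product over all inputs gives the interval firing strength.
f_lower = float(np.prod(lower))   # 0.6 * 0.8 * 0.9
f_upper = float(np.prod(upper))   # 0.9 * 1.0 * 0.95
print(f_lower, f_upper)
```

Since every lower degree is at most its upper counterpart, the product preserves the ordering f_lower ≤ f_upper.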

Layer 4 (Recurrent Layer):
The output of a recurrent node is the temporal firing strength, which depends on the current spatial firing strength and the previous temporal firing strengths; that is, each node refers to the relevant information between itself and the other nodes. The temporal firing strength is a linear combination function expressed as follows:

O^(4)_j(t) = F^(3)_j(t) + ∑_k R_kj O^(4)_k(t − 1)

where O^(4)_j(t) is the current temporal firing strength of the jth node, O^(4)_k(t − 1) is the previous temporal firing strength of the kth node, and R_kj is the feedback weight between nodes k and j. Because the firing strengths are interval valued, the output of this layer is an interval [o^(4)_j, ō^(4)_j]. Initially, R_kj is randomly generated between 0 and 1.

Layer 5 (Consequent Layer): Each node is a linear combination function of input variables in this layer. The equation is expressed as follows:
y^(5)_j = a_0j + ∑_{i=1}^{n} a_ij X_i

where X_i is the ith input variable, and a_0j and a_ij represent constant coefficients.
Layer 6 (Defuzzification Layer): The interval type-2 fuzzy sets were reduced to interval type-1 fuzzy sets through a type-reduction operation, and a crisp output interval [y^(6)_l, y^(6)_r] was obtained through center-of-gravity defuzzification. To reduce the computational complexity of the type reduction, the center-of-sets method was employed to implement the reduction process. The method is described as follows:

Layer 7 (Output Layer):
This layer computes the average of the two bounds obtained in layer 6. The actual output y is derived as follows:

y = (y^(6)_l + y^(6)_r) / 2
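A simplified sketch of the type reduction and averaging in layers 6 and 7 follows. Note that the full center-of-sets reduction iterates with the Karnik-Mendel procedure, whereas this sketch weights each consequent by a single end of the firing interval (an assumption made for brevity); all numbers are illustrative:

```python
import numpy as np

# Interval firing strengths [f_lo, f_up] and consequent values y_j
# for three rules (illustrative numbers).
f_lo = np.array([0.2, 0.5, 0.3])
f_up = np.array([0.4, 0.9, 0.6])
y = np.array([1.0, 3.0, 5.0])

# Simplified center-of-sets reduction: one weighted average per bound.
y_l = float(np.sum(f_lo * y) / np.sum(f_lo))
y_r = float(np.sum(f_up * y) / np.sum(f_up))

# Layer 7: the crisp output is the mean of the two reduced bounds.
y_out = (y_l + y_r) / 2
print(round(y_out, 3))
```

With these numbers, y_l = 3.2 and y_r ≈ 3.211, so the crisp output is about 3.205.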

Proposed Hybrid of Group Strategy and Artificial Bee Colony
In this subsection, the proposed hybrid of group strategy and artificial bee colony for adjusting the T2FCMAC classifier parameters is introduced. The artificial bee colony (ABC) algorithm [22] imitates the foraging behavior of bees. The algorithm comprises three types of bees, employed, onlooker, and scout, which work together to determine the optimal food source. Figure 9 shows the flowchart of the traditional ABC algorithm. First, the positions of SN food sources are randomly initialized, which is equivalent to generating SN feasible solution vectors, where D represents the dimension of each solution vector. After the food source locations are generated, the fitness of each food source is calculated and evaluated. The food source formula is as follows:

X_ij = X_min,j + rand(0, 1) · (X_max,j − X_min,j)

where X_max,j and X_min,j, which define the range of the food sources, are the upper and lower borders of the solution space, respectively. In the traditional ABC algorithm, the numbers of employed and onlooker bees are each generally equal to half of the colony size. When an employed bee moves to a new food source, the fitness value of the food source is calculated; the new food source is used if its fitness value is superior to that of the original one, and otherwise the original food source location is retained. The new food source formula is as follows:

V_ij = X_ij + φ_ij (X_ij − X_kj) (20)

where i ∈ (1, 2, . . . , SN/2), j = (1, 2, . . . , D), V_ij gives the new food source location, φ_ij is a random value in [−1, 1], and X_kj is the food source location of a randomly selected employed bee (k ≠ i).
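The initialization and employed-bee move of the traditional ABC algorithm can be sketched as follows; the sphere objective stands in for the RMSE-based fitness of this study, and SN, D, the bounds, and the random seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
SN, D = 10, 4                      # number of food sources, solution dimension
x_min, x_max = -1.0, 1.0           # borders of the solution space

# Random initialization of SN food sources.
food = x_min + rng.random((SN, D)) * (x_max - x_min)

def sphere(x):                     # stand-in objective to be minimized
    return np.sum(x ** 2, axis=-1)

# Employed-bee move: perturb one dimension toward/away from a random partner.
i = 0
j = rng.integers(D)                        # dimension to perturb
k = (i + 1 + rng.integers(SN - 1)) % SN    # any source other than i
phi = rng.uniform(-1.0, 1.0)
v = food[i].copy()
v[j] = food[i, j] + phi * (food[i, j] - food[k, j])

# Greedy selection: keep the new source only if it is better.
if sphere(v) < sphere(food[i]):
    food[i] = v
print(food.shape)  # (10, 4)
```

The greedy acceptance step implements the "retain the original location otherwise" rule described above.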
An onlooker bee determines its food source location through the roulette method. Equation (20) also provides the formula for an onlooker bee searching for new food; however, k in X_kj then represents an employed bee selected through the roulette method. The roulette method calculates the selection probability from the fitness of each food source as follows:

P_i = fit_i / ∑_{n=1}^{SN} fit_n

where P_i and fit_i are the probability that the ith food source is selected and the fitness value of the ith food source, respectively. Finally, a threshold value was set to prevent the algorithm from falling into a local optimum. According to Equation (20), a new food source is selected when its location is superior to the original; otherwise, the original food source location is retained and a counter is recorded. If the counter reaches the set threshold value, which indicates that the food source is no longer being improved, the employed bee is converted into a scout bee, and a new food source is produced. In the ABC algorithm, employed bees perform a global search, whereas onlooker bees search for food sources superior to those found by the employed bees, and scout bees use mutation operations to escape local solutions. In this algorithm, the numbers of employed and onlooker bees are each fixed at half of the colony, and for different optimization problems this ratio of the two types of bees is not easily adjustable. An improper configuration of the numbers of employed and onlooker bees causes the global and local search capabilities of the algorithm to be inhomogeneous and reduces the effectiveness of the evolution.
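The roulette selection used by the onlooker bees can be sketched as follows; the fitness values and the random seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
fit = np.array([0.9, 0.3, 0.6, 0.2])     # fitness of each food source

# Roulette probability: P_i = fit_i / sum of all fitness values.
p = fit / fit.sum()

# An onlooker bee picks a source in proportion to p; over many draws the
# empirical frequencies approach the probabilities.
draws = rng.choice(len(fit), size=10000, p=p)
freq = np.bincount(draws, minlength=len(fit)) / len(draws)
print(np.round(freq, 2))
```

Fitter sources are therefore revisited more often, which concentrates the local search around promising solutions.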
To overcome the aforementioned problems, the concept of group strategy was introduced. A group search mode was used to obtain balanced search capabilities and improve the performance of the ABC algorithm. The proposed hybrid of group strategy and artificial bee colony (HGSABC) is explained as follows.

Step 1: Initialize
Parameters of the T2FCMAC classifier were optimized using the HGSABC learning algorithm and were coded as shown in Figure 10. Each bee represents a set of T2FCMAC parameters. The parameters include the mean (m ij ) and deviation (σ ij ) of the Gaussian membership functions in the fuzzification layer; R kj , the weighting value of the interactive feedback mechanism in the temporal layer; and a 0j and a ij , the constants of the linear combination function in the consequent layer.

Figure 10. Schematic of individual bees.

Step 2: Calculating and ranking fitness values
The fitness value f of each bee i (B i ) was calculated from the RMSE, where RMSE represents the root mean square error of the classifier output. Fitness values were sorted from largest to smallest, and the group number of each bee was initialized to 0, as shown in Figure 11.

Figure 11. Schematic of ranked fitness values.
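Step 2 can be sketched as follows. The paper's exact fitness equation was not recoverable from the extracted text, so the mapping fit = 1/(1 + RMSE) used here is an assumption (any monotone mapping where smaller RMSE gives larger fitness would behave similarly); the bee outputs are hypothetical.

```python
import math

def rmse(predictions, targets):
    """Root mean square error between classifier outputs and targets."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets))
                     / len(targets))

def fitness_from_rmse(e):
    # Assumed mapping (not the paper's exact formula):
    # smaller RMSE -> larger fitness.
    return 1.0 / (1.0 + e)

# Hypothetical outputs of three bees (candidate T2FCMAC parameter sets)
# against the true pilling grades of four training samples.
targets = [2, 3, 4, 5]
bee_outputs = [
    [2.1, 3.0, 4.2, 4.9],   # bee 0: small error
    [2.5, 3.5, 3.5, 5.5],   # bee 1: medium error
    [4.0, 1.0, 2.0, 3.0],   # bee 2: large error
]
fitnesses = [fitness_from_rmse(rmse(o, targets)) for o in bee_outputs]
# Sort bee indices from largest to smallest fitness, as in Step 2.
ranking = sorted(range(len(fitnesses)), key=lambda i: fitnesses[i],
                 reverse=True)
```

The ranking places the bee with the smallest RMSE first, which is the bee that becomes the first group leader in Step 3.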

Step 3: Group strategy
After the fitness values were sorted, the bee with the maximum current fitness value was taken as the leader of a new group, and the group number was updated. The threshold value of similarity, comprising threshold values of fitness and distance, was computed from the average distance difference and the average fitness difference between the ungrouped bees and the current group leader. The distance threshold value is calculated as follows, where g j and NC represent the jth dimension of the gth group leader and the total number of ungrouped particles, respectively; and D, SN, and T g pos represent the encoded dimension, the total number of particles, and the distance threshold value of the gth group, respectively.
The threshold value of fitness is calculated as follows, where f (B g ) and T g fit represent the fitness value of the gth group leader and the fitness threshold value of the gth group, respectively.
The group number of all bees was initially set to 0. The ungrouped bee with the highest fitness value was set as the new group leader, and its group number was updated to g, where the initial group number was 1, as shown in Figure 12. The remaining unassigned bees were evaluated against the aforementioned thresholds using their distance difference (D i pos ) and fitness difference (F i fit ), which are calculated as follows, where T g pos > D i pos and T g fit > F i fit indicate that the bee is very similar to the current leader. Such a bee was assigned to the current leader's group, and its group number was updated. If this condition was not satisfied, the bee was not assigned to the current group, as shown in Figure 13. As illustrated in Figure 14, step 3 was continued until all bees had been grouped.

Figure 13. Similar bees assigned to same group.
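The grouping loop of Step 3 can be sketched as follows. The paper's exact threshold formulas were lost in extraction, so the averaging rule below (thresholds set to the mean distance and mean fitness gap between the ungrouped bees and the current leader) is an assumption consistent with the text; the positions and fitness values are hypothetical.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_bees(positions, fitnesses):
    """Greedy grouping sketch: the best ungrouped bee becomes a leader,
    and ungrouped bees whose distance and fitness differences to the
    leader fall below the group's thresholds join its group."""
    n = len(positions)
    group = [0] * n          # 0 = not yet grouped
    g = 0
    while any(gr == 0 for gr in group):
        g += 1
        # Leader: ungrouped bee with the highest fitness.
        leader = max((i for i in range(n) if group[i] == 0),
                     key=lambda i: fitnesses[i])
        group[leader] = g
        rest = [i for i in range(n) if group[i] == 0]
        if not rest:
            break
        # Assumed thresholds: average distance / fitness gap to the leader.
        t_pos = sum(euclidean(positions[i], positions[leader])
                    for i in rest) / len(rest)
        t_fit = sum(abs(fitnesses[leader] - fitnesses[i])
                    for i in rest) / len(rest)
        for i in rest:
            d = euclidean(positions[i], positions[leader])
            f = abs(fitnesses[leader] - fitnesses[i])
            if t_pos > d and t_fit > f:   # "very similar" to the leader
                group[i] = g
    return group

# Two clear clusters plus one outlier are separated into groups.
groups = group_bees(
    positions=[[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.05, 5.0], [5.2, 5.0]],
    fitnesses=[0.9, 0.8, 0.5, 0.45, 0.44],
)
```

Because the thresholds shrink with each new group, bees far from every leader end up as leaders of their own (possibly singleton) groups, which is the behavior Figure 14 depicts.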

Step 4: Update bee position
When all the bees have been grouped, each bee searches for the optimal food source. In the traditional algorithm, the bees search for food sources in a randomly generated direction. To improve the convergence speed and effectiveness of the algorithm, information regarding the global optimal solution was added to the search direction so that the employed bees move in a favorable direction. The random search mechanism is preserved, enabling the algorithm to converge rapidly without easily falling into a local solution. The improved search formula of the employed bee is as follows, where ∅1 ij and ∅2 ij represent the weights of the position of a bee toward the swarm position and are assigned random numbers within [−1, 1]; and X bj and X kj are the current best solution in the bee swarm and a randomly selected employed bee position among all bee colonies, respectively. During the search process of the onlooker bees, the group leader (an employed bee) must lead the group members (onlooker bees) in group exploration. Therefore, the original exploration method of the onlooker bee was improved to follow the direction of the leader. The improved search formula for the onlooker bees is as follows, where ∅ ij and X Lj are a random number within [−1, 1] and the leader position in the group, respectively. Figure 15 shows the flowchart of the proposed HGSABC algorithm.
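The update rules of Step 4 can be sketched as follows. The paper's exact equations were lost in extraction, so these forms are assumptions assembled from the stated terms: the employed bee combines a random-peer term (weight ∅1) with a pull toward the swarm best X b (weight ∅2), and the onlooker bee explores around its group leader X L.

```python
import random

def update_employed(x, x_best, x_k, rng):
    """Assumed employed-bee update: keep the random-peer search term
    and add a term toward the swarm-best solution, with phi1, phi2
    drawn uniformly from [-1, 1] per dimension."""
    return [
        x[j]
        + rng.uniform(-1, 1) * (x[j] - x_k[j])      # phi1_ij term (random peer X_k)
        + rng.uniform(-1, 1) * (x_best[j] - x[j])   # phi2_ij term (swarm best X_b)
        for j in range(len(x))
    ]

def update_onlooker(x, x_leader, rng):
    """Assumed onlooker update: explore around the group leader X_L."""
    return [x[j] + rng.uniform(-1, 1) * (x_leader[j] - x[j])
            for j in range(len(x))]

rng = random.Random(42)
x = [0.5, 0.5]                      # current bee position (2-D toy example)
new_e = update_employed(x, x_best=[1.0, 1.0], x_k=[0.0, 0.0], rng=rng)
new_o = update_onlooker(x, x_leader=[1.0, 1.0], rng=rng)
```

With weights in [−1, 1], the onlooker's new position always lies within the interval spanned by its current position and the leader, which is the leader-guided group exploration described above.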

Figure 15. Flowchart of proposed hybrid of group strategy and artificial bee colony (HGSABC) algorithm.

Experimental Results
In this study, FFT was used for the preprocessing of images to intensify the characteristics of fabric pilling and balance image brightness. Moreover, an 8-connected structure was used to extract the fabric pilling area. The T2FCMAC classifier was used for pilling grade detection of the knitted fabric. In this experiment, six fuzzy hypercubes, five inputs (i.e., five features of fabric pilling), and one output (the fabric pilling grade) were used in the T2FCMAC classifier architecture. In the proposed HGSABC, 30 bees were used to adjust the T2FCMAC classifier parameters.

In this study, the five characteristics of fabric pilling were the number of pills, area, average area, area ratio, and density of fabric pilling. The average area of fabric pilling is defined as A average = A fp / N fp , where A fp and N fp represent the area of pilling and the number of fabric pills, respectively. The area ratio of fabric pilling is defined as A ratio = A fp / A total , where A total represents the size of the entire image. The pilling density (D p ) is defined as follows:

To verify the performance of this classifier, 80 records were obtained for each grade of fabric pilling in the database for a total of 320 fabric images (the manufacturer only provided data of fabric images from grade 2 to grade 5). In this experiment, 80% of the images of each grade of pilling in the database were used as training samples, whereas the remaining 20% were used as testing samples. Figure 16b shows that the original image (Figure 16a) was used directly to distinguish the pilling from the background; that is, image preprocessing was not performed. In Figure 16b, the selected pilling is incomplete; in the database generation process, a considerable amount of pilling feature information is lost. By contrast, Figure 16c displays the pilling after image preprocessing, and the shape of the selected pilling is reasonably complete.
In Figure 16b,c, the pilling distribution on the fabric obtained through image preprocessing is superior to that obtained without preprocessing. Experimental results in Figure 16c show that most of the pills in the original image were eliminated, which indicated that the image preprocessing could effectively determine the pilling area.
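The five pilling features described above can be sketched as follows. The count, total area, average area (A fp / N fp ), and area ratio (A fp / A total ) follow the definitions in the text; the density formula used here (pills per unit image area) is an assumption, since the paper's density equation was not recoverable, and the pill areas are hypothetical.

```python
def pilling_features(pill_areas, image_size):
    """Compute the five pilling features used as T2FCMAC inputs:
    number of pills, total area, average area, area ratio, density.
    `pill_areas` lists the pixel area of each detected pill;
    `image_size` is the total number of pixels (A_total)."""
    n_fp = len(pill_areas)                 # number of pills (N_fp)
    a_fp = sum(pill_areas)                 # total pilling area (A_fp)
    a_average = a_fp / n_fp if n_fp else 0.0
    a_ratio = a_fp / image_size
    density = n_fp / image_size            # assumed definition of D_p
    return n_fp, a_fp, a_average, a_ratio, density

# Hypothetical pill areas (in pixels) from a 512 x 512 image.
features = pilling_features([120, 80, 100], image_size=512 * 512)
```

These five values form the input vector of one record in the fabric-pilling database.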
The T2FCMAC classifier was then used to identify the pilling grade, and 10 verifications were performed in this study. Moreover, 80% of the data of each grade were randomly selected as training samples, whereas 20% were selected as testing samples. In this study, 10 different training and testing datasets were established. The detection results of the proposed T2FCMAC classifier are shown in Table 2, where the total accuracy rate is the average accuracy rate over the 10 testing datasets. The overall accuracy rate was 97.3%. The results satisfy the industry requirements.
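The evaluation protocol above (10 random 80/20 splits, stratified by grade, with the final score averaged over the test sets) can be sketched as follows; the `classify` callable stands in for a trained T2FCMAC classifier, and the toy data are hypothetical.

```python
import random

def evaluate(records, labels, classify, runs=10, train_frac=0.8, seed=0):
    """Average test accuracy over repeated random splits,
    drawing train_frac of each grade's records for training."""
    rng = random.Random(seed)
    accs = []
    for _ in range(runs):
        train, test = [], []
        for grade in sorted(set(labels)):
            idx = [i for i, y in enumerate(labels) if y == grade]
            rng.shuffle(idx)
            cut = int(len(idx) * train_frac)
            train += idx[:cut]     # would be used to fit the classifier
            test += idx[cut:]
        correct = sum(classify(records[i]) == labels[i] for i in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# Toy check: a perfect classifier scores 100% average accuracy.
records = list(range(40))
labels = [2, 3, 4, 5] * 10          # grades 2-5, 10 records per grade
acc = evaluate(records, labels, classify=lambda r: labels[r])
```

Stratifying the split per grade keeps each test set balanced across the four pilling grades, so the averaged accuracy is comparable across the 10 runs.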
Recently, Huang and Fu [23] reported textile grading of fleece based on pilling assessment performed using image processing and machine learning methods. Two image processing methods were used. The first method involved using the discrete Fourier transform combined with Gaussian filtering, and the second method involved using the Daubechies wavelet. Machine learning methods, namely the artificial neural network (ANN) and the support vector machine (SVM), were used to objectively solve the textile grading problem.
Furthermore, the proposed method was compared with other methods [2][3][4][23][24][25]. These experiments were also performed 10 times, and the 10 training and testing sample sets were the same as those in Table 2. The second row of Table 3 presents a comparison of the results of the various methods. In the study by Huang and Fu [23], when the Fourier-Gaussian method was used, the classification accuracies of the ANN and SVM were 96.6% and 95.3%, respectively; with the Daubechies wavelet, the overall accuracies were 96.3% and 90.9%, respectively. The results indicate that the proposed method exhibits an average accuracy in fabric pilling grade detection superior to that of the other methods.
In addition, experiments under different illumination were performed. To verify the performance of the proposed classifier, 160 records were obtained for each grade of fabric pilling in the database for a total of 640 fabric images. In this experiment, 80% of the images of each grade of pilling in the database were again adopted as training samples and the remaining 20% as testing samples. The third row of Table 3 presents a comparison of the results of the various methods. As shown in Table 3, the recognition rate obtained under different illumination sources is significantly lower than that obtained under a fixed illumination source.

Conclusions
In this study, an integrated computer vision and T2FCMAC method was proposed for classifying the pilling of knitted fabric. Image preprocessing was used to enhance the pilling area of fabrics and reduce background texture interference. The characteristics of pilling were then selected through topology. Finally, a novel T2FCMAC based on the hybrid of group strategy and artificial bee colony (HGSABC) was proposed to evaluate the pilling grade of knitted fabric. The proposed T2FCMAC classifier embedded a type-2 fuzzy system within a traditional CMAC. The proposed HGSABC learning algorithm was used for adjusting the parameters of the T2FCMAC classifier and preventing the fall into a local optimum. A group search strategy was used to obtain balanced search capabilities and improve the performance of the artificial bee colony algorithm. The experimental results under fixed and different illuminations indicate that the proposed method exhibited an average accuracy (97.3% and 94.6%, respectively) superior to that of other methods in fabric pilling grade detection. Although the obtained results satisfy the industry requirements, to further improve the accuracy rate, the fusion of multiple T2FCMAC classifiers using the fuzzy integral will be adopted in future work.