The Prediction of Yarn Elongation of Kenyan Ring-Spun Yarn using Extreme Learning Machines (ELM)

The optimization of cotton yarn manufacture involves several processes, and the prediction of yarn quality parameters forms an important area of investigation. This research work concentrated on the prediction of cotton yarn elongation. Cotton lint and yarn samples were collected from textile factories in Kenya and tested under standard testing conditions. Cotton lint parameters, machine parameters and yarn elongation were used to design yarn elongation prediction models. The prediction models used three network training algorithms: backpropagation (BP), an extreme learning machine (ELM), and a hybrid of differential evolution (DE) and an ELM, referred to as DE-ELM. The prediction models recorded a mean squared error (mse) value of 0.001 using 11, 43 and 2 neurons in the hidden layer for the BP, ELM and DE-ELM models respectively. The ELM models exhibited faster training speeds than the BP algorithm, but required more neurons in the hidden layer than the other models. The DE-ELM hybrid algorithm was faster than the BP algorithm, but slower than the ELM algorithm.


Introduction
The textile industry is an important one, especially for developing countries like Kenya, due to its labour-intensive nature. The industry is vast and produces a variety of products that include fibres, yarns, fabrics and garments. The manufacture of cotton ring-spun yarn involves assembling a group of fibres and passing them through a chain of processes that include opening, cleaning, drafting and twisting to bind the fibres together so that they form a continuous strand. The raw material

Yarn prediction models
The prediction of yarn quality properties using ANN can be accomplished using a single hidden layer feedforward network, whose network parameters include input to hidden layer weights (W1), hidden layer biases (b1), a hidden layer transfer function (f1), hidden layer to output layer weights (W2), output layer biases (b2) and an output layer transfer function (f2), as shown in Figure 1. One of the most commonly used techniques to train a feedforward network is the backpropagation algorithm, where the weights and biases are iteratively updated until the set target error is attained. Feedforward networks have been used by several researchers to predict yarn quality properties [1-3]. According to Huang et al. [4, 5], the weights and biases of a single hidden layer network can be randomly selected and then processed through the hidden layer transfer function (f1). Eliminating the output layer function (f2) in a single hidden layer network renders the network a linear system, and the hidden layer to output layer weights (W2) can thus be analytically determined using a generalised inverse operation. Such modified networks were given the name extreme learning machines (ELM). Since an ELM chooses the input weights and hidden layer biases randomly, much of the training time traditionally spent in iteratively updating these parameters is saved. However, because the output weights are computed based on the prefixed input weights and hidden layer biases, there is a possibility that a set of non-optimal or unnecessary input weights and hidden layer biases could be selected. Research by Zhu et al. [6] suggested that this problem can be minimised by using the DE algorithm for the selection of the initial weights and biases. This idea was implemented by combining the differential evolution (DE) and ELM algorithms to form a hybrid training algorithm. The hybrid algorithm, thereafter referred to as DE-ELM, works as follows: the DE algorithm selects the initial weights by using mutation, crossover and selection processes to search for the most suitable weights and biases. The selected weights and biases are then sent to the ELM algorithm and used to train the yarn quality prediction model. The difference between the operation of the ELM and DE-ELM algorithms thus lies in the fact that the initial weights and biases of the ELM algorithm are selected randomly and used directly for training, while in the DE-ELM algorithm the initial weights are also randomly selected, but are first put through the DE process (mutation, crossover and selection) before being used for network training.
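The ELM training step described above (random input weights and biases, a hidden layer transfer function, then an analytic generalised-inverse solution for the output weights) can be sketched as follows. This is a minimal numpy illustration, not the authors' code; tanh is an assumed choice for f1, since the paper does not name the transfer function.

```python
import numpy as np

def train_elm(X, y, n_hidden, seed=0):
    """Train a single hidden layer ELM: W1 and b1 are random, W2 is analytic."""
    rng = np.random.default_rng(seed)
    # Randomly selected input weights and hidden biases (never updated)
    W1 = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b1 = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W1 + b1)          # hidden layer output, f1 = tanh (assumed)
    # Output weights via the Moore-Penrose generalised inverse
    W2 = np.linalg.pinv(H) @ y
    return W1, b1, W2

def predict_elm(X, W1, b1, W2):
    return np.tanh(X @ W1 + b1) @ W2
```

Because W1 and b1 are fixed, the only "training" is a single least-squares solve, which is why ELM training is so much faster than iterative backpropagation.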

Materials
The aim of this research work was to design a yarn quality prediction model with special emphasis on Kenyan cotton lint and ring-spun yarn.

Figure 1: Single hidden layer feedforward network

Cotton lint and ring-spun yarn samples were collected from textile mills in Kenya, with care being taken to ensure that the selected factories were similar in terms of machinery, work culture, technology, quality and maintenance policies. This was done with the aim of minimising sample variances that could arise from inter-factory differences. The details of the cotton lint and yarn samples collected are presented in Table 1. The data used in this research work were compiled after the collection and testing of the samples. The collected data consisted of 144 samples, each having 14 factors that were deemed input factors, as shown in Table 2. The 14 factors included cotton fibre properties, machine parameters and yarn quality properties. The output of the prediction model was yarn elongation.

Methods
Three types of prediction models were designed in this research work. As is the practice in network training, the input data were pre-processed. Data pre-processing included data normalisation, where the inputs were scaled to fall within a set limit [7]. As mentioned earlier, a total of 144 samples were used. The data were randomly subdivided into three sets, training, validation and testing, in a ratio of 4:1:1 respectively.
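The pre-processing and 4:1:1 subdivision can be sketched as below. The [-1, 1] scaling range and the random placeholder data are assumptions for illustration, since the paper only states that the inputs were scaled to a set limit.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(144, 14))    # placeholder for the 144 x 14 input table

# Min-max normalisation to [-1, 1] (an assumed scaling range)
lo, hi = data.min(axis=0), data.max(axis=0)
scaled = 2 * (data - lo) / (hi - lo) - 1

# Random 4:1:1 subdivision: 96 training, 24 validation, 24 testing samples
idx = rng.permutation(len(scaled))
train_idx, val_idx, test_idx = idx[:96], idx[96:120], idx[120:]
```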
All the networks used a single hidden layer feedforward network with one output (yarn elongation). The BP-trained prediction model was designed using three layers (i.e. input, hidden and output layers), as discussed by Mwasiagi et al. [3]. The BP algorithm used was the Levenberg-Marquardt algorithm.

Prediction of yarn elongation using BP algorithms
Using the BP algorithm, the elongation prediction model was trained starting from 2 neurons in the hidden layer. The number of neurons was increased in increments of 1 until the set target error (0.001) was attained. The results of the elongation prediction model, as presented in Table 3, showed that the yarn prediction model attained the set target error when the number of neurons in the hidden layer reached 11. That process took 1.58 seconds. The fully trained yarn elongation model with 11 neurons in the hidden layer was exposed to the testing data, and the R-value of the BP yarn elongation model was 0.894 (Figure 2). Table 4 presents the predicted and measured values of yarn elongation.
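The iterative BP training described above can be sketched as follows. This is a hedged illustration: the paper used the Levenberg-Marquardt algorithm, while this numpy sketch substitutes plain gradient-descent backpropagation for brevity, and the network shape (tanh hidden layer, linear output) is assumed.

```python
import numpy as np

def train_bp(X, y, n_hidden, lr=0.05, target=0.001, max_epochs=20000, seed=0):
    """Gradient-descent backpropagation (the paper used Levenberg-Marquardt)."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-0.5, 0.5, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.uniform(-0.5, 0.5, n_hidden)
    b2 = 0.0
    n = len(X)
    for _ in range(max_epochs):
        H = np.tanh(X @ W1 + b1)        # forward pass
        pred = H @ W2 + b2
        err = pred - y
        mse = float(np.mean(err ** 2))
        if mse <= target:               # stop once the set target error is met
            break
        # backward pass: gradients of the mse w.r.t. each parameter
        g_pred = 2 * err / n
        g_W2 = H.T @ g_pred
        g_b2 = g_pred.sum()
        g_H = np.outer(g_pred, W2) * (1 - H ** 2)   # tanh derivative
        g_W1 = X.T @ g_H
        g_b1 = g_H.sum(axis=0)
        W1 -= lr * g_W1; b1 -= lr * g_b1
        W2 -= lr * g_W2; b2 -= lr * g_b2
    return mse
```

The weights and biases are updated every epoch, which is the iterative cost that the ELM avoids.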

Prediction of yarn elongation using the ELM algorithm
The experiments for the prediction of yarn elongation using the ELM algorithm were carried out starting with 2 neurons and increasing in increments of 1 until the set target error was attained. The results of the ELM elongation prediction model, as shown in Table 5, improved rapidly, especially as the number of neurons was varied from 2 to 12.

Figure 3: Predicted and measured values of the ELM model
Thereafter, the change in the mse value was very small, requiring an increase of over 30 neurons to move from an mse value of 0.00941 to 0.00097, when the set target error was attained. The elongation prediction model needed 43 neurons in the hidden layer, and training took only 0.0317 seconds. This is much faster than the BP model, which needed 1.58 seconds to attain the set mse level of 0.001. When exposed to the testing data, the 43-neuron elongation prediction model had an R-value of 0.986 (Figure 3). Table 6 presents the predicted and measured values of the ELM elongation model.
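The search over hidden layer sizes used in these experiments can be sketched as a simple loop around the ELM solve. This is an illustrative numpy sketch (tanh assumed for f1), with synthetic data standing in for the yarn dataset.

```python
import numpy as np

def train_elm_once(X, y, n_hidden, rng):
    """One ELM fit at a given hidden layer size; returns its training mse."""
    W1 = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b1 = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W1 + b1)
    W2 = np.linalg.pinv(H) @ y
    return float(np.mean((H @ W2 - y) ** 2))

def grow_until_target(X, y, target=0.001, max_hidden=60, seed=0):
    """Start at 2 hidden neurons and step by 1 until the target mse is reached."""
    rng = np.random.default_rng(seed)
    for n in range(2, max_hidden + 1):
        mse = train_elm_once(X, y, n, rng)
        if mse <= target:
            return n, mse
    return max_hidden, mse
```

Each size is a fresh one-shot fit, so the whole search remains fast even when, as reported here, more than 40 neurons are needed.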

Conclusion
Yarn elongation prediction models using the BP, ELM and DE-ELM algorithms were designed and trained. The performance of the BP algorithm was compared to that of two non-BP algorithms, i.e. the ELM and DE-ELM algorithms, during the prediction of yarn elongation. The models recorded an mse value of 0.001 using 11, 43 and 2 neurons in the hidden layer for the BP, ELM and DE-ELM models respectively. The ELM models exhibited faster training speeds than the BP algorithm, but needed more neurons in the hidden layer than the other models. The hybrid model (DE-ELM) was second in terms of speed after the ELM model.

Figure 2: Predicted and measured values of the BP algorithm

Table 1: Details of cotton lint and yarn samples

R-value), as explained by Ham and Kostanic [7] and applied by Mwasiagi et al. [3]. The performance of the BP algorithm was monitored as the number of neurons in the hidden layer was varied from 2 until the set mse level of 0.001 was attained. The training error, training time and R-value were recorded. The second training algorithm used to train the elongation prediction models was the ELM algorithm. This was done in a similar manner to the BP algorithm, until the set mse level was attained. The ELM model was improved by using the DE-ELM hybrid algorithm, which used the DE algorithm for the selection of the initial weights and biases; the network was thereafter trained using the ELM model. The performance of the DE-ELM yarn quality prediction models was monitored as the number of generations was varied from 1 to 10 in increments of 1. For comparison purposes, the number of neurons was varied from 11 (the number the BP algorithm needed to achieve the set mse level of 0.001) down to 2 in decrements of 1.
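The DE-ELM hybrid described in the Methods (DE mutation, crossover and selection choosing the initial weights and biases, followed by the analytic ELM solve) can be sketched as follows. This is an illustrative numpy implementation of the classic DE/rand/1/bin scheme, not the authors' code; the control parameters F, CR and the population size are assumptions.

```python
import numpy as np

def elm_mse(params, X, y, n_hidden):
    """Unpack candidate W1/b1, solve W2 analytically, return training mse."""
    n_in = X.shape[1]
    W1 = params[: n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = params[n_in * n_hidden:]
    H = np.tanh(X @ W1 + b1)
    W2 = np.linalg.pinv(H) @ y
    return float(np.mean((H @ W2 - y) ** 2))

def de_elm(X, y, n_hidden=2, pop_size=20, generations=10, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden
    pop = rng.uniform(-1, 1, (pop_size, dim))
    fit = np.array([elm_mse(p, X, y, n_hidden) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: perturb one random member by a scaled difference of two others
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            # crossover: mix mutant and current member component-wise
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            # selection: keep the trial only if it trains a better ELM
            f_trial = elm_mse(trial, X, y, n_hidden)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = pop[np.argmin(fit)]
    return best, float(fit.min())
```

Each candidate's fitness is the training mse after the ELM solve, so DE searches only over the input weights and hidden biases while the output weights are always computed analytically.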

Table 3: Results of the BP elongation prediction model

Table 4: Predicted and measured values for the BP model

Table 5: Results of the ELM elongation prediction model

Table 6: Predicted and measured values for the ELM model

Table 7: Variation of the training mse for the DE-ELM elongation model

Table 9: Predicted and measured values of yarn elongation

The ELM algorithm was, however, faster than the other models: the time required by the ELM model for training was over 80 times shorter than that needed by the BP model. The DE-ELM model provides very good performance with a significantly reduced number of neurons in the hidden layer. Its training speed is slower than that of the ELM model, but still much faster than that of the BP models. The DE-ELM elongation prediction model can therefore be considered the optimum model.