Article

Chloride Permeability Coefficient Prediction of Rubber Concrete Based on the Improved Machine Learning Technical: Modelling and Performance Evaluation

1 School of Civil Engineering, Architecture and The Environment, Hubei University of Technology, Wuhan 430068, China
2 Wuhan Construction Engineering Company Limited, Wuhan 430056, China
* Author to whom correspondence should be addressed.
Polymers 2023, 15(2), 308; https://doi.org/10.3390/polym15020308
Submission received: 22 November 2022 / Revised: 3 January 2023 / Accepted: 4 January 2023 / Published: 7 January 2023
(This article belongs to the Special Issue Recycling and Resource Recovery from Polymers II)

Abstract

The addition of rubber to concrete improves resistance to chloride ion attack. Rapidly determining the chloride permeability coefficient (DCI) of rubber concrete (RC) can therefore help promote its use in coastal areas. Most current methods for determining the DCI of RC are traditional ones that cannot account for multi-factor effects and suffer from low prediction accuracy. Machine learning (ML) techniques have good non-linear learning capabilities and, unlike traditional methods, can consider the effects of multiple factors. However, ML models easily fall into local optima because of the influence of their parameters. Therefore, a mixed whale optimization algorithm (MWOA) was developed in this paper to optimize ML models. The main strategies are to introduce Tent mapping to expand the algorithm's search range, to use an adaptive t-distribution dimension-by-dimension variation strategy to perturb the individual with the best fitness and thereby improve the algorithm's ability to escape local optima, and to introduce adaptive weights and an adaptive probability threshold to enhance the algorithm's adaptive capacity. For this purpose, data were collected from the published literature. Three machine learning models, Extreme Learning Machine (ELM), Random Forest (RF), and Elman Neural Network (ELMAN), were built to predict the DCI of RC, and all three were optimized with the MWOA. The calculations show that the MWOA is effective: the optimized ELM, RF, and ELMAN models improved prediction accuracy by 54.4%, 62.9%, and 36.4% compared with the initial models. After comparative analysis, the MWOA-ELM model was found to be optimal. The accuracies of the multiple linear regression (MLR) model and a traditional mathematical model were calculated to be 87.15% and 85.03%, lower than that of the MWOA-ELM model.
This indicates that the ML model that is optimized using the improved whale optimization algorithm has better predictive ability than traditional models, providing a new option for predicting the DCI of RC.

1. Introduction

Concrete is one of the most widely used building materials today [1,2]. With the economic boom, the disposal of used tire rubber is becoming a significant issue for urban development [3]. Incorporating waste tire rubber into concrete is considered a viable technical solution that contributes to resource recovery and environmental protection [4,5,6,7,8,9,10]. With the rapid development of marine technology, concrete structures have been widely used in coastal projects, which has led to widespread interest in the structural durability of concrete [11,12,13]. The structural durability of concrete plays a vital role in sustainable development. Chloride ion attack is one of the main factors affecting the durability of concrete structures [14,15,16,17], as it can cause structural failure of concrete and the many problems that follow. Rubber is a hydrophobic material with high resistance to permeation. Chloride ions are transported through concrete with water as the medium, so adding rubber can improve the concrete's resistance to chloride ions [18,19,20,21,22]. The chloride permeability coefficient (DCI) is one of the three main indicators in concrete durability design, and the permeation process it describes is also an effective way to understand chloride ion attack on concrete [23,24]. It is currently difficult to measure the DCI and permeation rate of concrete for every project [25]. To this end, several empirical models and equations have been developed to determine the DCI of concrete [26,27,28]. However, chloride permeation is a complex and time-consuming process, and these traditional methods cannot consider the influence of multiple factors and have low predictive accuracy [29,30]. Therefore, it is particularly important to find a quick and accurate method for determining the DCI of rubber concrete (RC).
Machine learning (ML) techniques have good non-linear learning capabilities, which can learn from given data and make accurate predictions through complex systems [31]. ML technology has also been applied in the study of RC. For example, Nyarko et al. [10] used 457 sets of RC data to build a 9-3-2 deep neural network (DNN) model to predict the strength of RC and demonstrated that the DNN accuracy was high at 0.9779. Gupta et al. [32] successfully investigated the mechanical properties of rubber concrete at high temperatures using a multi-input and multi-output artificial neural network (ANN) model. Ly et al. [33] used DNN to predict the strength of RC successfully and found the best structure to be 12-16-14-3-1. However, existing research has mainly focused on the mechanical properties of RC. Research on the chloride penetration of RC is limited.
Compared with traditional feed-forward neural networks, the Extreme Learning Machine (ELM) sets the input weights randomly and obtains the output weights by least squares. The network has better generalization capability and faster training, as full iterative training is not required, while the probability of falling into local extrema or overfitting is lower [34,35]. However, since the input weights are random, ELM suffers from blind iteration and reduced accuracy [36]. The random forest (RF) model was proposed in 2001 [37]. Its advantages are a controllable generalization error, few parameters to adjust, suitability for high-dimensional feature spaces, and a degree of protection against overfitting; however, its parameters still involve randomness and limitations [2,38]. The Elman neural network (ELMAN) is structurally similar to artificial neural networks, with the difference that ELMAN can store information: the output of the previous neuron is stored to guide the prediction of the next. ELMAN therefore has better dynamic prediction capability, but, owing to the influence of weights and biases, it is prone to gradient explosion, which lowers prediction accuracy [39,40]. The three models have shown extraordinary capabilities in solving prediction problems and have been used in various studies [2,39,40,41,42], but their application to RC is still limited. Therefore, these three models were chosen for this study.
Determining the ML model parameters is crucial, as they directly affect the model's predictive performance [43]. These parameters can be optimized by intelligent algorithms [44], which improve predictive performance by searching for optimal parameter values [45]. The whale optimization algorithm (WOA), proposed in 2016, simulates the foraging process of humpback whales. The WOA has few parameters and is competitive with other optimization algorithms [46], so it is widely used in various fields [47,48,49]. However, the standard WOA suffers from slow convergence and local optimal solutions, so improvements are needed. Tent chaotic mapping produces a more uniform distribution and is often used to optimize the initial population of the WOA [50,51]; it was therefore chosen to initialize the population in this study. Since the linear weight adjustment strategy of the WOA does not adapt to the constantly changing population, an adaptive weight adjustment strategy was introduced, and a probability threshold was introduced to enhance the algorithm's global search ability [52]. At the end of the iteration, to avoid the WOA falling into a local optimum, an adaptive t-distribution dimension-by-dimension variation strategy was used to perturb the optimal individual and increase the WOA's ability to jump out of the local optimum [53]. Research on applying improved algorithms to optimize ML models for predicting the DCI of RC is limited, so this approach is adopted in this study.
In summary, this study chose the ELM, RF, and ELMAN models to predict the DCI of RC, while the WOA was improved to form a new mixed whale optimization algorithm (MWOA) that raises the accuracy of the machine learning models; the optimized models also have advantages over traditional models. First, data were collected from the published literature and compiled into a database for analysis. Second, the three models were used to predict the DCI of RC and were optimized with the MWOA. Third, model performance was evaluated and the optimal model identified. Fourth, a sensitivity analysis was conducted for the models. Fifth, a representative prediction from the optimal model was compared with the actual values. Finally, the optimal model was compared with the traditional model.

2. Database Description and Analysis of Variables

Since the experimental data were collected from the published literature, processing is required so that the models can learn effectively. This study collected 88 sets of RC mix proportion data [22,54,55,56,57,58,59]. The three ML algorithms use nine input variables: (1) measurement method, (2) cement content, (3) water-reducing agent content, (4) water content, (5) water-to-cement ratio, (6) fine aggregate content, (7) coarse aggregate content, (8) rubber size, and (9) rubber content. The DCI is the only output variable. Because different methods were used to measure the DCI of RC, this study distinguishes between measurement methods. Two methods appear in the collected literature: the rapid chloride permeability test (RCPT) and the rapid chloride migration (RCM) test. The RCPT method is therefore coded as 1 and the RCM method as 2. Considering the different types and sizes of rubber, this study also distinguishes rubber by size under each measurement method. The RCPT literature includes two rubber sizes, 0–1 mm and 1–3 mm, coded 1 and 2 by size. The RCM literature includes five rubber types with sizes of 0.063–0.6 mm, 0.25 mm, 0.6–0.7 mm, 1–2 mm, and 4–10 mm, coded 1 to 5 by size. All RC samples selected for this study met the 28-day curing period. Cement substitution materials were not used as an input variable, as they are rarely added in the published literature. Figure 1 shows the heat map of the correlation coefficients between the variables; all coefficients are below 0.8. Some studies suggest that the correlation between variables should be less than 0.8 to reduce the effect of multicollinearity [60,61]. Figure 2 shows the frequency distribution histograms of the input and output parameters.
The statistical analysis of each variable is shown in Table 1, where Stdd denotes the population standard deviation and Stde the sample standard deviation.

3. Method

3.1. Whale Optimization Algorithm

Humpback whales hunt in groups because individually they can only catch small fish and prawns. They have developed a unique hunting method known as bubble-net feeding, from which the WOA takes its name [46]. The algorithm has three main parts: encirclement predation, prey predation, and prey search. The specific process of the WOA is as follows:

3.1.1. Encirclement Predation

In this process, humpback whales search for prey based on each other's positions in the population. Since the location of the best target is not known in advance, the WOA assumes that the current best solution lies within the search range. Once the position of the best target is determined, the other whales approach it and update their positions. The mathematical expression for this process is as follows [62]:
$X(v + 1) = X^*(v) - A \cdot D$

$D = |C \cdot X^*(v) - X(v)|$

where $v$ indicates the number of iterations; $A$ and $C$ are coefficient vectors; $X^*(v)$ is the location of the best target; $X(v)$ is the current location; and $D$ is an intermediate distance term. $A$ and $C$ are calculated as follows [62]:

$A = 2 a r_1 - a$

$C = 2 r_2$

$a = 2 - 2v / V_{max}$

where $V_{max}$ indicates the maximum number of iterations; $r_1$ and $r_2$ are random numbers in $[0, 1]$; and $a$ decreases linearly from 2 to 0.

3.1.2. Prey Predation

The bubble-net foraging method is unique to humpback whales. The WOA optimizes by simulating the spiral bubble-net foraging strategy. Two mechanisms were designed to simulate this behavior:
(1) Shrinking encirclement mechanism: achieved by reducing the value of $a$ in Equation (3). When the best target is identified, the other whales move closer to it, and the current position $(X, Y)$ gradually contracts toward the optimal target position $(X^*, Y^*)$.
(2) Spiral update mechanism: the distance between any whale $(X, Y)$ and the optimal target position $(X^*, Y^*)$ is first calculated, and spiral update equations then simulate the whale's hunting motion. The main expressions are as follows [62]:
$D' = |X^*(v) - X(v)|$

$X(v + 1) = D' e^{bl} \cos(2\pi l) + X^*(v)$

where $b$ is a constant defining the shape of the logarithmic spiral; $l$ denotes a random number in $[-1, 1]$; and $D'$ is the distance between the best target and the whale.
These two mechanisms occur with equal probability (50% each) [62]:

$X(v + 1) = \begin{cases} X^*(v) - A \cdot D, & P < 0.5 \\ X^*(v) + D' e^{bl} \cos(2\pi l), & P \ge 0.5 \end{cases}$

3.1.3. Prey Search

Whale populations randomly search for prey, and the mathematical expression for this process is as follows [62]:
$D_{rand} = |C \cdot X_{rand}(v) - X(v)|$

$X(v + 1) = X_{rand}(v) - A \cdot D_{rand}$

where $X_{rand}(v)$ is the position of a whale selected at random from the current population.
The WOA, like other intelligent algorithms, suffers from the problem of falling into local extremum. Therefore, improvements to the WOA are needed.
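The standard WOA described in this section can be sketched compactly. The following is a minimal, illustrative numpy implementation, not the paper's code: the function name, bounds, population size, and the elitist best-tracking are assumptions for demonstration.

```python
import numpy as np

def woa(fitness, dim, n_whales=20, v_max=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal standard WOA: encirclement, spiral update, and random prey search."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_whales, dim))
    fits = np.array([fitness(x) for x in X])
    best = X[fits.argmin()].copy()
    best_fit = float(fits.min())
    for v in range(v_max):
        a = 2.0 - 2.0 * v / v_max                 # decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            if rng.random() < 0.5:                # shrinking-encirclement branch
                if np.all(np.abs(A) < 1.0):       # exploit: encircle the best target
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                             # explore: search around a random whale
                    x_rand = X[rng.integers(n_whales)]
                    D = np.abs(C * x_rand - X[i])
                    X[i] = x_rand - A * D
            else:                                 # spiral update toward the best target
                l = rng.uniform(-1.0, 1.0, dim)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2.0 * np.pi * l) + best  # spiral shape b = 1
            X[i] = np.clip(X[i], lb, ub)
            f = fitness(X[i])
            if f < best_fit:                      # elitist best-so-far tracking
                best, best_fit = X[i].copy(), float(f)
    return best, best_fit

# demo: minimize the 2-D sphere function
best, best_fit = woa(lambda x: float(np.sum(x ** 2)), dim=2)
```

The shortcomings mentioned above can be seen in such a sketch: once the population contracts around `best`, the spiral and encirclement steps shrink together, and the search can stall at a local optimum.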

3.2. Improved Whale Optimization Algorithm

3.2.1. Tent Chaotic Mapping Initializes Populations

Chaos is a complex, non-linear state exhibiting irregularity and randomness [63], so chaotic mapping can be used to improve an algorithm's performance. The two commonly used chaotic mapping models are the Logistic and Tent maps. Compared with Logistic mapping, Tent mapping has a more uniform distribution, giving the algorithm a wider search range [50]. Therefore, Tent chaotic mapping is used to initialize the population in this study. The expression is as follows [51]:
$x_{n+1} = \begin{cases} 2 x_n, & 0 \le x_n \le 0.5 \\ 2(1 - x_n), & 0.5 < x_n \le 1 \end{cases}$
The expression after the Bernoulli displacement transformation is as follows [51]:
$x_{n+1} = 2 x_n \bmod 1$
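The Bernoulli-shift form can seed a population directly. A minimal sketch follows; the seed value, the guard against the finite-precision collapse of the doubling map, and the function name are illustrative choices, not the paper's implementation.

```python
import numpy as np

def tent_init(n, dim, lb, ub, x0=0.37):
    """Initialize an n-by-dim population with the Tent map x_{n+1} = 2 x_n mod 1."""
    seq = np.empty(n * dim)
    x = x0
    for k in range(seq.size):
        x = (2.0 * x) % 1.0
        if x == 0.0:                      # guard: in floating point the doubling
            x = (x0 + (k + 1) * 0.17) % 1.0  # map eventually hits 0, so reseed
        seq[k] = x
    return lb + (ub - lb) * seq.reshape(n, dim)

# demo: 20 whales, 9 dimensions (one per input variable), scaled to [0, 1]
pop = tent_init(20, 9, lb=0.0, ub=1.0)
```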

3.2.2. Adaptive Adjustment of Weight

The inertia weight is a crucial parameter in the WOA, and appropriate weight values can improve the algorithm's performance. The original WOA does not consider that the prey guides the whales' position updates during iteration, so an adaptive weight formula is established in this paper. The specific expression is as follows [52]:
$w = d_1 (P_{worst} - P_{best}) + d_2 (x_i^{upper} - x_i^{lower}) / t$

where $t$ indicates the current number of iterations; $x_i^{upper}$ and $x_i^{lower}$ denote the upper and lower bounds of $x_i$, respectively; $d_1$ and $d_2$ represent constants; $P_{worst}$ and $P_{best}$ denote the worst and best positions of the current population, respectively. Equations (1) and (7) can thus be improved as:
$X(v + 1) = w X^*(v) - A \cdot D$

$X(v + 1) = D' e^{bl} \cos(2\pi l) + w X^*(v)$
With the introduction of the adaptive weight strategy, the algorithm can adaptively change the weight according to the current distribution of the whale population. At the beginning of the iteration, if the population falls into a local optimum and the difference between the best and worst solutions is small, the value of $d_2 (x_i^{upper} - x_i^{lower})/t$ is not affected by the population distribution; a large weight can still be obtained, which prevents the algorithm from starting with too small a search range. As the iterations increase, $d_2 (x_i^{upper} - x_i^{lower})/t$ decreases and its effect on the weight diminishes. If the algorithm has not yet found an optimal solution, $d_1 (P_{worst} - P_{best})$ plays the dominant role in the weight and lets the algorithm search in larger steps. Together, these two components make the inertia weight highly adaptive and strengthen the algorithm's optimization search capability.
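The two-term behavior described above is easy to verify numerically. A direct transcription of the weight formula, with $d_1$ and $d_2$ as illustrative constants (not the paper's calibrated values):

```python
def adaptive_weight(p_worst, p_best, x_upper, x_lower, t, d1=0.4, d2=0.6):
    """w = d1*(P_worst - P_best) + d2*(x_upper - x_lower)/t.
    The second term dominates early (t small) and fades as iterations grow."""
    return d1 * (p_worst - p_best) + d2 * (x_upper - x_lower) / t
```

For a fixed fitness gap, the weight at iteration 1 is much larger than at iteration 100, which is exactly the early-exploration behavior the text describes.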

3.2.3. Adaptive Adjustment of the Search Strategy

To prevent the algorithm from falling into a local optimum, a probability threshold $Q$ is introduced to decide when the random-search position update is used. The expression for $Q$ is as follows [52]:

$Q = \dfrac{|\bar{f} - f_{min}|}{|f_{max} - f_{min}|}$

where $\bar{f}$ indicates the average fitness of the current population; $f_{min}$ the current best fitness value; and $f_{max}$ the current worst fitness value. For each whale, a random number $q \in [0, 1]$ is compared with $Q$. If $q < Q$, a randomly selected whale updates its position according to Equation (17) and the other whales remain unchanged [52]; otherwise, the whales update their positions according to Equation (10). This allows the algorithm to generate a set of random solutions globally with greater probability in the early iterations, reducing the decline of population diversity and enhancing the global search capability of the algorithm.

$X_{rand}(v) = X_{min} + r (X_{max} - X_{min})$

where $r$ is a random number in $[0, 1]$, and $X_{max}$ and $X_{min}$ are the maximum and minimum values of $X_{rand}$, respectively.

3.2.4. Adaptive t-Distribution Dimension-by-Dimensional Variation Strategy

Population diversity declines in the later iterations of the WOA, making the algorithm prone to falling into local optima. Therefore, this study introduces an adaptive t-distribution dimension-by-dimension variation strategy to perturb the individual with the best fitness and improve the algorithm's ability to escape local optima. Depending on the degrees of freedom $n$, the t-distribution curve takes different shapes: as $n \to \infty$, $t(n) \to N(0, 1)$, and $t(n = 1) = C(0, 1)$, where $N(0, 1)$ is the Gaussian distribution and $C(0, 1)$ is the Cauchy distribution. The Gaussian and Cauchy distributions are thus the two boundary cases of the t-distribution [64]. The dimension-by-dimension variation is calculated as follows [53]:

$X_{new}^d = X_{best}^d + X_{best}^d \times t(iter)$

where $iter$ indicates the current number of iterations, and $t(iter)$ denotes a t-distribution with $iter$ degrees of freedom. The flow chart of the MWOA is shown in Figure 3.
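This mutation maps naturally onto numpy's Student-t sampler. A sketch, assuming (as the text describes) that the degrees of freedom equal the current iteration count; the function name and the seeded generator are illustrative:

```python
import numpy as np

def t_mutation(x_best, iteration, rng=None):
    """Perturb the best individual dimension by dimension with a t-distributed
    step: heavy Cauchy-like jumps early (df = 1), Gaussian-like steps late."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_best = np.asarray(x_best, float)
    step = rng.standard_t(df=iteration, size=x_best.shape)
    return x_best + x_best * step
```

Early in the run the perturbation has heavy tails and can throw the best individual far out of a local basin; late in the run it behaves like a small Gaussian refinement.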

3.3. Machine Learning Models

3.3.1. Extreme Learning Machine

The Extreme Learning Machine is a neural network learning algorithm proposed by Professor Guangbin Huang in 2004 [65]. The ELM evolved from the feedforward neural network: it randomizes the input weights, biases, and the number of hidden-layer neurons, and then obtains the output weights by least squares, with no iterative training of the whole network required [34,35]. ELM is widely used in fields such as pattern recognition, image processing, signal processing, combinatorial optimization, and prediction [66,67,68,69]. The structure of the ELM is shown in Figure 4. The primary calculation process for ELM is as follows:
For $N$ arbitrary samples $(x_i, t_i)$, where $x_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T \in R^n$ and $t_i = [t_{i1}, t_{i2}, \ldots, t_{im}]^T \in R^m$, assume the number of hidden-layer neurons is $\tilde{N}$ and the activation function is $h(x)$. The standard single-hidden-layer feedforward neural network (SLFN) expression is as follows [70]:

$\sum_{i=1}^{\tilde{N}} \beta_i h(k_i \cdot x_j + b_i) = y_j, \quad j = 1, \ldots, N$

where $k_i = [k_{i1}, k_{i2}, \ldots, k_{in}]^T$ denotes the weight vector connecting the $i$th hidden neuron to the input neurons; $b_i$ denotes the bias of the $i$th hidden neuron; $\beta_i = [\beta_{i1}, \beta_{i2}, \ldots, \beta_{im}]^T$ denotes the weight vector connecting the $i$th hidden neuron to the output neurons; and $k_i \cdot x_j$ denotes the inner product of $k_i$ and $x_j$. The activation function is usually Sigmoid, RBF, or Sine; in this study it is Sigmoid.
A standard SLFN with $\tilde{N}$ hidden-layer neurons and activation function $h(x)$ can approximate these $N$ samples with zero error, i.e., $\sum_{j=1}^{N} ||y_j - t_j|| = 0$. Thus, the following expression exists [70]:

$\sum_{i=1}^{\tilde{N}} \beta_i h(k_i \cdot x_j + b_i) = t_j, \quad j = 1, \ldots, N$

The above $N$ equations can be written compactly as [70]:

$H \beta = T$

where

$H(k_1, \ldots, k_{\tilde{N}}, b_1, \ldots, b_{\tilde{N}}, x_1, \ldots, x_N) = \begin{bmatrix} h(k_1 \cdot x_1 + b_1) & \cdots & h(k_{\tilde{N}} \cdot x_1 + b_{\tilde{N}}) \\ \vdots & \ddots & \vdots \\ h(k_1 \cdot x_N + b_1) & \cdots & h(k_{\tilde{N}} \cdot x_N + b_{\tilde{N}}) \end{bmatrix}_{N \times \tilde{N}}, \quad \beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_{\tilde{N}}^T \end{bmatrix}_{\tilde{N} \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}$
$H$ is called the hidden-layer output matrix of the neural network [71,72]. The $i$th column of $H$ is the output vector of the $i$th hidden neuron with respect to the inputs $x_1, x_2, \ldots, x_N$.
Once the input-layer weights and hidden-layer biases are determined, the hidden-layer output matrix $H$ follows from the input samples. The problem thus reduces to finding the least-squares solution of $H \beta = T$ [70]:

$||H(k_1, \ldots, k_{\tilde{N}}, b_1, \ldots, b_{\tilde{N}}) \hat{\beta} - T|| = \min_{\beta} ||H(k_1, \ldots, k_{\tilde{N}}, b_1, \ldots, b_{\tilde{N}}) \beta - T||$

The least-squares solution is as follows [70]:

$\hat{\beta} = H^{\dagger} T$

where $H^{\dagger}$ is the Moore–Penrose generalized inverse of the matrix $H$ [73].
Random input weights and hidden layer bias can lead to problems, such as blind iterations and accuracy degradation [36]. Therefore, this study introduces the MWOA into the ELM model to optimize the input weights and hidden layer bias to improve the model’s accuracy.
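The ELM training procedure (random hidden parameters, then a single pseudoinverse solve) can be sketched in a few lines. This is an illustrative minimal implementation, not the paper's code: the class interface and the weight ranges are assumptions, and the inputs are assumed to be roughly normalized.

```python
import numpy as np

class ELM:
    """Minimal ELM sketch: random input weights and biases, sigmoid hidden
    layer, output weights via the Moore-Penrose pseudoinverse (beta = H^+ T)."""
    def __init__(self, n_hidden=28, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.K + self.b)))   # sigmoid activation

    def fit(self, X, T):
        # random input weights K and hidden biases b (no iterative training)
        self.K = self.rng.uniform(-4.0, 4.0, (X.shape[1], self.n_hidden))
        self.b = self.rng.uniform(-4.0, 4.0, self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ T        # least-squares solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

In the MWOA-ELM model, the entries of `K` and `b` are exactly the quantities the optimizer searches over instead of leaving them purely random.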

3.3.2. Random Forest Model

The RF model, one of the most commonly used regression and classification models, was proposed by Leo Breiman in 2001 [74]. The main idea is to draw n bootstrap samples from the original data set N and train a decision tree on each, the forest being formed from these trees at random [75,76]; the predicted value is then decided by the vote (or average, for regression) of the trees [77]. The RF regression model can be described mathematically: let X be the independent variable (input data) and Y the dependent variable (output data). Assuming the samples (X, Y) are independently distributed, the randomly generated training set is Q, and the prediction is G(X), the mean squared generalization error is [78]:
$E_{X,Y}[Y - G(X)]^2$
Assuming there are $h$ decision trees, the average of their predictions $\{G(Q, X_h)\}$ is the prediction of the RF regression. As $h \to \infty$, the following holds [78]:

$E_{X,Y}[Y - \bar{G}_h(X, Q_h)]^2 \to E_{X,Y}[Y - E_Q G(X, Q)]^2$

where $E_{X,Y}[Y - E_Q G(X, Q)]^2$ denotes the generalization error, noted as $M$. When $h$ is infinite, the average generalization error of a single tree is noted as $M^*$, with the expression [78]:

$M^* = E_Q E_{X,Y}[Y - G(X, Q)]^2$

which satisfies [78]:

$M \le \bar{\rho} M^*$

where $\bar{\rho}$ denotes the weighted correlation coefficient of the residuals. The final RF regression function is as follows [78]:

$Y = E_Q G(X, Q)$
Since the numbers of trees and leaves in the RF model significantly impact its performance, and these parameters are otherwise set with randomness and limitations, this study introduces the MWOA to optimize them. The structure of the RF model is shown in Figure 5.
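The bootstrap-and-average principle behind the RF regression function $Y = E_Q G(X, Q)$ can be illustrated with an ensemble of depth-one trees (decision stumps). This is a hypothetical minimal sketch of bagging, not the RF implementation used in the paper:

```python
import numpy as np

def fit_stump(X, y):
    """Depth-one regression tree: best single-feature threshold minimizing SSE."""
    best = (np.inf, 0, 0.0, float(y.mean()), float(y.mean()))
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:
            left, right = y[X[:, j] <= thr], y[X[:, j] > thr]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, thr, float(left.mean()), float(right.mean()))
    return best[1:]

def bagged_predict(X_train, y_train, X_test, n_trees=50, seed=0):
    """Average the predictions of stumps fitted on bootstrap samples Q."""
    rng = np.random.default_rng(seed)
    preds = np.zeros((n_trees, len(X_test)))
    for h in range(n_trees):
        idx = rng.integers(0, len(X_train), len(X_train))   # bootstrap resample
        j, thr, left_mean, right_mean = fit_stump(X_train[idx], y_train[idx])
        preds[h] = np.where(X_test[:, j] <= thr, left_mean, right_mean)
    return preds.mean(axis=0)                               # Y = E_Q G(X, Q)
```

A full RF additionally grows deep trees and randomizes the features considered at each split; the averaging over bootstrap training sets Q is the part shown here.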

3.3.3. ELMAN Neural Network

The Elman neural network was proposed by Elman in 1990 [79]. ELMAN is a multi-layer dynamic recurrent neural network that approximates nonlinear functions well and is therefore used in many industries [39,80,81]. Like artificial neural networks, ELMAN has an input, a hidden, and an output layer. The difference is that ELMAN also has a unique storage layer. Acting as a delay operator, this layer stores the output values of the hidden-layer neurons from the previous step, giving the network a memory function and improving the network's ability to process dynamic information. The structure of ELMAN is shown in Figure 6. The expressions for ELMAN at time $k$ are as follows [82]:
$x_j(k) = f\left( \sum_{i=1}^{n} \omega^1_{i,j} u_i(k) + \sum_{i=1}^{m} \omega^2_{i,j} c_i(k) \right)$

$c_i(k) = x_i(k - 1)$

$y_j(k) = g\left( \sum_{i=1}^{r} \omega^3_{i,j} x_i(k) \right)$

where $\omega^1_{i,j}$ denotes the weight connecting node $i$ of the input layer to node $j$ of the hidden layer; $\omega^2_{i,j}$ the weight connecting node $i$ of the storage layer to node $j$ of the hidden layer; $\omega^3_{i,j}$ the weight connecting node $i$ of the hidden layer to node $j$ of the output layer; $x_j(k)$, $c_i(k)$, and $y_j(k)$ denote the output vectors of the hidden, storage, and output layers, respectively; and $f$ and $g$ denote the transfer functions of the hidden and output layers, respectively. The transfer function in this study is tanh.
ELMAN determines the number of hidden-layer neurons in the same way as an ANN. The main expression is as follows [31]:

$h = \sqrt{m + n} + a$

where $m$ is the number of nodes in the input layer; $n$ is the number of nodes in the output layer; and $a \in (1, 10)$.
ELMAN’s predictive performance is influenced by weights and biases. Therefore, the optimal weights and biases are found by optimizing the ELMAN neural network using MWOA.
The flow chart of the research process is shown in Figure 7.

4. Evaluation Indicators for the Three Models

This study uses root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R2) to assess model performance. R2 evaluates the accuracy of the model's predictions [83,84]: the closer R2 is to 1, and the closer RMSE, MAE, and MAPE are to 0, the more accurate the model. The four evaluation indicators are expressed as follows [85,86]:

$R^2 = \left( \dfrac{\sum_{k=1}^{N} (q_{0,k} - \bar{q}_0)(q_{t,k} - \bar{q}_t)}{\sqrt{\sum_{k=1}^{N} (q_{0,k} - \bar{q}_0)^2} \sqrt{\sum_{k=1}^{N} (q_{t,k} - \bar{q}_t)^2}} \right)^2$

$MAE = \dfrac{1}{N} \sum_{k=1}^{N} |q_{0,k} - q_{t,k}|$

$RMSE = \sqrt{\dfrac{1}{N} \sum_{k=1}^{N} (q_{0,k} - q_{t,k})^2}$

$MAPE = \dfrac{100\%}{N} \sum_{k=1}^{N} \left| \dfrac{q_{t,k} - q_{0,k}}{q_{0,k}} \right|$

where $N$ indicates the number of samples; $q_{0,k}$ the actual value; $\bar{q}_0$ the mean actual value; $q_{t,k}$ the predicted value; $\bar{q}_t$ the mean predicted value; and $k = 1, \ldots, N$.
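The four indicators can be computed directly; a small numpy sketch (`q0` are actual values, `qt` predictions, and the function name is illustrative):

```python
import numpy as np

def metrics(q0, qt):
    """Return (RMSE, MAE, MAPE, R^2) for actual values q0 and predictions qt."""
    q0, qt = np.asarray(q0, float), np.asarray(qt, float)
    rmse = float(np.sqrt(np.mean((q0 - qt) ** 2)))
    mae = float(np.mean(np.abs(q0 - qt)))
    mape = float(np.mean(np.abs((qt - q0) / q0)))   # assumes no actual value is 0
    r = np.corrcoef(q0, qt)[0, 1]                   # Pearson correlation
    return rmse, mae, mape, float(r ** 2)
```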

5. Results of the Three Models

The objective of the computational analysis was to predict the DCI of RC using the three optimized ML models (MWOA-ELM, MWOA-RF, and MWOA-ELMAN); the models optimized by the standard WOA and the conventional models were also built for comparison. A 5-fold cross-validation operation was used in the calculation process, with results reported as the average over the five folds, to make the results more realistic and avoid chance. The data set was divided into five groups; for each run, one group served as the testing set and the remaining four as the training set. This produced three folds with 70 training and 18 testing samples and two folds with 71 training and 17 testing samples. The data were kept consistent across models during training and testing by programming. The constructed models and the computational results are described in detail in the following subsections.
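The described split of the 88 samples can be reproduced with a simple shuffled 5-fold partition; a sketch (the seed is arbitrary, not the paper's):

```python
import numpy as np

n = 88                                     # number of collected RC mixes
idx = np.random.default_rng(0).permutation(n)
folds = np.array_split(idx, 5)             # fold sizes: 18, 18, 18, 17, 17
splits = [(np.concatenate(folds[:k] + folds[k + 1:]), folds[k])
          for k in range(5)]               # (train indices, test indices) per fold
```

Each `(train, test)` pair then trains one cross-validation run, and the reported metric is the average over the five runs.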

5.1. MWOA-ELM Model Result

Like neural network models, ELM models need to determine the number of hidden layer neurons. In this study, the number of neurons in the hidden layer of the ELM model was calculated by the corresponding program and determined by the trial-and-error method to be 28. The parameter settings for the MWOA-ELM model in this study are shown in Table 2. The parameter settings for the WOA-ELM model are the same as the MWOA-ELM model.
The average results of the three ELM models under 5-fold cross-validation are presented in Table 3. The MWOA-ELM model clearly performs best: its test-set R2 improved from 0.6458 to 0.9971, while its RMSE, MAE, and MAPE were all the lowest among the three ELM models. Figure 8a,b show the Taylor diagrams for the training and testing sets of the three ELM models [87]. The MWOA was effective, as reflected by the MWOA-ELM model being closest to the optimal reference point for each indicator.

5.2. MWOA-RF Model Result

The MWOA was introduced to find the optimal numbers of trees and leaves for the RF model. The parameter settings for the MWOA-RF model in this study are shown in Table 4; the WOA-RF model uses the same settings.
The 5-fold cross-validation results of the three random forest models are presented in Table 5. On the test set, the MWOA-RF model had the highest R2, 0.9341, and the lowest RMSE, MAE, and MAPE, at 1.0164, 0.6533, and 0.0962, respectively. Figure 9a,b show the Taylor diagrams for the three RF models on the training and testing sets. The MWOA-RF model was closest to the optimal reference point on the three evaluation indicators of the Taylor diagram. Therefore, the MWOA effectively increased the probability of the RF model finding the optimal numbers of trees and leaves.

5.3. MWOA-ELMAN Model Result

A three-layer feed-forward MWOA-ELMAN model was established, and the optimal number of neurons, obtained by Equation (32) and the trial-and-error method, was 13. The MWOA-ELMAN model parameter settings for this study are shown in Table 6; the WOA-ELMAN model uses the same settings.
The 5-fold cross-validation results for the three ELMAN models are presented in Table 7. Similar to the pattern of the first two models, the MWOA-ELMAN model has the best results. Figure 10 represents the Taylor diagrams for the training and testing sets. From Figure 10, the results of the MWOA-ELMAN model were closest to the optimal reference point, indicating that the algorithm is effective in optimizing the weights and improving the model’s prediction accuracy.

6. Discussion

6.1. Comparative Analysis of the Three Models

In Section 5, the MWOA was shown to improve the generalization of the ELM, RF, and ELMAN models. The three optimized models are also highly accurate (the literature considers a model highly accurate when its R2 exceeds 0.9 [88]), indicating that ML techniques can meet the required prediction accuracy. However, a comparative analysis is needed to identify the optimal model. Figure 11 shows the metric radar plots for the training and testing sets of the MWOA-ELM, MWOA-RF, and MWOA-ELMAN models. The MWOA-ELM model outperforms the other two on both sets, with the highest R2 and the lowest RMSE, MAE, and MAPE. Figure 12 shows the Taylor diagrams for the training and testing sets of the three models; the MWOA-ELM model performs best, with the lowest error metrics while being closest to the best reference point. Table 8 lists the averages of the 5-fold cross-validation results for the three models. The MWOA-ELM model performed even better during testing than during training, whereas the prediction accuracy of both the MWOA-RF and MWOA-ELMAN models decreased, with the MWOA-RF model decreasing the most (by about 5.6%), indicating that the MWOA-ELM model is very stable. Figure 13 shows box plots of the training and testing sets for the three models, with bars indicating each model's mean value. The box plots show that the MWOA-ELM model is not necessarily the best on the training set but has the lowest mean value; on the testing set it performs best and is relatively stable, with all R2 values around 0.99. This suggests that the MWOA-ELM model is the best of the three, and that with cross-validation the calculated results are realistic and free of chance.

6.2. Sensitivity Analysis

Sensitivity analysis (SA) is an effective method for measuring the influence of the model input parameters on the output parameter, as it provides feedback on the relative importance of each input. Therefore, this study uses the cosine amplitude method (CAM) [89] to perform sensitivity analysis on the three models and the experimental data. The expression is as follows:
R_{ij} = \frac{\sum_{k=1}^{n} x_{ik}\, x_{jk}}{\sqrt{\sum_{k=1}^{n} x_{ik}^{2} \sum_{k=1}^{n} x_{jk}^{2}}}
where x_{ik} denotes the input parameter values; x_{jk} denotes the output parameter values; n denotes the number of data points; and R_{ij} denotes the strength of the relationship between input i and output j.
Figure 14 shows the strength of the relationship between each variable and DCI. The three models show sensitivities similar to those of the experimental model, justifying the developed models. As seen from the graph, the measurement method has the most significant effect on the DCI of RC, followed by FA, while WR has the least impact.
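The CAM strength factor above reduces to a normalized inner product between an input column and the output column. The following sketch is a hypothetical implementation of that formula (variable names are placeholders, not from the paper):

```python
import numpy as np

def cam_strength(x, y):
    """Cosine amplitude method: R_ij = sum(x*y) / sqrt(sum(x^2) * sum(y^2))."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))
```

A strength factor of 1 indicates a perfectly proportional relationship between the input and the output, and values near 0 indicate little relationship, which is how the bars in Figure 14 are ranked.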

6.3. Prediction of Typical Machine Learning Model

In Section 6.1, the MWOA-ELM model was shown to be the best model; therefore, its typical predictions are presented in this section. Figure 15 depicts the regression results for the training and testing sets. The MWOA-ELM model has strong predictive power: its indicator values were R2 = 0.9928, RMSE = 0.3243, MAE = 0.2219, and MAPE = 0.0287 for the training set, and R2 = 0.9987, RMSE = 0.1336, MAE = 0.0979, and MAPE = 0.0187 for the testing set. Figure 16 compares the predicted values of the MWOA-ELM model with the actual values, including the error values. The comparison shows that the predicted values of the DCI of RC are consistent with the experimental values. Notably, the errors on both the training and testing sets are small, which indicates that the MWOA-ELM model can predict the DCI of RC well. These results suggest that predicting the DCI of RC using the MWOA-ELM model is feasible and may contribute to developing a numerical tool for determining durability indicators of RC. The application of intelligent algorithms is equally effective. In the future, increasing the amount of data and the number of input variables could further improve the ability of the MWOA-ELM model to predict the DCI of RC. The weight matrices for the MWOA-ELM model's typical prediction result are shown in Appendix A.

6.4. Forecast Comparison

The overall results of the representative predictions of the MWOA-ELM model were shown in Section 6.3. However, it is also necessary to show the forecasting results within the model, as this provides a more intuitive view of its predictions. Therefore, this section shows the prediction results of the MWOA-ELM model for each data source separately. Figure 17 shows the prediction results of the MWOA-ELM model for the literature [54,55]; as can be judged from Figure 17, the model's predictions are in general agreement with the experimental results. Figure 18 shows the prediction results for the literature [57,58,59]. The model predicts the results of the literature [58,59] well; the predicted curves for the literature [57] show some deviations, but the overall trend is consistent, and the errors are acceptable in terms of the overall results of Section 6.3. Figure 19 shows the prediction results for the literature [56], where both prediction curves show some deviations. This may be because the algorithm fell into a local optimum when optimizing this part of the data, so the model did not learn it sufficiently; in general, however, the errors are still acceptable. Figure 20 reflects the predictions for the literature [22]: the prediction trends are consistent and the errors are relatively small. These results indicate that using the MWOA-ELM model to predict the DCI of RC is feasible. The model can still be further optimized, for example by developing more powerful algorithms and increasing the amount of data.

6.5. Comparative Analysis with Other Models

To further verify the reliability of the MWOA-ELM model, this study introduces a multiple linear regression (MRL) model for comparison [90]. Since the MRL model, like the ML models, studies the effects of multi-factor interactions, it is a natural baseline for the MWOA-ELM model. Figure 21 shows the regression analysis results of the MRL model. The evaluation indicators of the MWOA-ELM model (mean of the overall model results under 5-fold cross-validation) and the MRL model are listed in Table 9. The MWOA-ELM model is clearly superior to the MRL model.
Comparison with other models is equally necessary. Ye [91] developed a mathematical model to predict the DCI of RC. Because the input variables of that mathematical model are the water-cement ratio, the rubber admixture, and the rubber size, the input variables of the MWOA-ELM model were restricted in the same way; keeping the same input variables makes the comparison fairer. The data were obtained from three randomly selected papers to avoid complex calculation [22,54,55]. Figure 22 shows the regression analysis results for the two models, and Table 10 lists their evaluation indicators. From Figure 22, the regression result of the MWOA-ELM model is better, with an R2 of 0.991 compared with 0.8053 for the mathematical model. Similar results are seen in the other error evaluation indicators in Table 10. This indicates that the MWOA-ELM model has better prediction and generalization ability.
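For context, the ordinary least squares fit underlying a multiple linear regression baseline of this kind can be sketched in a few lines. This is not the authors' MRL model; the data, helper names, and variable choices below are placeholders for illustration:

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares for a multiple linear regression baseline.

    Returns the intercept and the coefficient vector."""
    X = np.asarray(X, dtype=float)
    A = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    beta, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return beta[0], beta[1:]

def predict_mlr(intercept, coefs, X):
    """Evaluate the fitted linear model on new inputs."""
    return intercept + np.asarray(X, dtype=float) @ coefs
```

Because such a model is linear in its inputs, it cannot capture the non-linear multi-factor interactions that the optimized ML models learn, which is consistent with the gap reported in Table 9.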

7. Conclusions and Future Prospect

This study used ML techniques to predict the DCI of RC. Three models, ELM, RF, and ELMAN, were developed for the investigation, and the established MWOA was used to optimize them. Four metrics, R2, RMSE, MAE, and MAPE, were used to evaluate the performance of the models. According to the prediction results, the MWOA-optimized ELM, RF, and ELMAN models successfully predicted the DCI of RC, and they achieved the highest R2 and lowest errors compared with the unoptimized models. This indicates that the established algorithm is valid. Comparing the three optimized models, the MWOA-ELM model performs best. The CAM method showed that the three models have sensitivities similar to the experimental model, which justifies the developed models. Comparing the typical prediction results of the MWOA-ELM model with the actual values shows that the predictions generally agree with the experiments, with errors within a reasonable range. Comparisons with the MRL model and a published mathematical model show that the MWOA-ELM model performs best. This suggests that the MWOA-ELM model can accurately predict the DCI of RC.
In summary, this study successfully used ML techniques to predict the DCI of RC while demonstrating that the proposed MWOA is valid. This provides a new option for determining the DCI of RC. However, the results show that the proposed method could be further optimized to better understand the chloride permeation process of RC, including developing more robust algorithms, enlarging the data set, adding input variables, and enhancing the interpretability analysis of the model.

Author Contributions

Conceptualization, X.H. and K.W.; methodology, X.H.; software, X.H.; validation, X.H. and K.W.; formal analysis, X.H.; investigation, X.H. and W.D.; resources, X.H. and W.D.; data curation, X.H. and W.D.; writing—original draft preparation, X.H.; writing—review and editing, H.L.; supervision, H.L., S.W. and T.L.; project administration, H.L.; funding acquisition, T.L. and S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Wuhan Construction Engineering Group Co., Ltd. 2022 Annual Research Project, Grant number 3; Wuhan Urban Construction Group Co., Ltd. 2022 Annual Research Project, Grant number 5; 2022 Hubei Construction Science and Technology Program Project, Grant number 90, and 2021 Hubei Construction Science and Technology Program Project, Grant number 43.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in the article can be obtained from the authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

MWOA-ELM Model:
IW1 = (input-weight matrix; reproduced as an image in the published article)
B1 = [3.11E−05, 7.13E−06, 1.01E−04, 8.49E−05, 2.30E−04, 9.25E−05, 1.48E−04, 1.28E−04, 1.44E−04, 1.19E−04, 2.24E−04,
1.95E−04, 3.91E−11, 1.62E−04, 2.03E−04, 2.39E−04, 6.90E−06, 7.85E−05, 2.04E−04, 2.79E−07, 5.29E−09, 1.30E−04,
1.04E−04, 1.97E−04, 3.65E−05, 2.34E−05, 9.78E−05, 1.09E−04]T
LW1 = [1.67E+11, 7.39E+11, 3.52E+11, 0.37E+11, 1.95E+11, 5.82E+11, 1.71E+11, 2.19E+11, 1.96E+11, 6.51E+11, 4.07E+11, 1.30E+11,
1.13E+11, 2.14E+11, 1.68E+11, 1.61E+11, 2.45E+11, 3.63E+11, 6.37E+11, 3.95E+11, 4.37E+11, 8.69E+11, 1.10E+11,
3.79E+11, 1.42E+11, 8.13E+11, 6.08E+11, 9.04E+11]T
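Given the weight matrix IW1, hidden biases B1, and output weights LW1, a reader could reproduce the ELM forward pass. The sketch below is hypothetical: it assumes a sigmoid hidden activation and omits any input normalization, neither of which is specified in this appendix:

```python
import numpy as np

def elm_predict(X, IW1, B1, LW1):
    """Forward pass of a trained ELM.

    H = g(X @ IW1 + B1) is the hidden-layer output with a sigmoid
    activation g (an assumption here); y = H @ LW1 is the prediction."""
    X = np.asarray(X, dtype=float)
    H = 1.0 / (1.0 + np.exp(-(X @ IW1 + B1)))  # hidden-layer responses
    return H @ LW1                             # linear output layer
```

Note that in an ELM only LW1 is solved analytically during training; IW1 and B1 are fixed once chosen, which is why listing all three suffices to reproduce the typical prediction.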

References

1. Asutkar, P.; Shinde, S.; Patel, R. Study on the behaviour of rubber aggregates concrete beams using analytical approach. Eng. Sci. Technol. Int. J. 2017, 20, 151–159.
2. Sun, Y.; Li, G.; Zhang, J.; Qian, D. Prediction of the strength of rubberized concrete by an evolved random forest model. Adv. Civ. Eng. 2019, 2019, 5198583.
3. Sofi, A. Effect of waste tyre rubber on mechanical and durability properties of concrete—A review. Ain Shams Eng. J. 2018, 9, 2691–2700.
4. Toutanji, H.A. The use of rubber tire particles in concrete to replace mineral aggregates. Cem. Concr. Compos. 1996, 18, 135–139.
5. Skripkiūnas, G.; Grinys, A.; Černius, B. Deformation properties of concrete with rubber waste additives. Mater. Sci. 2007, 13, 219–223.
6. Batayneh, M.K.; Marie, I.; Asi, I. Promoting the use of crumb rubber concrete in developing countries. Waste Manag. 2008, 28, 2171–2176.
7. Ganjian, E.; Khorami, M.; Maghsoudi, A.A. Scrap-tyre-rubber replacement for aggregate and filler in concrete. Constr. Build. Mater. 2009, 23, 1828–1836.
8. Mohammed, B.S.; Azmi, N. Strength reduction factors for structural rubbercrete. Front. Struct. Civ. Eng. 2014, 8, 270–281.
9. El-Khoja, A.; Ashour, A.; Abdalhmid, J.; Dai, X.; Khan, A. Prediction of rubberised concrete strength by using artificial neural networks. Int. J. Struct. Constr. Eng. 2018, 12, 1068–1073.
10. Hadzima-Nyarko, M.; Nyarko, E.K.; Ademović, N.; Miličević, I.; Kalman Šipoš, T. Modelling the influence of waste rubber on compressive strength of concrete by artificial neural networks. Materials 2019, 12, 561.
11. Pradhan, B. Corrosion behavior of steel reinforcement in concrete exposed to composite chloride–sulfate environment. Constr. Build. Mater. 2014, 72, 398–410.
12. Yu, Z.; Chen, Y.; Liu, P.; Wang, W. Accelerated simulation of chloride ingress into concrete under drying–wetting alternation condition chloride environment. Constr. Build. Mater. 2015, 93, 205–213.
13. Andisheh, K.; Scott, A.; Palermo, A.; Clucas, D. Influence of chloride corrosion on the effective mechanical properties of steel reinforcement. Struct. Infrastruct. Eng. 2019, 15, 1036–1048.
14. Lee, H.-S.; Cho, Y.-S. Evaluation of the mechanical properties of steel reinforcement embedded in concrete specimen as a function of the degree of reinforcement corrosion. Int. J. Fract. 2009, 157, 81–88.
15. Song, H.-W.; Shim, H.-B.; Petcherdchoo, A.; Park, S.-K. Service life prediction of repaired concrete structures under chloride environment using finite difference method. Cem. Concr. Compos. 2009, 31, 120–127.
16. Lu, C.; Yuan, S.; Cheng, P.; Liu, R. Mechanical properties of corroded steel bars in pre-cracked concrete suffering from chloride attack. Constr. Build. Mater. 2016, 123, 649–660.
17. Jung, J.-S.; Lee, B.Y.; Lee, K.-S. Experimental study on the structural performance degradation of corrosion-damaged reinforced concrete beams. Adv. Civ. Eng. 2019, 2019, 9562574.
18. Thomas, B.S.; Gupta, R.C.; Kalla, P.; Cseteneyi, L. Strength, abrasion and permeation characteristics of cement concrete containing discarded rubber fine aggregates. Constr. Build. Mater. 2014, 59, 204–212.
19. Su, H.; Yang, J.; Ling, T.-C.; Ghataora, G.S.; Dirar, S. Properties of concrete prepared with waste tyre rubber particles of uniform and varying sizes. J. Clean. Prod. 2015, 91, 288–296.
20. Thomas, B.S.; Gupta, R.C. A comprehensive review on the applications of waste tire rubber in cement concrete. Renew. Sustain. Energy Rev. 2016, 54, 1323–1333.
21. Mao, L.-x.; Hu, Z.; Xia, J.; Feng, G.-l.; Azim, I.; Yang, J.; Liu, Q.-f. Multi-phase modelling of electrochemical rehabilitation for ASR and chloride affected concrete composites. Compos. Struct. 2019, 207, 176–189.
22. Gupta, T.; Siddique, S.; Sharma, R.K.; Chaudhary, S. Behaviour of waste rubber powder and hybrid rubber concrete in aggressive environment. Constr. Build. Mater. 2019, 217, 283–291.
23. Beushausen, H.; Torrent, R.; Alexander, M.G. Performance-based approaches for concrete durability: State of the art and future research needs. Cem. Concr. Res. 2019, 119, 11–20.
24. Dierkens, M.; Godart, B.; Mai-Nhu, J.; Rougeau, P.; Linger, L.; Cussigh, F. In French national project ‘PERFDUB’ on performance-based approach: Interest of old structures analysis for the definition of durability indicators criteria. In Proceedings of the 16th fib Symposium, Concrete Innovations in Materials, Design and Structures, Krakow, Poland, 27–29 May 2019; Fédération de l’Industrie du Béton-FIB: Montrouge, France, 2019; p. 8.
25. Tran, V.Q. Machine learning approach for investigating chloride diffusion coefficient of concrete containing supplementary cementitious materials. Constr. Build. Mater. 2022, 328, 127103.
26. Saeki, T.; Sasaki, K.; Shinada, K. Estimation of chloride diffusion coefficent of concrete using mineral admixtures. J. Adv. Concr. Technol. 2006, 4, 385–394.
27. Jasielec, J.J.; Stec, J.; Szyszkiewicz-Warzecha, K.; Łagosz, A.; Deja, J.; Lewenstam, A.; Filipek, R. Effective and apparent diffusion coefficients of chloride ions and chloride binding kinetics parameters in mortars: Non-stationary diffusion–reaction model and the inverse problem. Materials 2020, 13, 5522.
28. Liu, Q.-f.; Iqbal, M.F.; Yang, J.; Lu, X.-y.; Zhang, P.; Rauf, M. Prediction of chloride diffusivity in concrete using artificial neural network: Modelling and performance evaluation. Constr. Build. Mater. 2021, 268, 121082.
29. Van Noort, R.; Hunger, M.; Spiesz, P. Long-term chloride migration coefficient in slag cement-based concrete and resistivity as an alternative test method. Constr. Build. Mater. 2016, 115, 746–759.
30. Wang, H.-L.; Dai, J.-G.; Sun, X.-Y.; Zhang, X.-L. Time-dependent and stress-dependent chloride diffusivity of concrete subjected to sustained compressive loading. J. Mater. Civ. Eng. 2016, 28, 04016059.
31. Huang, X.-Y.; Wu, K.-Y.; Wang, S.; Lu, T.; Lu, Y.-F.; Deng, W.-C.; Li, H.-M. Compressive Strength Prediction of Rubber Concrete Based on Artificial Neural Network Model with Hybrid Particle Swarm Optimization Algorithm. Materials 2022, 15, 3934.
32. Gupta, T.; Patel, K.; Siddique, S.; Sharma, R.K.; Chaudhary, S. Prediction of mechanical properties of rubberised concrete exposed to elevated temperature using ANN. Measurement 2019, 147, 106870.
33. Zhang, J.; Zhang, M.; Dong, B.; Ma, H. Quantitative evaluation of steel corrosion induced deterioration in rubber concrete by integrating ultrasonic testing, machine learning and mesoscale simulation. Cem. Concr. Compos. 2022, 128, 104426.
34. Huang, G.-B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B 2011, 42, 513–529.
35. Ding, S.; Zhao, H.; Zhang, Y.; Xu, X.; Nie, R. Extreme learning machine: Algorithm, theory and applications. Artif. Intell. Rev. 2015, 44, 103–115.
36. Han, F.; Yao, H.-F.; Ling, Q.-H. An improved evolutionary extreme learning machine based on particle swarm optimization. Neurocomputing 2013, 116, 87–93.
37. Li, C.; Tao, Y.; Ao, W.; Yang, S.; Bai, Y. Improving forecasting accuracy of daily enterprise electricity consumption using a random forest based on ensemble empirical mode decomposition. Energy 2018, 165, 1220–1227.
38. Fan, G.-F.; Yu, M.; Dong, S.-Q.; Yeh, Y.-H.; Hong, W.-C. Forecasting short-term electricity load using hybrid support vector regression with grey catastrophe and random forest modeling. Util. Policy 2021, 73, 101294.
39. Cai, C.; Qian, Q.; Fu, Y. Application of BAS-Elman neural network in prediction of blasting vibration velocity. Procedia Comput. Sci. 2020, 166, 491–495.
40. Liu, B.; Zhao, Y.; Wang, W.; Liu, B. Compaction density evaluation model of sand-gravel dam based on Elman neural network with modified particle swarm optimization. Front. Phys. 2022, 9, 806231.
41. Kang, F.; Liu, J.; Li, J.; Li, S. Concrete dam deformation prediction model for health monitoring based on extreme learning machine. Struct. Control Health Monit. 2017, 24, e1997.
42. Falah, M.W.; Hussein, S.H.; Saad, M.A.; Ali, Z.H.; Tran, T.H.; Ghoniem, R.M.; Ewees, A.A. Compressive Strength Prediction Using Coupled Deep Learning Model with Extreme Gradient Boosting Algorithm: Environmentally Friendly Concrete Incorporating Recycled Aggregate. Complexity 2022, 2022, 5433474.
43. Dong, L.; Shu, W.; Sun, D.; Li, X.; Zhang, L. Pre-alarm system based on real-time monitoring and numerical simulation using internet of things and cloud computing for tailings dam in mines. IEEE Access 2017, 5, 21080–21089.
44. Hai-Bang, L.; Thuy-Anh, N.; Hai-Van Thi, M.; Van Quan, T. Development of deep neural network model to predict the compressive strength of rubber concrete. Constr. Build. Mater. 2021, 301, 124081.
45. Liu, H.; Mi, X.-w.; Li, Y.-f. Wind speed forecasting method based on deep learning strategy using empirical wavelet transform, long short term memory neural network and Elman neural network. Energy Convers. Manag. 2018, 156, 498–514.
46. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
47. Rathore, N.S.; Singh, V. Whale optimisation algorithm-based controller design for reverse osmosis desalination plants. Int. J. Intell. Eng. Inform. 2019, 7, 77–88.
48. Mirjalili, S.; Mirjalili, S.M.; Saremi, S.; Mirjalili, S. Whale optimization algorithm: Theory, literature review, and application in designing photonic crystal filters. In Nature-Inspired Optimizers; Springer: Cham, Switzerland, 2020; pp. 219–238.
49. Qais, M.H.; Hasanien, H.M.; Alghuwainem, S. Enhanced whale optimization algorithm for maximum power point tracking of variable-speed wind generators. Appl. Soft Comput. 2020, 86, 105937.
50. Teng, Z.; Lv, J.; Guo, L.; Yuanyuan, X. An improved hybrid grey wolf optimization algorithm based on Tent mapping. J. Harbin Inst. Technol. 2018, 50, 40–49.
51. Hang, X.; Zhang, D.; Wang, Y.; Song, T.; Fan, Y. Hybrid strategy to improve whale optimization algorithm. Comput. Eng. Des. 2020, 41, 3397–3404.
52. Kong, Z.; Yang, Q.-f.; Zhao, J.; Xiong, J.-j. Adaptive adjustment of weights and search strategies-based whale optimization algorithm. J. Northeast. Univ. 2020, 41, 35.
53. Zhang, W.; Liu, S.; Ren, C. Mixed Strategy Improved Sparrow Search Algorithm. Comput. Eng. Appl. 2021, 57, 74–82.
54. Gupta, T.; Chaudhary, S.; Sharma, R.K. Assessment of mechanical and durability properties of concrete containing waste rubber tire as fine aggregate. Constr. Build. Mater. 2014, 73, 562–574.
55. Noor, N.M.; Yamamoto, D.; Hamada, H.; Sagawa, Y. Study on Chloride Ion Penetration Resistance of Rubberized Concrete Under Steady State Condition. MATEC Web Conf. 2016, 47, 01004.
56. Ding, X.C. Study on Durability of Waste Rubber Cement Mortar; Henan Polytechnic University: Jiaozuo, China, 2018.
57. Amiri, M.; Hatami, F.; Golafshani, E.M. Evaluating the synergic effect of waste rubber powder and recycled concrete aggregate on mechanical properties and durability of concrete. Case Stud. Constr. Mater. 2021, 15, e00639.
58. Han, Q.; Wang, N.; Zhang, J.; Yu, J.; Hou, D.; Dong, B. Experimental and computational study on chloride ion transport and corrosion inhibition mechanism of rubber concrete. Constr. Build. Mater. 2021, 268, 121105.
59. Nadi, S.; Beheshti Nezhad, H.; Sadeghi, A. Experimental study on the durability and mechanical properties of concrete with crumb rubber. J. Build. Pathol. Rehabil. 2022, 7, 17.
60. Smith, G.N. Probability and Statistics in Civil Engineering; Collins Professional and Technical Books: London, UK, 1986; 244p.
61. Dunlop, P.; Smith, S. Estimating key characteristics of the concrete delivery and placement process using linear regression analysis. Civ. Eng. Environ. Syst. 2003, 20, 273–290.
62. Al-Janabi, T.A.; Al-Raweshidy, H.S. Efficient whale optimisation algorithm-based SDN clustering for IoT focused on node density. In Proceedings of the 2017 16th Annual Mediterranean Ad Hoc Networking Workshop (Med-Hoc-Net), Budva, Montenegro, 28–30 June 2017; IEEE: New York, NY, USA, 2017; pp. 1–6.
63. Cai, D.; Ji, X.; Shi, H.; Pan, J. Method for improving piecewise Logistic chaotic map and its performance analysis. J. Nanjing Univ. 2016, 52, 809–815.
64. Zhou, F.-j.; Wang, X.-j.; Zhang, M. Evolutionary programming using mutations based on the t probability distribution. Acta Electonica Sin. 2008, 36, 667.
65. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: A new learning scheme of feedforward neural networks. In Proceedings of the 2004 IEEE international joint conference on neural networks (IEEE Cat. No. 04CH37541), Budapest, Hungary, 25–29 July 2004; IEEE: New York, NY, USA, 2004; pp. 985–990.
66. Kahramanli, H.; Allahverdi, N. Rule extraction from trained adaptive neural networks using artificial immune systems. Expert Syst. Appl. 2009, 36, 1513–1522.
67. Zhang, D.; Wang, Y. Rough neural network based on bottom-up fuzzy rough data analysis. Neural Process. Lett. 2009, 30, 187–211.
68. Ding, S.; Jia, W.; Su, C.; Zhang, L.; Liu, L. Research of neural network algorithm based on factor analysis and cluster analysis. Neural Comput. Appl. 2011, 20, 297–302.
69. Ding, S.; Xu, L.; Su, C.; Jin, F. An optimizing method of RBF neural network based on genetic algorithm. Neural Comput. Appl. 2012, 21, 333–336.
70. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
71. Huang, G.-B.; Babri, H.A. Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions. IEEE Trans. Neural Netw. 1998, 9, 224–229.
72. Huang, G.-B. Learning capability and storage capacity of two-hidden-layer feedforward networks. IEEE Trans. Neural Netw. 2003, 14, 274–281.
73. Liang, N.-Y.; Huang, G.-B.; Saratchandran, P.; Sundararajan, N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 2006, 17, 1411–1423.
74. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
75. Amato, L.; Minozzi, S.; Mitrova, Z.; Parmelli, E.; Saulle, R.; Cruciani, F.; Vecchi, S.; Davoli, M. Systematic review of safeness and therapeutic efficacy of cannabis in patients with multiple sclerosis, neuropathic pain, and in oncological patients treated with chemotherapy. Epidemiol. Prev. 2017, 41, 279–293.
76. Hou, K.; Yang, H.; Ye, Z.; Wang, Y.; Liu, L.; Cui, X. Effectiveness of pharmacist-led anticoagulation management on clinical outcomes: A systematic review and meta-analysis. J. Pharm. Pharm. Sci. 2017, 20, 378–396.
77. Tang, Q.Y.; Zhang, C.X. Data Processing System (DPS) software with experimental design, statistical analysis and data mining developed for use in entomological research. Insect Sci. 2013, 20, 254–260.
78. Gao, A. Research on Prediction Model of Tillage Depth Based on an Improved Random Forest; Changchun University of Technology: Changchun, China, 2022.
79. Elman, J.L. Finding structure in time. Cogn. Sci. 1990, 14, 179–211.
80. Mehr, A.D.; Vaheddoost, B.; Mohammadi, B. ENN-SA: A novel neuro-annealing model for multi-station drought prediction. Comput. Geosci. 2020, 145, 104622.
81. Yolcu, O.C.; Temel, F.A.; Kuleyin, A. New hybrid predictive modeling principles for ammonium adsorption: The combination of Response Surface Methodology with feed-forward and Elman-Recurrent Neural Networks. J. Clean. Prod. 2021, 311, 127688.
82. Cheng, Y.-c.; Qi, W.-M.; Cai, W.-Y. Dynamic properties of Elman and modified Elman neural network. In Proceedings of the International Conference on Machine Learning and Cybernetics, Beijing, China, 4–5 November 2002; IEEE: New York, NY, USA, 2002; pp. 637–640.
83. Menard, S. Coefficients of determination for multiple logistic regression analysis. Am. Stat. 2000, 54, 17–24.
84. Le, L.M.; Ly, H.-B.; Pham, B.T.; Le, V.M.; Pham, T.A.; Nguyen, D.-H.; Tran, X.-T.; Le, T.-T. Hybrid artificial intelligence approaches for predicting buckling damage of steel columns under axial compression. Materials 2019, 12, 1670.
85. Ly, H.-B.; Le, L.M.; Phi, L.V.; Phan, V.-H.; Tran, V.Q.; Pham, B.T.; Le, T.-T.; Derrible, S. Development of an AI model to measure traffic air pollution from multisensor and weather data. Sensors 2019, 19, 4941.
86. Pham, B.T.; Jaafari, A.; Prakash, I.; Bui, D.T. A novel hybrid intelligent model of support vector machines and the MultiBoost ensemble for landslide susceptibility modeling. Bull. Eng. Geol. Environ. 2019, 78, 2865–2886.
87. Taylor, K.E. Summarizing multiple aspects of model performance in a single diagram. J. Geophys. Res. Atmos. 2001, 106, 7183–7192.
88. Iqbal, M.F.; Liu, Q.-f.; Azim, I.; Zhu, X.; Yang, J.; Javed, M.F.; Rauf, M. Prediction of mechanical properties of green concrete incorporating waste foundry sand based on gene expression programming. J. Hazard. Mater. 2020, 384, 121322.
89. Jahed Armaghani, D.; Hajihassani, M.; Sohaei, H.; Tonnizam Mohamad, E.; Marto, A.; Motaghedi, H.; Moghaddam, M.R. Neuro-fuzzy technique to predict air-overpressure induced by blasting. Arab. J. Geosci. 2015, 8, 10937–10950.
90. Hong, F.; Qiao, H.; Wang, P. Predicting the life of BNC-coated reinforced concrete using the Weibull distribution. Emerg. Mater. Res. 2020, 9, 424–434.
91. Ye, W.C. Experimental Study on the Durability of Rubber Concrete; Shenyang University: Shenyang, China, 2013.
Figure 1. Heat map of correlation coefficients for input and output variables.
Figure 1. Heat map of correlation coefficients for input and output variables.
Polymers 15 00308 g001
Figure 2. Histogram of frequency distribution for input and output variables.
Figure 2. Histogram of frequency distribution for input and output variables.
Polymers 15 00308 g002
Figure 3. The flow chart of MWOA.
Figure 3. The flow chart of MWOA.
Polymers 15 00308 g003
Figure 4. Schematic diagram of the ELM structure.
Figure 4. Schematic diagram of the ELM structure.
Polymers 15 00308 g004
Figure 5. The structure of RF model.
Figure 5. The structure of RF model.
Polymers 15 00308 g005
Figure 6. The structure of ELMAN.
Figure 6. The structure of ELMAN.
Polymers 15 00308 g006
Figure 7. Flow chart of the research process.
Figure 7. Flow chart of the research process.
Polymers 15 00308 g007
Figure 8. Taylor diagrams for the Training (a) and Testing (b) sets of the three ELM models.
Figure 8. Taylor diagrams for the Training (a) and Testing (b) sets of the three ELM models.
Polymers 15 00308 g008
Figure 9. Taylor diagrams for the Training (a) and Testing (b) sets of the three RF models.
Figure 9. Taylor diagrams for the Training (a) and Testing (b) sets of the three RF models.
Polymers 15 00308 g009
Figure 10. Taylor diagrams for the Training (a) and Testing (b) sets of the three ELMAN models.
Figure 10. Taylor diagrams for the Training (a) and Testing (b) sets of the three ELMAN models.
Polymers 15 00308 g010
Figure 11. Radar plots of metrics for the Training (a) and Testing (b) sets for MWOA-ELM, MWOA-RF, and MWOA-ELMAN models.
Figure 11. Radar plots of metrics for the Training (a) and Testing (b) sets for MWOA-ELM, MWOA-RF, and MWOA-ELMAN models.
Polymers 15 00308 g011
Figure 12. Taylor diagrams for the Training (a) and Testing (b) sets of the MWOA-ELM, MWOA-RF and MWOA-ELMAN models.
Figure 12. Taylor diagrams for the Training (a) and Testing (b) sets of the MWOA-ELM, MWOA-RF and MWOA-ELMAN models.
Polymers 15 00308 g012
Figure 13. Box line plots of the Training and Testing sets for MWOA-ELM, MWOA-RF and MWOA-ELMAN models.
Figure 13. Box line plots of the Training and Testing sets for MWOA-ELM, MWOA-RF and MWOA-ELMAN models.
Polymers 15 00308 g013
Figure 14. Sensitivity analysis of three models and experimental models.
Figure 14. Sensitivity analysis of three models and experimental models.
Polymers 15 00308 g014
Figure 15. Regression results for the Training (a) and Testing (b) sets of the MWOA-ELM model.
Figure 15. Regression results for the Training (a) and Testing (b) sets of the MWOA-ELM model.
Polymers 15 00308 g015
Figure 16. Comparison and error results between the Training (a) and Testing (b) sets of the MWOA-ELM model.
Figure 17. Prediction results of the MWOA-ELM model for the literature [54] (Left) and [55] (Right).
Figure 18. Prediction results of the MWOA-ELM model for the literature [57] (Left), [59] (Right, Up), and [58] (Right, down).
Figure 19. Prediction results of the MWOA-ELM model for the literature [56].
Figure 20. Prediction results of the MWOA-ELM model for the literature [22].
Figure 21. Regression analysis result of the MRL model.
Figure 22. Regression analysis result of the MWOA-ELM and Mathematical model.
Table 1. Statistical analysis of input and output variables.
Variable                | Max     | Min    | Average | Median  | Stdd   | Stde
Method                  | 2.00    | 1.00   | 1.6932  | 2.00    | 0.4612 | 0.4638
C (kg/m3)               | 457.00  | 100.00 | 350.69  | 387.50  | 123.16 | 123.87
WR (kg/m3)              | 5.82    | 0.00   | 0.62    | 0.00    | 1.34   | 1.35
W (kg/m3)               | 272.00  | 35.00  | 157.32  | 161.90  | 65.19  | 65.57
W/C                     | 0.60    | 0.35   | 0.44    | 0.45    | 0.076  | 0.078
FA (kg/m3)              | 1360.00 | 174.00 | 862.76  | 1005.00 | 365.72 | 367.82
CA (kg/m3)              | 1124.00 | 0.00   | 504.95  | 607.00  | 410.24 | 412.59
Rubber Size             | 5.00    | 0.00   | 1.4773  | 1.00    | 1.215  | 1.222
Rubber Content (kg/m3)  | 138.40  | 0.00   | 35.89   | 22.00   | 37.55  | 37.77
DCI (10−12 m2)          | 18.55   | 1.07   | 8.39    | 9.74    | 3.88   | 3.90
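The summary statistics in Table 1 can be reproduced with a few lines of NumPy. In this sketch, Stdd is taken as the population standard deviation and Stde as the sample standard deviation; that mapping is an assumption, inferred from Stde being consistently slightly larger than Stdd in the table.

```python
import numpy as np

def summarize(x):
    """Table 1 statistics for one variable: Max, Min, Average, Median,
    and (assumed) population std 'Stdd' and sample std 'Stde'."""
    x = np.asarray(x, dtype=float)
    return {
        "Max": x.max(),
        "Min": x.min(),
        "Average": x.mean(),
        "Median": np.median(x),
        "Stdd": x.std(ddof=0),  # population standard deviation
        "Stde": x.std(ddof=1),  # sample standard deviation
    }
```

With roughly 88 samples, the ratio Stde/Stdd = sqrt(n/(n−1)) ≈ 1.006 matches the pairs of values reported above.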
Table 2. Parameter settings for the MWOA-ELM model.
Parameter            | Setting
Popsize              | 30
Maxgen               | 100
d1, d2               | 1 × 10−4
b                    | 1
Hiddennum layer      | 28
Activation function  | Sigmoid
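For readers unfamiliar with ELM, the following is a minimal sketch of the standard formulation implied by Table 2: random (untrained) input weights, 28 sigmoid hidden nodes, and output weights solved in closed form by least squares. The function and variable names are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, y, hidden=28):
    """Fit an extreme learning machine: the hidden layer is random
    and fixed; only the output weights beta are solved, via the
    Moore-Penrose pseudoinverse of the hidden-layer output matrix."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, hidden))  # random input weights
    b = rng.standard_normal(hidden)                # random hidden biases
    H = sigmoid(X @ W + b)                         # hidden-layer outputs
    beta = np.linalg.pinv(H) @ y                   # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta
```

Because no iterative weight training is involved, a single least-squares solve fixes the model, which is why ELM is fast but sensitive to the random hidden layer that MWOA is used to optimize.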
Table 3. Average of evaluation indicators for three ELM models.
Model     | Set   | R2     | RMSE   | MAE    | MAPE (%)
ELM       | Train | 0.9602 | 0.7691 | 0.6233 | 0.1132
ELM       | Test  | 0.6458 | 2.6232 | 1.4539 | 0.3509
WOA-ELM   | Train | 0.9848 | 0.4518 | 0.3475 | 0.0619
WOA-ELM   | Test  | 0.9390 | 0.8810 | 0.6584 | 0.1155
MWOA-ELM  | Train | 0.9927 | 0.3287 | 0.2181 | 0.0353
MWOA-ELM  | Test  | 0.9971 | 0.1911 | 0.1356 | 0.0212
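The four indicators reported in Tables 3–10 can be computed as below. One caveat: the MAPE values in the tables appear to be fractions rather than percentages (e.g., 0.1132 ≈ 11.32%), so this sketch returns MAPE as a fraction.

```python
import numpy as np

def metrics(y_true, y_pred):
    """R2, RMSE, MAE and MAPE as used in the evaluation tables.
    MAPE is returned as a fraction, matching the magnitudes of the
    tabulated values despite the '%' in the table header."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": float(np.sqrt(np.mean(resid ** 2))),
        "MAE": float(np.mean(np.abs(resid))),
        "MAPE": float(np.mean(np.abs(resid / y_true))),
    }
```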
Table 4. Parameter settings for the MWOA-RF model.
Parameter                        | Setting
Popsize                          | 30
Maxgen                           | 100
Forest size                      | 24
Number of leaves                 | 8
Number of cross-validation folds | 5
d1, d2                           | 1 × 10−4
b                                | 1
Table 5. Average of evaluation indicators for three RF models.
Model    | Set   | R2     | RMSE   | MAE    | MAPE (%)
RF       | Train | 0.653  | 2.2463 | 1.2972 | 0.2766
RF       | Test  | 0.5768 | 2.5709 | 1.7408 | 0.4015
WOA-RF   | Train | 0.9661 | 0.7709 | 0.5698 | 0.1009
WOA-RF   | Test  | 0.8776 | 1.4409 | 1.0027 | 0.1941
MWOA-RF  | Train | 0.9870 | 0.4520 | 0.3152 | 0.0495
MWOA-RF  | Test  | 0.9341 | 1.0164 | 0.6553 | 0.0962
Table 6. Parameter settings for the MWOA-ELMAN model.
Parameter                        | Setting
Popsize                          | 30
Maxgen                           | 100
Hiddennum_best                   | 13
Number of cross-validation folds | 5
Activation function              | tansig, purelin
Training function                | trainlm
d1, d2                           | 1 × 10−4
b                                | 1
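A minimal sketch of a single Elman forward pass consistent with the Table 6 settings (tansig hidden activation, purelin output), assuming the textbook formulation in which a context layer feeds the previous hidden state back into the hidden layer. Weight shapes and names are illustrative; the Levenberg-Marquardt training (trainlm) is not sketched here.

```python
import numpy as np

def tansig(z):   # MATLAB's tansig is the hyperbolic tangent
    return np.tanh(z)

def purelin(z):  # MATLAB's purelin is the identity (linear output)
    return z

def elman_forward(X_seq, Wx, Wc, bh, Wo, bo):
    """Forward pass of an Elman network over a sequence of inputs.
    The context layer stores the previous hidden state and is fed
    back into the hidden layer at the next step."""
    context = np.zeros(Wc.shape[0])
    outputs = []
    for x in X_seq:
        hidden = tansig(Wx @ x + Wc @ context + bh)
        outputs.append(purelin(Wo @ hidden + bo))
        context = hidden  # recurrence: remember this step's hidden state
    return np.array(outputs)
```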
Table 7. Average of evaluation indicators for three ELMAN models.
Model       | Set   | R2     | RMSE   | MAE    | MAPE (%)
ELMAN       | Train | 0.8275 | 1.6528 | 1.0820 | 0.1938
ELMAN       | Test  | 0.7108 | 2.1198 | 1.3609 | 0.2590
WOA-ELMAN   | Train | 0.9783 | 0.5704 | 0.3207 | 0.0523
WOA-ELMAN   | Test  | 0.9390 | 0.8810 | 0.6584 | 0.1155
MWOA-ELMAN  | Train | 0.9883 | 0.4140 | 0.2261 | 0.0373
MWOA-ELMAN  | Test  | 0.9698 | 0.6870 | 0.4867 | 0.0947
Table 8. Average of evaluation indicators for MWOA-ELM, MWOA-RF, and MWOA-ELMAN.
Model       | Set   | R2     | RMSE   | MAE    | MAPE (%)
MWOA-ELM    | Train | 0.9927 | 0.3287 | 0.2281 | 0.0353
MWOA-ELM    | Test  | 0.9971 | 0.1911 | 0.1356 | 0.0212
MWOA-RF     | Train | 0.9870 | 0.4520 | 0.3152 | 0.0495
MWOA-RF     | Test  | 0.9341 | 1.0163 | 0.6553 | 0.0962
MWOA-ELMAN  | Train | 0.9883 | 0.4141 | 0.2261 | 0.0373
MWOA-ELMAN  | Test  | 0.9698 | 0.6870 | 0.4867 | 0.0947
Table 9. Evaluation indicators for the MWOA-ELM and MRL models.
Model     | R2     | RMSE   | MAE    | MAPE (%)
MWOA-ELM  | 0.9937 | 0.3064 | 0.2096 | 0.0325
MRL       | 0.8715 | 1.3905 | 1.0880 | 0.2163
Table 10. Evaluation indicators for the MWOA-ELM model and the mathematical model.
Model     | R2     | RMSE   | MAE    | MAPE (%)
MWOA-ELM  | 0.9991 | 0.0590 | 0.0277 | 0.0109
Ye [91]   | 0.8503 | 2.5326 | 2.2127 | 0.5486
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Huang, X.; Wang, S.; Lu, T.; Li, H.; Wu, K.; Deng, W. Chloride Permeability Coefficient Prediction of Rubber Concrete Based on the Improved Machine Learning Technical: Modelling and Performance Evaluation. Polymers 2023, 15, 308. https://doi.org/10.3390/polym15020308
