Prediction model of low cycle fatigue life of 304 stainless steel based on genetic algorithm optimized BP neural network

The low cycle fatigue life of 304 stainless steel is an essential basis for safety assessment. The relationship between fatigue life and its influencing factors is usually complex and nonlinear, and is difficult to capture with traditional fatigue life models. For this reason, a fatigue life prediction method for 304 stainless steel combining the BP algorithm with a genetic algorithm (GA) is proposed. Based on a large body of existing test data, the fatigue life of 304 stainless steel is predicted using both BP and GA-BP learning models. The results show that the GA-BP prediction model performs better: its correlation coefficient R reaches 0.98158, and its predictions lie within the 2-fold error band and closer to the ideal line.


Introduction
304 stainless steel is one of the most widely used chromium-nickel stainless steels, found everywhere from the cups and lunch boxes of daily life to pressure vessels and aircraft turbine disks in industrial production [1][2][3][4]. This shows that 304 stainless steel occupies an indispensable position in daily life and industrial production. According to statistics [4], 80% of the failures of 304 stainless steel stem from fatigue damage. Among fatigue damage modes, low cycle fatigue involves a relatively short fatigue life, so predicting low cycle fatigue life is of great significance. At present, the most commonly used approach for predicting the low cycle fatigue life of 304 stainless steel is the traditional empirical-formula method [5][6][7][8]. Traditional fatigue life prediction has severe limitations, such as the variety of empirical formulas, low prediction accuracy, high experimental cost, and long prediction time; the development of artificial neural networks provides a new possibility for solving these problems [9][10][11][12].
The artificial neural network is an artificial information processing system designed to imitate the structure and function of the human brain. It has strong self-learning, self-adapting, and self-organizing capabilities. It is very suitable for dealing with complex nonlinear systems. There are applications in the prediction of fatigue life of some materials [13][14][15][16][17][18][19][20][21]. However, there are few research reports on predicting low-cycle fatigue properties of 304 stainless steel by artificial neural network technology.
In this paper, the effects of different factors such as residual stress, strain amplitude, and stress amplitude on the low cycle fatigue life of 304 stainless steel were summarized from the available data. Using BP neural network technology, a BP neural network model was established to predict the low cycle fatigue life of 304 stainless steel. In addition, to address the poor global search ability, slow convergence, and tendency of the BP neural network to fall into local optima, a genetic algorithm was used to optimize the initial connection weights and thresholds of the classical BP neural network, which not only improves the prediction accuracy of the model but also provides a reference for life prediction.
2. Effects of different factors on low cycle fatigue life of 304 stainless steel

2.1. Residual stress
In different operational situations, it is necessary to improve the high-stress low cycle life of the material to meet the expected operating requirements, and surface treatment can effectively improve the fatigue life of the material [22,23]. Different surface treatments, such as wire brush hammering [24,25], deep rolling [26][27][28], shot peening [29,30], etc., are used to increase the residual compressive stress near the surface of the material, inhibiting the initiation and growth of surface fatigue cracks.
Figure 1 [26] shows the surface residual stress distribution of AISI 304 austenitic stainless steel after deep rolling. It can be seen from figure 1 that a large amount of residual compressive stress is generated near the surface, and the tensile and compressive stresses of the material gradually approach balance as the depth from the surface increases. Figure 2 [26] shows a detail of the deep rolling tool. Because the deep rolling tool moves in the longitudinal direction while the specimen moves in the vertical direction, the residual stresses in the transverse and longitudinal directions of the material differ. Numerous studies have shown that the fatigue life of mechanical components under surface loading, especially the mechanism of fatigue crack nucleation and first-stage growth, is mainly controlled by their surface properties [31][32][33][34][35].

Strain amplitude
Figures 3 [36] and 4 [37] show the strain-life curves. It can be seen from figure 3 that the number of cycles to failure decreases as the strain amplitude increases. The same trend is observed in figure 4 under asymmetric cyclic stress loading, while for the same strain amplitude the fatigue life under asymmetric stress-controlled cycling is significantly lower than under symmetric loading. This indicates that the ratcheting deformation generated under asymmetric stress cycling causes additional damage, reducing the fatigue life of the material.

Stress amplitude
The fatigue life of the materials tends to decrease as the stress amplitude increases, but SUS 304 does not decrease as sharply as SUS 304N.
3. GA-BP neural network model

3.1. BP neural network
The BP neural network is a multilayer feed-forward neural network proposed by a group of scientists led by Rumelhart and McClelland in 1986 [39]. It contains an input layer, a hidden layer, and an output layer, and there can be one or more hidden layers. It offers powerful learning, association, and fault-tolerance capabilities, together with highly nonlinear function mapping.
For a three-layer BP neural network, let the input signal be $X_i$, the output of hidden-layer node $j$ be $Y_j$, the output of output-layer node $k$ be $Z_k$, the connection weight from input node $i$ to hidden node $j$ be $W_{ij}$, the connection weight from hidden node $j$ to output node $k$ be $W_{jk}$, the threshold of hidden node $j$ be $\theta_j$, and the threshold of output node $k$ be $\theta_k$. The outputs of the hidden-layer and output-layer nodes are then

$$Y_j = f\left(\sum_{i=1}^{n_1} W_{ij} X_i - \theta_j\right), \qquad Z_k = f\left(\sum_{j=1}^{n_2} W_{jk} Y_j - \theta_k\right)$$

where $n_1$ is the number of nodes in the input layer, $n_2$ is the number of nodes in the hidden layer, and $f$ is the activation function, usually a linear function, an S-shaped (sigmoid) function, or a hyperbolic-tangent sigmoid function.
The initial weights and thresholds of the network are given randomly. These are continuously adjusted and iterated according to certain algorithms during the training process to minimize the output error of the network.
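The forward pass just described can be sketched as follows. This is a minimal illustration in Python/NumPy (the paper's actual implementation is in MATLAB); the sigmoid hidden-layer activation, linear output, and random weight values here are assumptions for demonstration only:

```python
import numpy as np

def forward(x, W_ij, theta_j, W_jk, theta_k):
    """Forward pass of a three-layer BP network:
    Y_j = f(sum_i W_ij * x_i - theta_j)   (hidden layer)
    Z_k =   sum_j W_jk * Y_j - theta_k    (linear output layer)
    """
    f = lambda a: 1.0 / (1.0 + np.exp(-a))   # S-shaped (sigmoid) activation
    y = f(W_ij @ x - theta_j)                # hidden-layer outputs Y_j
    z = W_jk @ y - theta_k                   # output-layer outputs Z_k
    return z

rng = np.random.default_rng(0)
# 3 inputs, 5 hidden nodes, 1 output; initial weights/thresholds given randomly,
# as in the text, before training adjusts them
W_ij = rng.standard_normal((5, 3))
theta_j = rng.standard_normal(5)
W_jk = rng.standard_normal((1, 5))
theta_k = rng.standard_normal(1)
z = forward(rng.standard_normal(3), W_ij, theta_j, W_jk, theta_k)
```

Training then iteratively adjusts `W_ij`, `theta_j`, `W_jk`, and `theta_k` to minimize the output error.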

BP neural network training algorithm
The BP neural network is trained so that, given the system's inputs, the trained network can reproduce the system's outputs. From the existing 500 sets of input-output data [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][40][41][42][43][44][45][46][47][48][49][50][51][52], 400 sets are selected as training data and the remaining 100 sets are used as test data to verify the fitting ability of the network. The training function is the fast-converging L-M (Levenberg-Marquardt) optimization algorithm trainlm, with the number of training epochs set to 100, the training accuracy to 0.00001, and the learning rate to 0.1. The sample data are usually normalized before the network is trained:

$$X_n = \frac{2(X - X_{\min})}{X_{\max} - X_{\min}} - 1$$

where $X_n$ denotes the normalized element, $X_n \in [-1, +1]$, and $X$ denotes an element of the input data matrix.
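The normalization step can be sketched in Python/NumPy (MATLAB's mapminmax performs the same mapping); the sample values below are hypothetical, not the paper's data:

```python
import numpy as np

def normalize(X):
    """Map each column of the sample matrix X linearly into [-1, +1]:
    X_n = 2*(X - X_min)/(X_max - X_min) - 1."""
    X_min = X.min(axis=0)
    X_max = X.max(axis=0)
    return 2.0 * (X - X_min) / (X_max - X_min) - 1.0

# hypothetical samples: residual stress, strain amplitude, stress amplitude
X = np.array([[100.0, 0.004, 300.0],
              [200.0, 0.008, 350.0],
              [300.0, 0.012, 400.0]])
Xn = normalize(X)   # every column now spans exactly [-1, +1]
```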

BP neural network structure determination
According to Kolmogorov's theorem, a 3-layer BP neural network (a BP network with only a single hidden layer is called a 3-layer BP network) can approximate any nonlinear continuous function to arbitrary accuracy. Therefore, this paper chooses a three-layer BP network. The numbers of input and output neurons are determined by the actual problem, and too few or too many hidden-layer neurons degrade network performance. The hidden-layer size is generally estimated with the empirical formula

$$l = \sqrt{m + t} + a$$

where $l$ is the number of hidden-layer units, $m$ is the number of input units, $t$ is the number of output units, and $a$ is a constant, $a \in [1, 10]$. The corresponding networks are then constructed and trained in order to select the optimal one.
In the low cycle fatigue of 304 stainless steel, many factors affect fatigue life. This paper selects residual stress, strain amplitude, and stress amplitude as the network input variables and fatigue life as the network output variable, establishing a neural network model with a 3-5-1 structure, as shown in figure 7.
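The empirical sizing rule above can be checked numerically. A small sketch, assuming the formula $l = \sqrt{m+t} + a$ with the paper's 3 inputs and 1 output:

```python
import math

def hidden_sizes(m, t):
    """Candidate hidden-layer sizes l = sqrt(m + t) + a, for a in [1, 10]."""
    base = math.sqrt(m + t)
    return [round(base) + a for a in range(1, 11)]

# 3 inputs (residual stress, strain amplitude, stress amplitude), 1 output (life):
# sqrt(3 + 1) = 2, so the candidates are 3 through 12 hidden nodes
candidates = hidden_sizes(3, 1)
```

The chosen 3-5-1 structure (5 hidden nodes) falls inside this candidate range; each candidate network would be trained and the best-performing one kept.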

Genetic algorithm optimization process
The genetic algorithm was proposed in the 1960s by Professor John Holland of the University of Michigan and is an optimization method modeled on natural adaptation [53]. It is a parallel stochastic search technique that simulates nature's genetic mechanisms and the rules of biological evolution, with advantages such as strong global search capability. The genetic algorithm process for optimizing a BP neural network includes population initialization, the fitness function, the selection operation, the crossover operation, and the mutation operation [54]; the algorithm flow is shown in figure 8. This paper applies the genetic algorithm to improve the neural network's predictions by optimizing the parameters (interlayer weights and node thresholds) of the prediction function. This is a commonly used method for optimizing ordinary BP neural networks, and the resulting predictions are generally better than those of unoptimized BP networks.
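The GA steps listed above (initialization, fitness, selection, crossover, mutation) can be sketched as a short Python loop. This is an illustration only, not the paper's MATLAB program: the `fitness` function here is a stand-in (in GA-BP it would be derived from the network's training error for a given weight/threshold vector), and the operator details (tournament selection, one-point crossover) are common choices assumed for the sketch:

```python
import random

def fitness(ind):
    # stand-in objective: genes closer to 0.5 score higher (higher is better);
    # in GA-BP this would be e.g. the negative training error of the network
    return -sum((g - 0.5) ** 2 for g in ind)

def select(pop):
    # tournament selection: keep the fitter of two random individuals
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2, pc=0.3):
    # one-point crossover with probability pc
    if random.random() < pc:
        cut = random.randrange(1, len(p1))
        return p1[:cut] + p2[cut:]
    return p1[:]

def mutate(ind, pm=0.1):
    # resample each gene with probability pm
    return [random.uniform(0, 1) if random.random() < pm else g for g in ind]

random.seed(0)
# 26 genes = all weights and thresholds of a 3-5-1 net (3*5 + 5*1 + 5 + 1)
n_genes, pop_size, n_iter = 26, 30, 100
pop = [[random.uniform(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
for _ in range(n_iter):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(pop_size)]
best = max(pop, key=fitness)   # decoded into network weights and thresholds
```

The best individual is then unpacked into the BP network's initial weights and thresholds before trainlm fine-tunes them.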

GA optimized BP network
Since the BP neural network learns by steepest gradient descent and its initial weights and thresholds are randomly generated, it easily falls into a local optimum during training, which makes the prediction error large and the generalization ability of the model weak [55]. Therefore, this paper uses the genetic algorithm to optimize the BP neural network.
Using the available experimental data, the GA-optimized BP neural network model was built in MATLAB 2018b. The trainlm function of the L-M optimization algorithm was used as the training function of the BP network, the transfer functions of the hidden and output layers were tansig and purelin, respectively, and the training goal was 0.00001. In the GA, the population size was 30, the number of iterations was 100, and the crossover and mutation probabilities were set to 0.3 and 0.1, respectively. Following the genetic algorithm process described above, a MATLAB program was written to optimize the weights and thresholds of the BP neural network; the average fitness of the population and the best individual fitness curve during evolution are shown in figure 9. Each individual of the GA-optimized population contains all the weights and thresholds of the neural network. With the network structure known, a neural network with determined structure, weights, and thresholds can be constructed and used for simulation and prediction.

Experimental results and analysis
The BP neural network receives the optimized weights and thresholds. The 400 data sets are selected as training data to train the optimized network, and the remaining 100 sets are used as test data. The model reads the training data for learning, and the learning process ends after 100 iterations, when the training error tends to its minimum or falls below the specified threshold. Figure 10 shows the training convergence process. The neural network is validated using the test sample data, and the results are shown in figure 11.
It can be seen from figure 11 that the predicted fatigue life from the neural network follows almost the same trend as the expected value [9][10][11][12][13][14][15][16][17][18][19][20][21], which verifies the accuracy of the prediction model. To see the optimization effect of the genetic algorithm more clearly, this paper compares the prediction results of the two models. The correlation coefficients R of the prediction results are shown in figures 12(a)-(d), which show that the optimized model works better. The correlation coefficient R of the predictions of the GA-BP model is 0.98158 and that of the test sample is 0.98163, both close to 1, indicating that the model can accurately predict low cycle fatigue life; in contrast, the BP neural network model gives R = 0.86327 for the predictions and R = 0.85367 for the test sample, with a larger prediction error. Figure 13 shows the relationship between the experimental results and the predictions of the BP and GA-BP models on the test samples. Most of the data predicted by the BP model lie within the 2-fold error band [19][20][21], with 5 groups of predictions falling outside it, whereas all predictions of the GA-BP model lie within the 2-fold error band and closer to the ideal line. This analysis shows that the genetic algorithm effectively optimizes the BP neural network and can meet the needs of engineering applications. As the sample set grows, the prediction accuracy of the network can be further improved.
From figures 12 and 13, it is clear that the simulation effect of the GA-BP network model is much better than that of the BP network. The main reason is that the BP neural network training algorithm randomly selects the initial values of the network, so it is easy to fall into local minima. In contrast, the GA-BP network first uses the genetic algorithm to determine the better initial values of the network in the global range, which largely eliminates the possibility of local minima and obtains a better network performance, resulting in a network with strong generalization ability.
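The two evaluation criteria used above, the correlation coefficient R and the 2-fold error band, can be computed as follows. The life values in this sketch are hypothetical, not the paper's data:

```python
import numpy as np

def within_2x(pred, actual):
    """Fraction of predictions inside the 2-fold error band,
    i.e. actual/2 <= predicted <= 2*actual."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    ok = (pred >= actual / 2.0) & (pred <= 2.0 * actual)
    return ok.mean()

def corr_R(pred, actual):
    """Pearson correlation coefficient R between predicted and actual lives."""
    return np.corrcoef(pred, actual)[0, 1]

# hypothetical fatigue lives in cycles (experimental vs. predicted)
actual = [1.0e3, 5.0e3, 2.0e4, 8.0e4]
pred   = [1.2e3, 4.0e3, 2.5e4, 7.0e4]
frac = within_2x(pred, actual)   # 1.0 -> all points inside the band
R = corr_R(pred, actual)         # close to 1 -> good agreement
```

An R close to 1 together with all points inside the 2-fold band is exactly the situation figures 12 and 13 report for the GA-BP model.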

Conclusion
(1) This paper analyzes and summarizes three factors influencing the low cycle fatigue life of 304 stainless steel: residual stress, strain amplitude, and stress amplitude. Based on the existing test data, a BP neural network prediction model of low cycle fatigue life was established.
(2) Due to the randomness of its initial weights and thresholds and the limitations of its algorithm, the traditional BP neural network tends to fall into local optima, and its generalization ability is weak. A genetic algorithm, with its 'evolutionary' character, is used to optimize the initial weights and thresholds; combining the two exploits the advantages of both algorithms and improves the accuracy of the model.
(3) This paper used 400 sets of existing experimental data to establish learning samples. To test the accuracy of the model, the remaining 100 sets of data were used as test samples that took no part in training.
Comparing the prediction results of the two models (BP and GA-BP) shows that the optimized GA-BP model predicts better: the correlation coefficient R (0.98158) of its predictions is closer to 1, and the predicted data lie within the 2-fold error band and closer to the ideal line. The results show that the genetic algorithm is effective for optimizing the BP neural network.
(4) In future research, the sample set can be further expanded to improve the model's learning from the samples, so that a more complete low cycle fatigue life prediction model of 304 stainless steel can be trained.

Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).